The Turing machine model was the first successful step toward establishing a model of computation that enables the design of efficient machines and algorithms. However, this model deals with countable sets and is not adequate for understanding the full power of algorithms that use the continuum concept in an essential way, such as interior point methods. Various lower-bound or inapproximability results in the Turing model do not show any "intrinsic" limitations of computing; they only represent limitations of a particular framework. A careful study of the work of early pioneers—Turing, von Neumann, and Gödel—shows that they were all aware of these limitations. A broader view of the theory of computing is necessary to understand new algorithms and the design of next-generation computers. Continuum-based algorithms have applications in many areas of engineering, optimization, and computational physics, as well as in areas of computer science such as artificial intelligence. Our research program in these algorithms has multiple components: new mathematical and algorithmic concepts, a software framework aimed at providing an "actionable" knowledge representation system for their implementation, theory, and computational experiments. As an interim step, continuum computing also retains some of its advantages when simulated on current machines.
Narendra Karmarkar received his BTech in Electrical Engineering from IIT Bombay in 1978, his MS from the California Institute of Technology, and his PhD in Computer Science from UC Berkeley. He is the inventor of a polynomial-time algorithm for linear programming, known as the interior point method, which is a cornerstone of the field of linear programming. He published this result in 1984 while working at Bell Laboratories in New Jersey. Narendra was a Professor at TIFR Mumbai. He has received numerous awards, including the Paris Kanellakis Award in 2000 from the ACM, and the Fulkerson Prize in 1988 from the AMS and the Mathematical Programming Society. He is also a Distinguished Alumnus of UC Berkeley (1993) and IIT Bombay (1996). He is currently working on a new architecture for supercomputing; some of these ideas have been published in IEEE journals and at the FAB5 conference organized by the MIT Center for Bits and Atoms.