Computational Complexity

How is it possible for a biological system to perform activities such as language comprehension, LEARNING, ANALOGY, PLANNING, or visual interpretation? The COMPUTATIONAL THEORY OF MIND suggests that it is possible in the same way that it is possible for an electronic computer to sort a list of numbers, simulate a weather system, or control an elevator: the brain, it is claimed, is an organ capable of certain forms of COMPUTATION, and mental activities such as those listed above are computational ones. But to account for the physical or biological realizability of a mental activity, it is not enough to say that it is computational; it must also be the sort of task that can be performed by the brain in a plausible amount of time.

Computational complexity is the part of computer science that studies the resource requirements, in time and memory, of various computational tasks (Papadimitriou 1994). Typically, these requirements are formulated as functions of the size of the input to be processed. The central tenet of computability theory (one form of the CHURCH-TURING THESIS) is that any computational task that can be performed by any physical system whatsoever can be performed by a very simple device called a Turing machine. The central tenet of (sequential) complexity theory, then, is that any computational task that can be performed by any physical system in F(n) steps, where n is the size of the input, can be performed by a Turing machine in G(n) steps, where F and G differ by at most a polynomial. A consequence is that any task requiring exponential time on a Turing machine cannot be computed in polynomial time by anything physical whatsoever. Because exponentials grow so rapidly, such a task would be physically infeasible except for very small inputs.
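
To see concretely how punishing exponential growth is, suppose a machine performs 10^15 elementary operations per second (an illustrative figure, chosen to be generous). An n^3-step task on an input of size 200 finishes in well under a microsecond; a 2^n-step task on the same input would run for more than 10^37 years. A small Python sketch of the arithmetic:

    # A numeric sketch (not from the text) of polynomial versus
    # exponential scaling. The figure of 10**15 operations per second
    # is an illustrative assumption about a very fast machine.

    OPS_PER_SECOND = 10 ** 15
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    for n in (10, 50, 100, 200):
        poly = n ** 3 / OPS_PER_SECOND   # polynomial: n^3 steps
        expo = 2 ** n / OPS_PER_SECOND   # exponential: 2^n steps
        print(f"n = {n:3d}: n^3 takes {poly:.1e} s, "
              f"2^n takes {expo / SECONDS_PER_YEAR:.1e} years")

At n = 50 the exponential task already takes about a second on this hypothetical machine; at n = 100 it takes tens of millions of years.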

How might this argument be relevant to cognitive science? Consider, for example, a simple form of visual interpretation. Suppose it is hypothesized that part of how vision works is that the brain determines some property P of the scene, given a visual grid provided by the retina as input. Let us further suppose that the task is indeed computable, but that an analysis shows a Turing machine would require an exponential number of steps to perform it. This means that to compute P quickly enough on all but very small grids, the brain would have to perform an unreasonably large number of elementary operations per second. The hypothesis thus becomes untenable: it fails to explain how the brain could perform the visual task within reasonable time bounds.
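
As a toy sketch of what such an exponential-time procedure might look like (the property P and the consistency test here are hypothetical, not drawn from the vision literature), consider brute-force search over all 2^n labelings of an n-cell grid. Even a 6x6 grid yields about 7 x 10^10 candidate labelings, so meeting, say, a 100-millisecond perceptual deadline would already demand on the order of 10^12 elementary operations per second for this one subtask:

    from itertools import product

    def has_property_P(n_cells, consistent):
        """Brute-force test of a hypothetical grid property P: does
        some 0/1 labeling of the cells pass the (hypothetical)
        consistency check? Worst case: all 2**n_cells labelings."""
        for labeling in product((0, 1), repeat=n_cells):
            if consistent(labeling):
                return True
        return False

    # Demo with a made-up check: "at least half the cells labeled 1."
    print(has_property_P(4, lambda lab: sum(lab) * 2 >= len(lab)))  # True

    # A 6x6 grid: 2**36 ~ 6.9e10 labelings in the worst case.
    print(f"worst-case labelings for a 6x6 grid: {2 ** 36:.1e}")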

Unfortunately, this putative example is highly idealized. For one thing, computational complexity is concerned with how the demand for resources scales as a function of the size of the input. If the required input never gets very large (as with individual sentences of English, say, with a few notable exceptions), complexity theory has little to contribute. For another, to say that the resource demands scale exponentially is to say that there exist inputs that require exponential effort. These inputs could turn out to be extremely rare in real life, and the vast majority of inputs may very well be unproblematic from a resource standpoint. Thus the actual tractability of a task depends crucially on the range of inputs that need to be processed.

But these considerations do not undermine the importance of complexity. To return to the visual interpretation example, we clearly ought not to be satisfied with a theory that claims that the brain computes P, but that in certain extremely rare cases, it could be busy for hours doing so. Either such cases cannot occur at all for some reason, or the brain is able to deal with them: it gives up, it uses heuristics, it makes do. Either way, what the brain is actually doing is no longer computing P, but some close approximation to P that needs to be shown adequate for vision, and whose complexity in turn ought to be considered.

A further complication is that it has proved extremely difficult to establish that a computational task requires an exponential number of steps. What has emerged instead is the theory of NP-completeness (Garey and Johnson 1979). The theory starts with a specific computational task called SAT (Cook 1971), which involves determining whether an input formula of propositional logic can be made true. Although SAT is not known to require exponential time, all currently known methods for it do require an exponential number of steps on some inputs. A computational task is NP-complete when it is exactly as difficult as SAT: an efficient way of performing it would yield an efficient method for SAT, and vice versa. Thus all NP-complete tasks, including SAT itself, and including a very large number seemingly related to assorted mental activities, are strongly believed (but not known) to require exponential time.
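
To make the task concrete, here is a minimal brute-force decision procedure for SAT in Python; this naive enumeration is only an illustration, not how practical SAT solvers work. A formula is given in clausal (CNF) form: each clause is a list of nonzero integers, with k standing for variable k and -k for its negation. On unsatisfiable formulas the procedure examines all 2^n truth assignments, which is exactly the exponential behavior at issue.

    from itertools import product

    def sat(num_vars, clauses):
        """Return True iff some truth assignment satisfies every
        clause. Worst case (e.g., unsatisfiable formulas): all
        2**num_vars assignments are examined."""
        for bits in product((False, True), repeat=num_vars):
            assignment = dict(enumerate(bits, start=1))  # var -> value
            if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    # (p or q) and (not p or q) and (not q): unsatisfiable.
    print(sat(2, [[1, 2], [-1, 2], [-2]]))   # False
    # (p or q) and (not p or not q): satisfiable, e.g. p true, q false.
    print(sat(2, [[1, 2], [-1, -2]]))        # True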

The constraint of computational tractability has ended up being a powerful forcing function in much of the technical work in artificial intelligence. For example, research in KNOWLEDGE REPRESENTATION can be profitably understood as investigating reasoning tasks that are not only semantically motivated and broadly applicable, but are also computationally tractable (Levesque 1986). In many cases, this has involved looking for plausible approximations, restrictions, or deviations from the elegant but thoroughly intractable models of inference based on classical LOGIC (Levesque 1988). Indeed, practical KNOWLEDGE-BASED SYSTEMS have invariably been built around restricted forms of logical or probabilistic inference. Similar considerations have led researchers away from classical decision theory to models of BOUNDED RATIONALITY (Russell and Wefald 1991). In the area of COMPUTATIONAL VISION, it has been suggested that visual attention is the mechanism used by the brain to tame otherwise intractable tasks (Tsotsos 1995). Because the resource demands of a task are so dependent on the range of inputs, recent work has attempted to understand what it is about certain inputs that makes NP-complete tasks problematic (Hogg, Huberman, and Williams 1996). For a less technical discussion of this whole issue, see Cherniak 1986.
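
To give a feel for the last line of work mentioned above, the following rough sketch generates random 3-SAT formulas at several clause-to-variable ratios, in the spirit of the experiments surveyed in Hogg, Huberman, and Williams (1996); the parameters and sample sizes here are made up for illustration, and the brute-force check stands in for the backtracking solvers used in actual studies. Hard instances are known to cluster near the satisfiability threshold (around a clause-to-variable ratio of 4.3 for 3-SAT): below it almost all random formulas are satisfiable, above it almost all are unsatisfiable, and search effort peaks in between.

    import random
    from itertools import product

    def random_3sat(n_vars, n_clauses, rng):
        """A random 3-CNF formula: each clause has 3 distinct
        variables, each negated with probability 1/2."""
        return [[rng.choice((-1, 1)) * v
                 for v in rng.sample(range(1, n_vars + 1), 3)]
                for _ in range(n_clauses)]

    def satisfiable(n_vars, clauses):
        for bits in product((False, True), repeat=n_vars):
            a = dict(enumerate(bits, start=1))
            if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
                return True
        return False

    rng = random.Random(0)
    n = 12                              # tiny, so brute force suffices
    for ratio in (2.0, 3.0, 4.3, 6.0):  # hard region is near 4.3
        m = int(ratio * n)
        hits = sum(satisfiable(n, random_3sat(n, m, rng))
                   for _ in range(50))
        print(f"clauses/variables = {ratio}: {hits}/50 satisfiable")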

-- Hector Levesque

References

Cherniak, C. (1986). Minimal Rationality. Cambridge, MA: MIT Press.

Cook, S. A. (1971). The complexity of theorem-proving procedures. Proceedings of the 3rd Annual ACM Symposium on the Theory of Computing, pp. 151-158.

Garey, M., and D. Johnson. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman and Co.

Hogg, T., B. Huberman, and C. Williams. (1996). Frontiers in problem solving: Phase transitions and complexity. Artificial Intelligence 81(1-2).

Levesque, H. (1986). Knowledge representation and reasoning. Annual Review of Computer Science 1:255-287.

Levesque, H. (1988). Logic and the complexity of reasoning. Journal of Philosophical Logic 17:355-389.

Papadimitriou, C. (1994). Computational Complexity. Reading, MA: Addison-Wesley.

Russell, S., and E. Wefald. (1991). Do the Right Thing. Cambridge, MA: MIT Press.

Tsotsos, J. (1995). Behaviorist intelligence and the scaling problem. Artificial Intelligence 75(2):135-160.