Rules and Representations

Rules and representations is a label applied to the classical, computational approach in cognitive science, and more often to the general conception of cognition based on that approach.

The interdisciplinary field known as cognitive science grew up with the development of the modern digital computer. Researchers from such diverse fields as computer science, neurophysiology, psychology, linguistics, and philosophy were brought together by the conviction that the digital computer is the best model of cognition in general, and consequently of human cognition in particular. (This "classical" conception of cognition is now rivaled by connectionism, the other principal branch of contemporary cognitive science; see COGNITIVE MODELING, CONNECTIONIST.)

A distinctive feature of the modern digital computer is that its processes are determined by a program -- a system of rules that govern transitions from one state to the next. (State transitions can be nondeterministic; for instance, the rules can determine the next state on the basis of both the current state and the outcome of consulting a random number table.) Computers process information represented in "data structures" or symbols, and it is these symbolic representations to which the rules apply. Thus, the classical conception of cognition gives a central and fundamental role to rules -- specifically, rules for the manipulation and transformation of symbolic representations. The contents of the symbolic representations are the contents of thought.
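
Schematically (the state names and rules below are illustrative, not drawn from any particular model), such a program can be sketched as a transition function over symbolic states, including one rule whose outcome depends on a random source:

```python
import random

# A toy "program" in the sense described above: rules that determine
# transitions from one symbolic state to the next. The state names and
# rules are illustrative, not drawn from any particular model.

def next_state(state):
    if state == "START":
        return "READ-SYMBOL"
    if state == "READ-SYMBOL":
        # A nondeterministic rule: the next state depends on the current
        # state together with the outcome of consulting a random source.
        return random.choice(["READ-SYMBOL", "HALT"])
    return "HALT"

state = "START"
while state != "HALT":
    state = next_state(state)
    print(state)
```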

The rules executed by a digital computer can be regarded in two ways. From one perspective the rules refer explicitly to the task domain. For instance, a set of rules might assign classes to classrooms on a university campus, taking into account such constraints as the location of each classroom, the number of seats in each classroom, the number of requested seats for each class, and the time of day each class will be offered. These rules must be precise, completely explicit, and exceptionless, so that a human being who had no idea what classes or classrooms are could still determine room assignments by following the rules. To do this, one would need only the ability to follow simple instructions and the ability to perform elementary logical and mathematical operations. Terms for classes, rooms, and the like could be replaced by nonsense syllables or schematic letters.
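
A toy sketch of such a rule system, with illustrative rooms, classes, and a single assignment rule (location constraints are omitted), might look like this:

```python
# A toy version of the room-assignment rules described above. The rooms,
# classes, and the particular rule are illustrative; the point is that
# the rules are precise, explicit, and exceptionless, so they can be
# followed with no grasp of what classes or classrooms are.

rooms = {"R101": 40, "R202": 120}                      # room -> seats
classes = {"PHIL1": (35, "9am"),                       # class -> (seats requested, time)
           "PSY2": (90, "9am"),
           "LING3": (25, "11am")}

def assign(classes, rooms):
    schedule = {}                                      # (room, time) -> class
    for cls, (requested, time) in classes.items():
        for room, seats in rooms.items():
            # Rule: place the class in a room with enough seats that is
            # not already occupied at the requested time.
            if seats >= requested and (room, time) not in schedule:
                schedule[(room, time)] = cls
                break
    return schedule

print(assign(classes, rooms))
```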

When rules about the task domain have been made precise, explicit, and exceptionless in this way, they can be viewed from a second perspective: not as having representational content, but as purely formal symbol-manipulating rules that determine processing on the basis of nothing other than the syntactic form of the representations. The elementary operations specified in such rules can then be mirrored by simple physical devices. Hence, such rules can be put in the form of a program that will run on a conventional computer. Both the rules constituting the program and the representations to which they apply thus have two complementary "guises." The rules are purely formal, applying to representations solely on the basis of their structural-syntactic features; but the representations, and hence the rules, are also appropriately interpretable as being about objects and facts in the problem domain -- classes, classrooms, class times, and so on. It is because the rules have these two guises that the syntactic and the semantic aspects of symbolic representations hang together.
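
Continuing the toy sketch above, the same procedure can be run on uninterpreted tokens, since nothing in its operation depends on what, if anything, the symbols mean:

```python
# The same procedure, viewed purely formally: every domain term has been
# replaced by a nonsense syllable, yet the rules apply exactly as before,
# since they respond only to the structure of the representations.
# (This assumes the assign procedure from the sketch above.)

rooms = {"BLIK": 40, "GOSTAK": 120}
classes = {"WUG": (35, "T1"), "DAX": (90, "T1"), "FEP": (25, "T2")}

print(assign(classes, rooms))   # same pattern of assignments, meaningless labels
```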

Classicism maintains that the rules that determine cognitive processes in cognitive systems also have these two guises (Haugeland 1981). On one hand they are interpretable as psychological laws governing transitions among mental states. But on the other hand they are purely formal, applying directly to the syntactic structure of symbolic representations. On this classicist picture, the brain is a "syntactic engine" in which the content-appropriate processing of mental representations is accomplished by means of the structure-sensitive processing of symbol-structures in accordance with formal-syntactic rules. A program that is intended to model a human cognitive capacity -- say, visual perception or parsing sentences or getting about in crowded shopping areas -- is a hypothesis about the states and processes that occur when a person exercises that capacity. The explanation of the capacity itself, according to classicism, is that a person has a (possibly hardwired) system of rules for performing the task, rules that constitute a program. (Frequently in classicist cognitive science, a program intended to model a cognitive capacity will involve heuristic rules; see HEURISTIC SEARCH. These do not guarantee a correct or optimal outcome in every case, but employ reasonable strategies that will yield solutions in a large range of cases.)
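
A minimal illustration of a heuristic rule in this sense, using a standard example rather than one from the cognitive literature, is greedy change-making:

```python
# A standard illustration of a heuristic rule: greedy change-making. The
# rule "always take the largest coin that fits" is a reasonable strategy
# that succeeds in a large range of cases, but it does not guarantee an
# optimal outcome for every coin system and amount.

def greedy_change(amount, coins=(25, 10, 1)):
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

print(greedy_change(30))   # [25, 1, 1, 1, 1, 1]: six coins, though three dimes would do
```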

In connectionist models, representations are activation-states of nodes in a connectionist network. (Often the representations are distributed activation-patterns involving multiple nodes; and sometimes the representations are fully distributed, in the sense that the individual nodes within a distributed representation have no determinate representational content by themselves.) Whereas classicism is firmly committed to mental representations with language-like syntactic structure, connectionist representations are not inherently syntactic. It is sometimes assumed that connectionist representations cannot exhibit syntactic structure, and that lack of syntax therefore constitutes an essential difference between connectionism and classicism (e.g., Churchland 1995). On this view, connectionist models depart from the classical "rules and representations" conception of cognition because they eschew traditional symbolic, syntactically structured representations. But in fact some connectionist models do employ syntactically structured representations and exhibit structure-sensitive processing -- although syntactic constituency in these models is not a simple part-whole relation. Examples of such models include Pollack (1990), Smolensky (1990), and Berg (1992). So connectionism need not eschew syntax -- and arguably should not (Horgan and Tienson 1996).
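
The flavor of such non-part-whole constituency can be conveyed by a minimal sketch in the spirit of Smolensky's tensor product proposal; the particular role and filler vectors are illustrative:

```python
import numpy as np

# A minimal sketch in the spirit of Smolensky's (1990) tensor product
# binding: a constituent is encoded by binding a "filler" vector to a
# "role" vector with an outer product, and a structure is the sum of its
# bindings. The vectors here are illustrative. Note that the constituents
# are not literal parts of the resulting matrix.

agent = np.array([1.0, 0.0])              # role vectors (orthonormal)
patient = np.array([0.0, 1.0])
john = np.array([1.0, 2.0])               # filler vectors
mary = np.array([3.0, 1.0])

# "John loves Mary": John bound to the agent role, Mary to the patient role.
structure = np.outer(john, agent) + np.outer(mary, patient)

# Unbinding: multiplying by a role vector recovers the filler in that role.
print(structure @ agent)                  # [1. 2.] -- the vector for John
print(structure @ patient)                # [3. 1.] -- the vector for Mary
```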

In connectionist models, rules for processing representations are not explicitly represented. It is sometimes assumed that classicism is committed to explicitly represented rules, and that lack of such rules therefore constitutes an essential difference between classicism and connectionism (e.g., Hatfield 1991). But although programs are explicitly represented as stored "data structures" in the ubiquitous general-purpose computer, stored programs are not an essential feature of the classical point of view. In some computational devices -- including, for example, many hand-held calculators -- the rules are all hardwired into the system and are not explicitly represented. According to classicism, cognition must conform to representation-processing rules that constitute a computer program; but a cognitive system could conform to such rules simply by being hardwired to do so. For example, from the classical perspective it is plausible to regard some innate processes as hardwired.
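
The contrast can be put schematically, with an illustrative rule, as the difference between consulting an explicitly stored rule and simply being built to behave in accordance with it:

```python
# Two ways for a system to conform to the same rule (the rule "double the
# input" is merely illustrative). In the first, the rule is explicitly
# represented as a stored data structure and consulted at run time, as in
# a stored-program computer; in the second, conformity to the rule is
# simply built into the system, as in a hardwired calculator.

stored_rules = {"double": lambda x: x * 2}     # the rule, explicitly represented

def run_stored(rule_name, x):
    return stored_rules[rule_name](x)          # the rule is looked up, then applied

def hardwired_double(x):
    return x * 2                               # no represented rule; the behavior is wired in

print(run_stored("double", 7), hardwired_double(7))   # 14 14
```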

Classicism is committed to representation-level rules -- that is, programmable rules that apply directly to the formal structure of the representations themselves. But connectionism, at least insofar as it employs fully distributed representations in which local node-activations do not have representational content, is not committed to such representation-level rules as a feature of cognitive architecture. The rules governing a connectionist network are local and subrepresentational, applying within and between individual nodes in the network -- not to the distributed representations. Although connectionist systems sometimes conform to emergent representation-level rules over and above these node-level rules, in general there is no guarantee that rule-describability of processing will "transfer upward" from the level of individual nodes to the level of distributed representations (Horgan and Tienson 1996: 63-67; Horgan 1997). In our view, it is neither necessary nor desirable for connectionist models of human cognition to conform to programmable representation-level rules; it is plausible that persistent problems within classicism -- e.g., the FRAME PROBLEM -- are a byproduct of classicism's commitment to such rules; and an appropriate nonclassical framework for cognitive science would be one in which the mathematics of dynamical systems theory replaces the classicist appeal to representation-level rules (Horgan and Tienson 1996).
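
The point about levels can be illustrated with a minimal sketch in which the only rules are local node-level update rules (the network size, weights, and activation function are arbitrary):

```python
import numpy as np

# A minimal sketch of the point about levels. The only rules in the model
# are local, node-level update rules: each node computes a sigmoid of its
# summed weighted input. A distributed representation is just a pattern
# over the whole activation vector, and no rule in the model mentions it.

rng = np.random.default_rng(0)
weights = rng.normal(size=(5, 5))          # connection weights among 5 nodes

def update(activations, weights):
    net_input = weights @ activations      # each node's summed weighted input
    return 1.0 / (1.0 + np.exp(-net_input))

pattern = rng.uniform(size=5)              # a distributed activation pattern
for _ in range(3):
    pattern = update(pattern, weights)     # processing is defined node by node
print(pattern)
```

Whether such local dynamics also conform to representation-level rules is, as noted above, not something the architecture guarantees.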

-- Terence Horgan and John Tienson

References

Berg, G. (1992). A connectionist parser with recursive sentence structure and lexical disambiguation. In AAAI-92: Proceedings of the Tenth National Conference on Artificial Intelligence. Cambridge, MA: AAAI Press/MIT Press.

Churchland, P. (1995). The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain. Cambridge, MA: MIT Press.

Hatfield, G. (1991). Representation and rule-instantiation in connectionist systems. In T. Horgan and J. Tienson, Eds., Connectionism and the Philosophy of Mind. Dordrecht: Kluwer.

Haugeland, J. (1981). Semantic engines: An introduction to mind design. In J. Haugeland, Ed., Mind Design: Philosophy, Psychology, Artificial Intelligence. Cambridge, MA: MIT Press.

Horgan, T. (1997). Modelling the noncomputational mind: Reply to Litch. Philosophical Psychology 10:365-371.

Horgan, T., and J. Tienson. (1996). Connectionism and the Philosophy of Psychology. Cambridge, MA: MIT Press.

Pollack, J. (1990). Recursive distributed representations. Artificial Intelligence 46:77-105.

Smolensky, P. (1990). Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence 46:159-215.