Mental Representation

To understand the nature of mental representation posited by cognitive scientists to account for various aspects of human and animal cognition (see Von Eckardt 1993 for a more detailed account), it is useful to first consider representation in general. Following Peirce (Hartshorne, Weiss, and Burks 1931-1958), we can say that any representation has four essential aspects: (1) it is realized by a representation bearer; (2) it has content or represents one or more objects; (3) its representation relations are somehow "grounded"; and (4) it can be interpreted by (will function as a representation for) some interpreter.

If we take one of the foundational assumptions of cognitive science to be that the mind/brain is a computational device (see COMPUTATIONAL THEORY OF MIND), the mental representation bearers will be computational structures or states. The specific nature of these structures or states depends on what kind of computer the mind/brain is hypothesized to be. To date, cognitive science research has focused on two kinds: conventional (von Neumann, symbolic, or rule-based) computers and connectionist (parallel distributed processing) computers (see COGNITIVE MODELING, SYMBOLIC and COGNITIVE MODELING, CONNECTIONIST). If the mind/brain is a conventional computer, then the mental representation bearers will be data structures. Kosslyn's (1980) work on mental IMAGERY provides a nice illustration. If the mind/brain is a connectionist computer, then the representation bearers of occurrent mental states will be activation states of individual connectionist nodes or of sets of nodes. In the first case, the representation is considered to be "local"; in the second, "distributed" (see DISTRIBUTED VS. LOCAL REPRESENTATION and McClelland, Rumelhart, and Hinton 1986). There may also be implicit representation (storage of information) in the connections themselves, a form of representation appropriate for dispositional mental states.
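To fix ideas, here is a minimal sketch of what these kinds of representation bearer might look like; all data structures, node labels, and numbers are invented for illustration and follow no particular model.

```python
# Hypothetical illustrations of the representation bearers discussed above.

# Conventional (symbolic) bearer: a discrete data structure whose parts
# are individually interpretable symbols -- here, "a red square".
symbolic_bearer = ("OBJECT", ("COLOR", "red"), ("SHAPE", "square"))

# Connectionist bearer, local scheme: one dedicated node per content.
local_activation = {"red-square-node": 1.0, "blue-circle-node": 0.0}

# Connectionist bearer, distributed scheme: the same content carried by a
# pattern of activation over many nodes, none interpretable on its own.
distributed_activation = [0.8, 0.1, 0.9, 0.3, 0.7, 0.2]

# Implicit (dispositional) storage: information held in the connection
# weights themselves rather than in any occurrent activation state.
connection_weights = [[0.5, -0.2], [0.3, 0.9]]
```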

While individual claims about what our representations are about are frequently made in the cognitive science literature, we do not know enough to theorize about the semantics of our mental representation system in the sense in which linguistics provides us with the formal SEMANTICS of natural language (see also POSSIBLE WORLDS SEMANTICS and DYNAMIC SEMANTICS). However, if we reflect on what our mental representations are hypothesized to explain -- namely, certain features of our cognitive capacities -- we can plausibly infer that the semantics of our mental representation system must have certain characteristics. Pretheoretically, human cognitive capacities have the following three properties: (1) each capacity is intentional, that is, it involves states that have content or are "about" something; (2) virtually all of the capacities can be pragmatically evaluated, that is, they can be exercised with varying degrees of success; and (3) most of the capacities are productive, that is, once a person has the capacity in question, he or she is typically in a position to manifest it in a practically unlimited number of novel ways. To account for these features, we must posit mental representations that can represent specific objects; that can represent many different kinds of objects -- concrete objects, sets, properties, events, and states of affairs in this world, in possible worlds, and in fictional worlds as well as abstract objects such as universals and numbers; that can represent both an object (in and of itself) and an aspect of that object (or both extension and intension); and that can represent both correctly and incorrectly. In addition, if we take the productivity of our cognitive capacities seriously, we must posit representations with constituent structure and a compositional semantics. (Fodor and Pylyshyn 1988 use this fact to argue that our mental representation system cannot be connectionist; see CONNECTIONISM, PHILOSOPHICAL ISSUES.)
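The requirement of constituent structure plus a compositional semantics can be illustrated with a small sketch; the toy representation scheme and interpretation function below are invented for the purpose and stand in for no actual proposal about the language of thought.

```python
# A toy "language of thought" with constituent structure and a
# compositional semantics; the lexicon and combination rules are invented.

LEXICON = {"Fido": "the dog Fido", "Felix": "the cat Felix"}

def interpret(rep):
    """The content of a complex representation is determined by the
    contents of its constituents and their mode of combination."""
    if isinstance(rep, str):              # atomic constituent
        return LEXICON[rep]
    op, *args = rep                       # complex constituent
    if op == "LOVES":
        subj, obj = (interpret(a) for a in args)
        return f"{subj} loves {obj}"
    if op == "NOT":
        return f"it is not the case that {interpret(args[0])}"
    raise ValueError(f"unknown mode of combination: {op}")

# Productivity: finitely many parts and rules suffice to interpret
# indefinitely many novel combinations without new semantic stipulations.
print(interpret(("LOVES", "Fido", "Felix")))
print(interpret(("NOT", ("LOVES", "Felix", "Fido"))))
```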

Cognitive scientists are interested not only in the content of mental representations, but also in where this content comes from, that is, in what makes a mental representation of a tree have the content of being about a tree. Theories of what determines content are often referred to as this-or-that kind of "semantics." Note, however, that it is important to distinguish such "theories of content determination" (Von Eckardt 1993) from the kind of semantics that systematically describes the content being determined (i.e., the kind referred to in the previous paragraph).

There are currently five principal accounts of how mental representational content is grounded. Two are discussed elsewhere (see FUNCTIONAL ROLE SEMANTICS and INFORMATIONAL SEMANTICS). The remaining three are characterized below.

1. Structural isomorphism. A representation is understood to be "some sort of model of the thing (or things) it represents" (Palmer 1978). The representation (or more precisely, the representation bearer) represents aspects of the represented object by means of aspects of itself. Palmer (1978) treats both the representation bearer and the represented object as relational systems, that is, as sets of constituent objects and sets of relations defined over these objects. A representation bearer then represents a represented object under some aspect if there exists a set G of relations constituting the representation bearer and a set D of relations constituting the object such that G is isomorphic to D.
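Palmer's condition can be stated algorithmically. The following is a minimal brute-force sketch under the simplifying assumption of a single binary relation on each side; all object names are invented.

```python
# A toy test for Palmer-style structural isomorphism between a
# representation bearer and a represented object, each treated as a
# relational system (a set of objects plus a binary relation over them).
from itertools import permutations

def isomorphic(objs_g, rel_g, objs_d, rel_d):
    """True if some bijection f from objs_g to objs_d carries the
    relation rel_g exactly onto the relation rel_d."""
    objs_g = list(objs_g)
    if len(objs_g) != len(set(objs_d)):
        return False
    for perm in permutations(objs_d):
        f = dict(zip(objs_g, perm))
        if {(f[a], f[b]) for (a, b) in rel_g} == set(rel_d):
            return True
    return False

# Representation bearer: three marks on a page ordered by "left of".
marks = {"m1", "m2", "m3"}
left_of = {("m1", "m2"), ("m2", "m3"), ("m1", "m3")}

# Represented objects: three sticks ordered by "shorter than".
sticks = {"s1", "s2", "s3"}
shorter_than = {("s1", "s2"), ("s2", "s3"), ("s1", "s3")}

print(isomorphic(marks, left_of, sticks, shorter_than))  # True
```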

2. Causal historical (Devitt 1981; Sterelny 1990). Intended to apply only to the mental analogues of designational expressions, this account holds that a token designational "expression" in the LANGUAGE OF THOUGHT designates an object if there is a certain sort of causal chain connecting the representation bearer with the object. Such causal chains include perceiving the object, designating the object in natural language, and borrowing a designating expression from another person (see REFERENCE, THEORIES OF).

3. Biological function. In this account (Millikan 1984), mental representations, like animal communication signals, are "intentional icons," a form of representation that is "articulate" (has constituent structure and a compositional semantics) and mediates between producer mechanisms and interpreter mechanisms. The content of any given representation bearer will be determined by two things -- the systematic natural associations that exist between the family of intentional icons to which the representation bearer belongs and some set of representational objects, and the biological functions of the interpreter device. More specifically, a representation bearer will represent an object if the existence of a mapping from the representation bearer family to the object family is a condition of the interpreter device successfully performing its biological functions. Take the association between bee dances and the location of nectar relative to the hive. The interpreter device for bee dances consists of the gatherer bees, among whose biological functions are those adapted to specific bee dances, for example, finding nectar 120 feet to the north of the hive in response to, say, bee dance 23. The interpreter device can successfully perform this function, however, only if bee dance 23 is in fact associated with the nectar's being at that location.
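The success condition in the bee-dance example can be made explicit with a small sketch; the mapping below, pairing dance 23 with a location 120 feet north, is an invented stand-in for the real systematic association.

```python
# A toy sketch of the bee-dance example of biological-function semantics.
# The icon family and its mapping to locations are invented stand-ins.
ICON_TO_LOCATION = {23: 120}  # bee dance 23 <-> nectar 120 feet north

def gatherer_responds(dance):
    """The interpreter device (a gatherer bee) responds to an icon by
    flying to the location the mapping scheme pairs with it."""
    return ICON_TO_LOCATION[dance]

def function_performed(dance, actual_location):
    """The gatherer's biological function (finding nectar) is fulfilled
    only if the icon really maps onto the world as the scheme requires."""
    return gatherer_responds(dance) == actual_location

print(function_performed(23, 120))  # True: the icon represents correctly
print(function_performed(23, 200))  # False: misrepresentation
```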

It can be argued that for a mental entity or state to be a representation, it must not only have content, it must also be significant for the subject who has it. According to Peirce, a representation having such significance can produce an "interpretant" state or process in the subject, and this state or process is related to both the representation and the subject in such a way that, by means of the interpretant, what the representation represents can make a difference to the internal states and behavior of the subject. This aspect of mental representation has received little explicit attention; indeed, its importance and even its existence have been disputed by some. Nevertheless, many cognitive scientists hold that the interpretant of a mental representation, for a given subject, consists of all the possible (token) computational consequences, including both the processes and the results of these processes, contingent on the subject's actively "entertaining" that representation.

Cognitive scientists engaged in modeling or devising empirical theories of specific cognitive capacities (or specific features of those capacities) often posit particular kinds of mental representations. For pedagogical purposes, Thagard (1995) categorizes representations into six main kinds, each typically associated with certain types of computational processes: sentences or well-formed formulas of a logical system (see LOGICAL REASONING SYSTEMS); rules (see PRODUCTION SYSTEMS and NEWELL); representations of concepts such as frames, SCHEMATA, and scripts (see CATEGORIZATION); analogies (see ANALOGY); images; and connectionist representations. Another popular distinction is between symbolic representation (found in "conventional" computational devices) and subsymbolic representation (found in connectionist devices). There is unfortunately no conceptually tidy taxonomy of representational kinds. Sometimes such kinds are distinguished by their computational or formal characteristics -- for example, local versus distributed representation in connectionist systems. Sometimes they are distinguished in terms of what they represent -- for example, phonological, lexical, syntactic, and semantic representation in linguistics and psycholinguistics. And sometimes both form and content play a role. Paivio's (1986) dual-coding theory claims that there are two basic modes of representation -- imagistic and propositional. According to Eysenck and Keane (1995), imagistic representations are modality-specific, nondiscrete, implicit, and involve loose combination rules, whereas propositional representations are amodal, discrete, explicit, and involve strong combination rules. The first contrast, modality-specific versus amodal, refers to the aspect under which the object is represented, hence to content; the other three contrasts all concern form.
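To give the flavor of one of these kinds, here is a toy sketch of a frame-style concept representation in the spirit of Minsky (1975); the slot names, default values, and the penguin example are invented for illustration and follow no particular implementation.

```python
# A toy frame for the concept BIRD: a structured bundle of slots with
# default values that more specific concepts inherit and may override.
bird_frame = {
    "isa": "animal",          # superordinate concept
    "covering": "feathers",   # default slot values
    "locomotion": "flies",
    "reproduction": "lays eggs",
}

# A subordinate concept inherits the frame's defaults and overrides some.
penguin_frame = {**bird_frame, "isa": "bird", "locomotion": "swims"}

print(penguin_frame["covering"])    # inherited default: "feathers"
print(penguin_frame["locomotion"])  # overridden value: "swims"
```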

Not all philosophers interested in cognitive science regard the positing of mental representations as being necessary or even unproblematic. Stich (1983) argues that if one compares a "syntactic theory of mind" (STM), which treats mental states as relations to purely syntactic mental sentence tokens and which frames its generalizations in purely formal or computational terms, with representational approaches, STM will win. Representational approaches, in his view, necessarily encounter difficulties explaining the cognition of young children, "primitive" folk, and the mentally and neurally impaired; STM does not. Nor is it clear, he adds, that cognitive science ought to aim at explaining the sorts of intentional phenomena (capacities or behavior) that mental representations are typically posited to explain.

Even more damning critiques of mental representation can be found in Judge (1985) and Horst (1996). Judge accepts the Peircean tripartite conception of representation, according to which a representation involves a representation bearer R, an object represented O, and an interpretant I, but takes the interpretant to require an agent performing an intentional act, such as understanding R to represent O -- and this, in her view, creates problems for mental representation. Understanding R to represent O itself requires that the agent have nonmediated access to O. But if we assume that all cognition is mediated by mental representation, such access is impossible. (Another problem with this view of the interpretant, not discussed by Judge, is that it leads to an infinite regress of mental representations.)

Horst (1996) also believes that cognitive science's attempt to explain INTENTIONALITY by positing mental representations is fundamentally confused. Mental representations are usually taken to be symbols. But a symbol, in the standard semantic sense, involves conventions, both with respect to its meaning and with respect to its syntactic type. And because conventions themselves involve intentionality, intentionality cannot be explained by positing mental representations. An alternative is to treat "mental symbol" as a technical term. But Horst argues that, viewed in this technical way, the positing of mental representations also fails to be explanatory. Furthermore, even if such an alternative approach were to work, cognitive science would still be saddled with the conventionality of mental syntax.


-- Barbara Von Eckardt

References

Devitt, M. (1981). Designation. New York: Columbia University Press.

Eysenck, M. W., and M. T. Keane. (1995). Cognitive Psychology: A Student's Handbook. Hillsdale, NJ: Erlbaum.

Fodor, J. A., and Z. W. Pylyshyn. (1988). Connectionism and cognitive architecture: A critical analysis. In S. Pinker and J. Mehler, Eds., Connections and Symbols. Cambridge, MA: MIT Press, pp. 3-71.

Hartshorne, C., P. Weiss, and A. Burks, Eds. (1931-1958). Collected Papers of Charles Sanders Peirce. Cambridge, MA: Harvard University Press.

Horst, S. W. (1996). Symbols, Computation, and Intentionality: A Critique of the Computational Theory of Mind. Berkeley: University of California Press.

Judge, B. (1985). Thinking About Things: A Philosophical Study of Representation. Edinburgh: Scottish Academic Press.

Kosslyn, S. M. (1980). Image and Mind. Cambridge, MA: Harvard University Press.

McClelland, J. L., D. E. Rumelhart, and G. E. Hinton, Eds. (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. 2 vols. Cambridge, MA: MIT Press.

Millikan, R. (1984). Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press.

Paivio, A. (1986). Mental Representations: A Dual Coding Approach. Oxford: Oxford University Press.

Palmer, S. E. (1978). Fundamental aspects of cognitive representation. In E. Rosch and B. Lloyd, Eds., Cognition and Categorization. Hillsdale, NJ: Erlbaum.

Sterelny, K. (1990). The Representational Theory of Mind: An Introduction. Oxford: Blackwell.

Stich, S. P. (1983). From Folk Psychology to Cognitive Science. Cambridge, MA: MIT Press.

Thagard, P. (1995). Mind: Introduction to Cognitive Science. Cambridge, MA: MIT Press.

Von Eckardt, B. (1993). What is Cognitive Science? Cambridge, MA: MIT Press.

Further Readings

Anderson, J. R. (1983). The Architecture of Cognition. Cambridge, MA: Harvard University Press.

Anderson, J. R. (1993). Rules of the Mind. Hillsdale, NJ: Erlbaum.

Bechtel, W., and A. Abrahamson. (1991). Connectionism and the Mind: An Introduction to Parallel Processing in Networks. Oxford: Blackwell.

Block, N. (1986). Advertisement for a semantics for psychology. In P. A. French, T. E. Uehling, Jr., and H. K. Wettstein, Eds., Studies in the Philosophy of Mind, vol. 10. Minneapolis: University of Minnesota Press, pp. 615-678.

Cummins, R. (1989). Meaning and Mental Representation. Cambridge, MA: MIT Press.

Devitt, M., and K. Sterelny. (1987). Language and Reality. Cambridge, MA: MIT Press.

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.

Fodor, J. (1975). The Language of Thought. New York: Crowell.

Fodor, J. (1981). Representations. Cambridge, MA: MIT Press.

Fodor, J. (1987). Psychosemantics. Cambridge, MA: MIT Press.

Fodor, J. (1990). A Theory of Content and Other Essays. Cambridge, MA: MIT Press.

Genesereth, M. R., and N. J. Nilsson. (1987). Logical Foundations of Artificial Intelligence. Los Altos, CA: Morgan Kaufmann.

Hall, R. (1989). Computational approaches to analogical reasoning: A comparative analysis. Artificial Intelligence 39:39-120.

Holyoak, K. J., and P. Thagard. (1995). Mental Leaps: Analogy in Creative Thought. Cambridge, MA: MIT Press.

Johnson-Laird, P. N. (1983). Mental Models. Cambridge, MA: Harvard University Press.

Kosslyn, S. M. (1994). Image and Brain: The Resolution of the Imagery Debate. Cambridge, MA: MIT Press.

Lloyd, D. (1987). Mental representation from the bottom up. Synthese 70:23-78.

Loar, B. (1981). Mind and Meaning. Cambridge: Cambridge University Press.

Millikan, R. (1989). Biosemantics. Journal of Philosophy 86:281-297.

Minsky, M. (1975). A framework for representing knowledge. In P. H. Winston, Ed., The Psychology of Computer Vision. New York: McGraw-Hill, pp. 211-277.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. Spiro, B. Bruce, and W. Brewer, Eds., Theoretical Issues in Reading Comprehension. Hillsdale, NJ: Erlbaum, pp. 33-58.

Schank, R. C., and R. P. Abelson. (1977). Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Hillsdale, NJ: Erlbaum.

Searle, J. R. (1983). Intentionality. Cambridge: Cambridge University Press.

Smith, E. E., and D. L. Medin. (1981). Categories and Concepts. Cambridge, MA: Harvard University Press.