The language of thought hypothesis is an idea, or family of ideas, about the way we represent our world, and hence an idea about how our behavior is to be explained. Humans are marvelously flexible organisms. The commuter surviving her daily trip to New York, the subsistence agriculturalist, the inhabitant of a chaotic African state all thread their different ways through the maze of their day. This ability to adapt to a complex and changing world is grounded in our mental capacities. We navigate our way through our social and physical world by constructing an inner representation, an inner map of that world, and we plot our course from that inner map and from our representation of where we want to get to. Our capacity for negotiating our complex and variable environment is based on a representation of the world as we take it to be, and a representation of the world as we would like it to be. In the language of FOLK PSYCHOLOGY -- our everyday set of concepts for thinking about ourselves and others -- these are an agent's beliefs and desires. Their interaction explains action. Thus Truman ordered the bombing of Japan because he wanted to end World War II as quickly as possible, and he believed that bombing offered his best chance of attaining that end.
We represent -- think about -- many features of our world. We have opinions on politics, football, food, the best way to bring up children, and much more. Our potential range of opinion is richer still. You may never have had views on the pleasures of eating opossum roadkill, but now that you are prompted, you will quickly form one. This richness of our cognitive range is important to the language of thought hypothesis. Its defenders take our powers of MENTAL REPRESENTATION to be strikingly similar to our powers of linguistic representation. Neither language nor thought is stimulus bound: we can both speak and think of the elsewhere and the elsewhen. Both language and thought are counterfactual: we can both speak and think of how the world might be, not just how it is. We can misrepresent the world; we can both say and think that it is infested by gods, ghosts, and dragons. Moreover, thoughts and sentences can be indefinitely complex. Of course, if sentences are too long and complex, we cease to understand them. But this limit does not seem intrinsic to our system of linguistic representation; it is instead a feature of our capacities to use it. Under favorable circumstances, our capacity to handle linguistic complexity extends upward; in unfavorable circumstances, downward. Moreover, the boundary between the intelligible and the unintelligible is fuzzy, not the result of hitting the system's walls. The same seems true of mental representation. These similarities are no surprise. Although there may be thoughts we cannot express, surely there are no utterances we cannot think.
The power of linguistic representation comes from the organization of language. Sentences are structures built out of basic units, words or morphemes. The meaning of a sentence -- what it represents -- depends on the meanings of those words together with the sentence's structure. So when we learn a language, we learn its words together with recipes for building sentences out of them. We thus acquire a representational system of great power and flexibility, for indefinitely many complex representations can be constructed out of its basic elements. Since mental representation exhibits these same properties, we might infer that it is organized in the same way. Thoughts consist of CONCEPTS assembled into more complex structures. A minimal language of thought hypothesis is the idea that our capacities to think depend on a representational system in which complex representations are built from a stock of basic elements; the meaning of a complex representation depends on its structure and the representational properties of those basic elements; and the basic elements reappear with the same meaning in many structures. This representational system is "Mentalese."
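The three clauses of the minimal hypothesis can be made concrete with a toy model. The sketch below is purely illustrative and assumes nothing from the literature: the types `Atom` and `Apply` and the gloss function `meaning` are hypothetical labels of my own. It shows a stock of atomic elements that reappear with the same meaning in many structures, with the meaning of each complex computed from its structure plus its parts.

```python
from dataclasses import dataclass

# Illustrative sketch only: a minimal compositional representational system.
# Atomic concepts recur with a fixed meaning; complexes get their meaning
# from their structure together with the meanings of their parts.

@dataclass(frozen=True)
class Atom:
    name: str                 # an atomic concept, e.g. "tiger"

@dataclass(frozen=True)
class Apply:
    pred: Atom                # a concept applied to a subject
    subj: "Atom | Apply"

def meaning(rep) -> str:
    """Gloss a representation using only its parts and structure."""
    if isinstance(rep, Atom):
        return rep.name
    return f"{meaning(rep.pred)}({meaning(rep.subj)})"

tiger = Atom("tiger")
striped = Atom("striped")
# The same atom contributes identically to distinct thoughts:
thought1 = Apply(striped, tiger)
thought2 = Apply(Atom("dangerous"), Apply(striped, tiger))
```

Here `meaning(thought1)` yields `striped(tiger)` and `meaning(thought2)` yields `dangerous(striped(tiger))`: the atom `tiger` makes the same semantic contribution wherever it occurs, which is the sense in which the system is compositional.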
This minimal version of the language of thought hypothesis leaves many important questions open. (1) Just how "languagelike" is the language of thought? Linguists emphasize the complexity and abstractness of natural-language sentence structure. Our thoughts might be complex structures built out of simple elements without thought structures being as complex as those of natural language. Mentalese may have no equivalent of the elaborate MORPHOLOGY and PHONOLOGY of natural languages. Natural languages probably have features that reflect their history as spoken systems. If so, these are unlikely to be part of Mentalese. (2) The minimal hypothesis leaves open the nature of the basic units. Perhaps the stock of concepts is similar to the stock of simple words of a natural language. Just as there are words for tigers and trucks, there are concepts of them amongst the basic stock out of which thoughts are built. But the minimal version is also compatible with the idea that the basic units out of which complexes are built are nothing like the semantic equivalent of words. Thus the concept "tiger" might itself be a complex semantic structure. (3) The minimal version leaves open the relationship between Mentalese and natural languages. Perhaps only learning a natural language powers the development of Mentalese. Perhaps learning a natural language enhances and transforms the more rudimentary language of thought with which one begins. Perhaps learning a natural language is just learning to produce linguistic representations that are equivalent to those that can already be formulated in Mentalese.
Jerry Fodor goes beyond the minimal language of thought hypothesis. Fodor argues for Mentalese not just from intentional psychology but from cognitive psychology. He argues that our best accounts, and often our only accounts, of cognitive abilities presuppose the existence of a rich, language-like internal code. So, for example, any account of rational action presupposes that rational agents have a representational system rich enough to represent a range of possible actions and possible outcomes. They must have the capacity to represent not just actual states of the world but possible ones as well. Most importantly, learning in general, and concept acquisition in particular, depends on hypothesis formation in the inner code. You cannot learn the concept "leopard" or the word "leopard" unless you already have an inner code in which you can formulate an appropriate hypothesis about leopards. So Fodor thinks of Mentalese as semantically rich, with a large, word-like stock of basic units. For example, he expects the concepts "truck," "elephant," and even "reactor" to be semantically simple. Moreover, this large stock of basic units is innate. Experience is causally relevant to an agent's conceptual repertoire, but we do not learn our basic concepts from experience. Concept acquisition is more like the development of secondary sexual characteristics than like learning the dress code at the local pub. So Mentalese is independent of any natural language we speak. The expressive power of natural language depends on the expressive power of Mentalese, not vice versa.
The language of thought hypothesis has been enormously controversial. One response focuses on the inference from intentional psychology to the language of thought. For example, Daniel Dennett has long argued that the relationship between an agent's intentional profile -- the beliefs and desires she has -- and her internal states is likely to be very indirect. In a favorite illustration, he asks us to consider a chess-playing computer. Such programs play good chess, so we treat them as knowing a lot about the game -- as knowing, for example, that a passed rook's pawn is dangerous. We are right to do so, even though there is no single causally salient inner state that corresponds to that belief. Dennett thinks that the relationship between our beliefs and our causally efficacious inner states is likely to be equally indirect. I think this argument is best seen as a response to Fodor's strong version of the language of thought hypothesis. The same is true of many other critical responses, for these often focus on Fodor's denial that learning increases the expressive capacity of our thoughts. The Churchlands, for example, have taken this to be a reductio of the language of thought hypothesis itself, but if anything, it is a reductio only of Fodor's strong version. Connectionist models of cognition, on the other hand, do seem to threaten any version of the language of thought hypothesis, for in connectionist mental representation, meaning is not a function of structure plus the meanings of the atomic units (see CONNECTIONISM, PHILOSOPHICAL ISSUES).
Churchland, P. S. (1986). Neurophilosophy. Cambridge, MA: MIT Press.
Dennett, D. C. (1987). True believers. In D. C. Dennett, The Intentional Stance. Cambridge, MA: MIT Press.
Fodor, J. A. (1975). The Language of Thought. Sussex: Harvester Press.
Fodor, J. A. (1981). The present status of the innateness controversy. In J. A. Fodor, Representations. Cambridge, MA: MIT Press.
Fodor, J. A., and Z. Pylyshyn. (1988). Connectionism and cognitive architecture: a critical analysis. Cognition 28:3-71.
Clark, A. (1989). Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. Cambridge, MA: MIT Press.
Clark, A. (1993). Associative Engines: Connectionism, Concepts, and Representational Change. Cambridge, MA: MIT Press.
Fodor, J. A. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: MIT Press.
Fodor, J. A. (1990). A Theory of Content and Other Essays. Cambridge, MA: MIT Press.
Harman, G. (1975). Language, thought, and communication. In K. Gunderson, Ed., Minnesota Studies in the Philosophy of Science. Vol. 7, Language, Mind, and Knowledge. Minneapolis: University of Minnesota Press.
Loewer, B., and G. Rey, Eds. (1991). Jerry Fodor and His Critics. Oxford: Blackwell, chs. 11-13.
Smolensky, P. (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences 11:1-84.