Functionalism

Compare neurons and neutrons to planets and pendula. They all cluster into kinds or categories conforming to nomic generalizations and comporting with scientific investigation. However, whereas all neurons and neutrons must be composed of distinctive types of matter structured in ruthlessly precise ways, individual planets and pendula can be made of wildly disparate sorts of differently structured stuff. Neurons and neutrons are examples of physical kinds; planets and pendula exemplify functional kinds. Physical kinds are identified by their material composition, which in turn determines their conformity to the laws of nature. Functional kinds are not identified by their material composition but rather by their activities or tendencies. All planets, no matter the differences in their composition, orbit or tend to. All pendula, no matter the differences in their composition, oscillate or tend to.

What, then, of minds or mental states, kinds that process information and control intelligent activity or behavior? Do they define physical or functional kinds? Naturally occurring minds, at least those most familiar to us, are brains. The human mind is most certainly the human brain; the mammal mind, the mammal brain (Kak 1996). Hence, under the assumption that brains are physical kinds, we might conjecture that all minds must be brains and, therefore, physical kinds. If so, we should study the brain if curious about the mind.

However, perhaps we are misled by our familiar, local, and possibly parochial sample of minds. If all the pendula at hand happened to be aluminum, we might, failing to imagine copper ones, mistakenly suppose that pendula must -- of necessity -- be aluminum. Maybe, then, we should ask whether it is possible that minds occur in structures other than brains. Might there be silicon Martians capable of reading Finnegans Wake and solving differential equations? Such fabled creatures would have minds although, being silicon instead of carbon, they could not have brains. Moving away from fiction and closer toward fact, what should we make of artificially intelligent devices? They can be liberated from human biochemistry while exhibiting talents that appear to demand the kind of cognition that fuels much of what is psychologically distinctive in human activity.

Possibly, then, some minds are not brains. These minds might be made of virtually any sort of material, so long as that material is organized so as to process information, control behavior, and generally support the sort of performances indicative of minds. Minds would then be functional, not physical, kinds. Like planets and pendula respectively, minds might arise naturally or artificially. Their coalescing into a single unified kind would be determined by their proclivity to process information and to control behavior independently of the stuff in which individual minds might happen to reside. Terrestrial evolution may here have settled on brains as the local natural solution to the problem of evolving minds. Still, because differing local pressures and opportunities may induce evolution to offer up alternative solutions to the same problem (say, mammals versus marsupials), evolution could develop minds from radically divergent kinds of matter. Should craft follow suit, art might fabricate intelligence in any computational medium. Functionalism, then, is the thesis that minds are functional kinds (Putnam 1960; Armstrong 1968; Lewis 1972; Cummins 1983).
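
The thesis can be pictured on analogy with a software interface. The sketch below is only an illustration of the idea, not anything drawn from the works cited here; the names Cognizer, CarbonBrain, and SiliconMartian are invented for the example. It shows two realizations built from entirely different internals that nonetheless belong to one functional kind because they satisfy the same behavioral specification.

    # Illustrative sketch: a functional kind specified by what a system does,
    # with two materially different realizations. All names are hypothetical.
    from abc import ABC, abstractmethod

    class Cognizer(ABC):
        """The functional specification: process information, control behavior."""
        @abstractmethod
        def perceive(self, stimulus):
            """Register an external state of the environment."""
        @abstractmethod
        def act(self):
            """Produce overt behavior on the basis of internal state."""

    class CarbonBrain(Cognizer):
        """One realization: internal states kept as accumulated traces."""
        def __init__(self):
            self.memory = []
        def perceive(self, stimulus):
            self.memory.append(stimulus)
        def act(self):
            return "responds to " + self.memory[-1] if self.memory else "idle"

    class SiliconMartian(Cognizer):
        """A different realization in a different medium, same functional kind."""
        def __init__(self):
            self.register = None
        def perceive(self, stimulus):
            self.register = stimulus
        def act(self):
            return "responds to " + self.register if self.register else "idle"

    # Both satisfy the Cognizer specification; on the functionalist view, that
    # shared role, not shared matter, is what makes each of them a mind.
    for mind in (CarbonBrain(), SiliconMartian()):
        mind.perceive("red apple")
        print(type(mind).__name__, "->", mind.act())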

The significance of functionalism for the study of the mind is profound, for it liberates cognitive science from concern with how the mind is embodied or composed. Given functionalism, it may be true that every individual mind is itself a physical structure. Nevertheless, by the lights of functionalism, physical structure is utterly irrelevant to the deep nature of the mind. Consequently, functionalism is foundational to those cognitive sciences that would abstract from details of physical implementation in order to discern principles common to all possible cognizers, thinkers who need not share any physical features immediately relevant to thought. Such a research strategy befriends Artificial Intelligence inasmuch as it attends to algorithms, programs, and computation rather than cortex, ganglia, and neurotransmitters. True, the study of human or mammalian cognition might focus on the physical properties of the brain. But if functionalism is true, the most general features of cognition must be independent of neurology.

According to functionalism, a mind is a physical system or device -- with a host of possible internal states -- normally situated in an environment itself consisting of an array of possible external states. External states can induce changes in such a device's internal states, and fluctuations in these internal states can cause subsequent internal changes determining the device's overt behavior. Standard formulations of functionalism accommodate the mind's management of information by treating the internal, that is, cognitive, states of the device as its representations, symbols, or signs of its world (Dennett 1978; Fodor 1980; Dretske 1981). Hence, disciplined change in internal state amounts to change in representation or manipulation of information.
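
A minimal sketch may make this picture concrete. The state names and transition table below are invented for illustration; the point is only the causal shape of the model, in which external inputs update internal states and internal states fix overt behavior.

    # Hypothetical toy model of the functionalist picture: external inputs
    # drive transitions among internal states, and each internal state
    # determines a piece of overt behavior.
    transitions = {
        ("resting", "sees-apple"): "wants-apple",
        ("wants-apple", "apple-in-reach"): "reaching",
        ("reaching", "grasped-apple"): "resting",
    }
    outputs = {
        "resting": "do nothing",
        "wants-apple": "orient toward the apple",
        "reaching": "extend arm",
    }

    def step(state, external_input):
        """One cycle: the environment perturbs the internal state, and the
        resulting state fixes the device's behavior."""
        new_state = transitions.get((state, external_input), state)
        return new_state, outputs[new_state]

    state = "resting"
    for stimulus in ["sees-apple", "apple-in-reach", "grasped-apple"]:
        state, behavior = step(state, stimulus)
        print(stimulus, "->", state, "->", behavior)

Reading the internal state labels as representations of the environment ("wants-apple" as a sign of the apple) is what turns this bare causal picture into the information-processing picture that standard formulations describe.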

Some (Pylyshyn 1985), but not all (Lycan 1981), formulations of functionalism model the mind in terms of a TURING machine (Turing 1950), perhaps in the form of a classical digital computer. A Turing machine possesses a segmented tape with segments corresponding to a cognitive device's internal states or representations. The machine is designed to read from and write to segments of the tape according to rules that themselves are sensitive to how the tape may be antecedently marked. If the device is an information processor, the marks on the tape can be viewed as semantically disciplined symbols that resonate to the environment and induce the machine appropriately to respond (Haugeland 1981). For functionalism, then, the mind, like a computer, may process information and control behavior simply by implementing a Turing machine.
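
For readers unfamiliar with the formalism, here is a deliberately tiny interpreter, a sketch under simplifying assumptions of my own rather than anyone's official model. Its rule table reads a symbol from the tape, writes a replacement, moves the head, and changes state, which is the read/write discipline described above.

    # Toy Turing machine: rules map (state, read symbol) to
    # (symbol to write, head movement, next state). "_" marks a blank cell.
    # This particular table just flips a string of 0s and 1s, then halts.
    rules = {
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),
    }

    def run(tape, state="flip", head=0):
        while state != "halt":
            symbol = tape[head] if head < len(tape) else "_"
            write, move, state = rules[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += move
        return tape

    print("".join(run(list("0110"))))   # prints 1001_

Nothing in the rule table cares what the tape or the head is made of; by functionalist lights, that indifference to implementation is precisely the feature the mind shares with the machine.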

In allowing that minds are functional kinds, one supposes that mental state types (for example, believing, desiring, willing, hoping, feeling, and sensing) are themselves functionally characterized. Thus belief would be a type of mental state with characteristic causes and effects (Fodor 1975; Block and Fodor 1972). The idea can be extended to identify or individuate specific beliefs (Harman 1973; Field 1978). The belief, say, that snow is white might be identified by its unique causal position in the mental economy (see FUNCTIONAL ROLE SEMANTICS). On this model, specific mental states are aligned with the unobservable or theoretical states of science generally and identified by their peculiar potential causal relations.

Although functionalism has been the dominant position in the philosophy of mind since at least 1970, it remains an unhappy hostage to several important objections. First, the argument above in favor of functionalism begins with a premise about how it is possible for the mind to be realized or implemented outside of the brain. This premise is dramatized by supposing, for example, that it is possible that carbon-free Martians have minds but lack brains. However, what justifies the crucial assumption of the real possibility of brainless, silicon Martian minds?

It is no answer to reply that anything imaginable is possible. For in that case, one can evidently imagine that it is necessary that minds are brains. If the imaginable is possible, it would follow that it is possible that it is necessary that minds are brains. However, on at least one version of modal logic it is axiomatic that whatever is possibly necessary is simply necessary. Hence, if it is possible that it is necessary that minds are brains, it is simply necessary that minds are brains. This, however, is in flat contradiction to the premise that launches functionalism, namely the premise that it is possible that minds are not brains! Evidently, what is desperately wanting here is a reasonable way of justifying premises about what is genuinely possible or what can be known to be possible. Until the functionalist can certify the possibility of a mind without a brain, the argument from such a possibility to the plausibility of functionalism appears disturbingly inconclusive (Maloney 1987).
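
The structure of the difficulty can be made explicit. Writing M for the claim that minds are brains, and using the standard possibility and necessity operators, the derivation sketched in the paragraph above runs as follows (a formalization of the argument as stated, not a quotation):

\begin{align*}
&\text{(1) } \Diamond\Box M &&\text{imaginable, hence (by the reply) possible, that necessarily minds are brains}\\
&\text{(2) } \Diamond\Box M \rightarrow \Box M &&\text{whatever is possibly necessary is necessary (valid in, e.g., the system S5)}\\
&\text{(3) } \Box M &&\text{from (1) and (2)}\\
&\text{(4) } \Diamond\neg M &&\text{functionalism's launching premise: possibly, minds are not brains}\\
&\text{(5) } \neg\Diamond\neg M &&\text{from (3), since } \Box M \text{ is equivalent to } \neg\Diamond\neg M
\end{align*}

Lines (4) and (5) are jointly inconsistent, which is why the appeal to bare imaginability cuts against the very premise functionalism needs.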

Beyond this objection to the functionalist program is the worry that functionalism, if unwittingly in the service of a false psychology, could fly in the face of good scientific practice. To see this, suppose that minds are defined in terms of current (perhaps popular or folk) psychology and that this psychology turns out, unsurprisingly, to be false. In this case, minds -- as defined by a false theory -- would not be real, and that would be the deep and true reason why minds are not identical with real physical types such as brains. Nevertheless, a misguided functionalism, because it construes the mind as "whatever satisfies the principles of (false current) psychology," would wrongly bless the discontinuity of mind and brain and insist on the reality of mind disenfranchised from any physical kind. Put differently, our failure to identify phlogiston with any physical kind properly leads us to repudiate phlogiston rather than to elevate it to a functional kind. So too, the objection goes, perhaps our failure to identify the mind with a physical type should lead us to repudiate the mind rather than elevate it to a functional kind (Churchland 1981).

Others object to functionalism, charging that it ignores the (presumed) centrality of CONSCIOUSNESS in cognition (Shoemaker 1975; Block 1978; Lewis 1980). They argue that functionally identical persons could differ in how they feel, that is, in their conscious, qualitative, or affective states. For example, you and I might be functionally isomorphic in the presence of a stimulus while we differ in our consciousness of it. You and I might both see the same apple and treat it much the same. Yet, this functional congruence might mask dramatic differences in our color QUALIA, differences that might have no behavioral or functional manifestation. If these conscious, qualitative differences differentiate our mental states, functionalism would seem unable to recognize them.

Finally, mental states are semantically significant representational states. As you play chess, you are thinking about the game. You realize that your knight is threatened but that its loss shall ensure the success of the trap you have set. But consider a computer programmed perfectly to emulate you at chess. It is your functional equivalent. Hence, according to functionalism it has the same mental states as do you. But does it think the same as you; does it realize, genuinely realize in exactly the manner that you do, that its knight is threatened but that the knight's loss ensures ultimate success? Or is the computer a semantically impoverished device designed merely to mimic you and your internal mental states without ever representing its world in anything like the manner in which you represent and recognize your world through your mental states (Searle 1980; Dennett and Searle 1982)? If you and the computer differ in how you represent the world, if you represent the world but the computer does not, then functionalism may have obscured a fundamentally important aspect of our cognition.

-- J. Christopher Maloney

References

Armstrong, D. (1968). A Materialist Theory of the Mind. London: Routledge and Kegan Paul.

Block, N. (1978). Troubles with functionalism. In C. W. Savage, Ed., Perception and Cognition: Issues in the Philosophy of Science. Minneapolis: University of Minnesota Press, 9:261-325.

Block, N., and J. Fodor. (1972). What psychological states are not. Philosophical Review 81:159-181.

Churchland, P. (1981). Eliminative materialism and the propositional attitudes. Journal of Philosophy 78:67-90.

Cummins, R. (1983). The Nature of Psychological Explanation. Cambridge, MA: MIT Press/Bradford Books.

Dennett, D. (1978). Brainstorms. Montgomery, VT: Bradford Books.

Dennett, D., and J. Searle. (1982). The myth of the computer: an exchange. New York Review of Books June 24:56-57.

Dretske, F. I. (1981). Knowledge and the Flow of Information. Cambridge, MA: Bradford Books/MIT Press.

Field, H. (1978). Mental representation. Erkenntnis 13:9-61.

Fodor, J. (1975). The Language of Thought. New York: Thomas Crowell.

Fodor, J. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. The Behavioral and Brain Sciences 3:63-109.

Harman, G. (1973). Thought. Princeton: Princeton University Press.

Haugeland, J. (1981). On the nature and plausibility of cognitivism. In J. Haugeland, Ed., Mind Design. Cambridge, MA: MIT Press/Bradford Books, pp. 243-281.

Kak, S. C. (1996). Can we define levels of artificial intelligence? Journal of Intelligent Systems 6:133-144.

Lewis, D. (1972). Psychophysical and theoretical identifications. Australasian Journal of Philosophy 50:249-258.

Lewis, D. (1980). Mad pain and Martian pain. In N. Block, Ed., Readings in the Philosophy of Psychology, I. Cambridge, MA: MIT Press, pp. 216-222.

Lycan, W. (1981). Form, function and feel. Journal of Philosophy 78:24-50.

Maloney, J. C. (1987). The Mundane Matter of the Mental Language. Cambridge: Cambridge University Press.

Putnam, H. (1960). Minds and machines. In S. Hook, Ed., Dimensions of Mind. New York: N.Y.U. Press. Reprinted along with other relevant papers in Putnam's Mind, Language and Reality, Philosophical Papers 2. Cambridge: Cambridge University Press, 1975.

Pylyshyn, Z. (1985). Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, MA: MIT Press/Bradford Books.

Searle, J. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences 3:417-457 (including peer commentary).

Shoemaker, S. (1975). Functionalism and qualia. Philosophical Studies 27:291-315.

Turing, A. (1950). Computing machinery and intelligence. Mind 59:433-460.