Tversky, Amos

Amos Tversky (1937-1996) was a cognitive and mathematical psychologist who was passionately committed to advancing knowledge of human judgment and DECISION MAKING, and the similarities between them. Tversky's contributions to these subjects, put forward with a research style that combined rigorous mathematical analysis with elegant empirical demonstrations and simple examples of irresistible force and clarity, had a profound influence on scholars in numerous disciplines. Indeed, one measure of Tversky's impact is how much his ideas have generated excitement and altered curricula in such varied fields as psychology, economics, law, medicine, political science, philosophy, and statistics.

Much of Tversky's research demonstrated that the thought processes governing people's judgments and choices are not as thorough or rigorous as people would like to believe, or as certain formal theories would have them believe. With his frequent collaborator Daniel Kahneman, Tversky identified a number of JUDGMENT HEURISTICS, or "rules of thumb," that people use to guide their judgments in numerous domains. Each heuristic consists of some "natural assessment," such as similarity, ease of retrieval from memory, or CAUSAL REASONING, that is co-opted to tackle a difficult judgmental problem that people lack the cognitive mechanisms to solve readily with precision. Tversky and Kahneman likened heuristics to perceptual cues used to apprehend the world: both generally serve the individual well, but both can give rise to systematic error. The clarity of an object, for example, is one cue to its distance. The cue is generally helpful, because the closer something is, the more distinct it appears. On a hazy day, however, objects seem farther away than they really are. Thus, the source of general accuracy, clarity, is also the very cause of predictable error.

Tversky and Kahneman identified several heuristics, such as availability, representativeness, and anchoring-and-adjustment. Most of their research, however, focused on the first two. People employ the availability heuristic when they judge the size of a category or the probability of an event by the ease with which relevant instances can be brought to mind. The heuristic often yields accurate judgments: all else being equal, it is easier to think of examples from large categories than small ones. But extraneous factors can make examples from certain categories disproportionately easy (or difficult) to recall and thus render ease of generation misleading. Most people, for example, mistakenly assume that there are more English words that begin with "r" than words that have "r" in the third position. Availability is a poor guide in this context because it is so much easier to "find" words that begin with a particular letter than it is to find those that have the target letter anywhere else. The availability heuristic has been tied to people's distorted estimates of the likelihood of various health risks, and to the fact that collaborators often claim more credit for the success of a joint venture than there is credit to go around.

The representativeness heuristic consists of reducing complex judgments to relatively simple similarity assessments. It reflects people's unstated assumptions that outcomes typically resemble the processes that generated them, effects typically resemble their causes, and category members typically resemble their category prototypes. There is doubtless some truth to these assumptions (and hence some legitimacy to the use of representativeness), but just how much is impossible to say. What is clear is that these assumptions do not always hold, and where they break down, some telling judgmental errors can be found. Tversky and Kahneman have shown, for example, that the representativeness heuristic plays a role in the "gambler's fallacy," in people's insensitivity to regression to the mean, in a misplaced faith in results based on small samples, and in the underutilization of "base rate" information, or the prior probability of events.
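Base-rate neglect, the last of the errors listed above, can be made concrete with Bayes' rule. The sketch below uses the numbers from Tversky and Kahneman's well-known "cab problem" (85 percent of a city's cabs are Green, 15 percent are Blue, and a witness who identifies a cab's color is correct 80 percent of the time); the function name and code organization are ours, not theirs.

```python
# Base-rate neglect illustrated with Bayes' rule, using the numbers from
# Tversky and Kahneman's "cab problem": 85% of cabs are Green, 15% are Blue,
# and a witness identifying a cab's color is correct 80% of the time.

def posterior_blue(prior_blue=0.15, hit_rate=0.80):
    """P(cab is Blue | witness says Blue), by Bayes' rule."""
    prior_green = 1.0 - prior_blue
    false_alarm = 1.0 - hit_rate  # witness calls a Green cab "Blue"
    p_says_blue = prior_blue * hit_rate + prior_green * false_alarm
    return prior_blue * hit_rate / p_says_blue

print(round(posterior_blue(), 2))  # 0.41
```

Because the witness's testimony is "representative" of a Blue cab, most respondents answer around 0.80, ignoring the low prior probability of Blue cabs; the Bayesian answer is only about 0.41.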

The main thrust of Tversky and Kahneman's work on judgment -- that people have cognitive capacity limitations and so must simplify some of the complex problems they confront -- is fundamentally inconsistent with at least one widely touted model of human behavior, namely, the rational actor of economic theory. Economists contend that people are highly rational utility maximizers who compute any action's likely effect on their total wealth, and choose accordingly (see ECONOMICS AND COGNITIVE SCIENCE).

Tversky and Kahneman argued that people's choices -- economic or otherwise -- are often a good deal simpler. People typically do not monitor a prospect's likely effect on their final asset position. Rather, they pay attention to whether a given course of action might result in a gain or loss from the status quo (or some other salient reference point), and they are highly sensitive to how choices are presented or "framed." Tversky and Kahneman provided an account of these and other deviations from the standard normative model of expected UTILITY THEORY in a descriptive theory of decision making known as prospect theory.

Prospect theory captures several important elements of how people make decisions. One is the asymmetry between gains and losses. A loss of a given size generates more pain than an equally large gain yields pleasure. This asymmetry is what underlies many of the most powerful framing effects. Courses of action can sometimes be described either in the language of gains or losses, thereby invoking very different processes and producing markedly different decisions. For example, consumers are less likely to use credit cards if the difference between the cash and credit price is described as a "credit card surcharge" (a loss) rather than a "cash discount" (a foregone gain).
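The gain-loss asymmetry can be expressed as a simple value function. The sketch below is a minimal rendering of the prospect-theory value function; the parameter values (curvature 0.88 for both gains and losses, loss-aversion coefficient 2.25) are the median estimates reported in Tversky and Kahneman (1992), though the function name and defaults here are our own illustrative choices.

```python
# Minimal sketch of the prospect-theory value function, with the median
# parameter estimates from Tversky and Kahneman (1992): curvature
# alpha = beta = 0.88, loss-aversion coefficient lam = 2.25.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha              # concave for gains
    return -lam * ((-x) ** beta)       # convex and steeper for losses

# Loss aversion: a loss of 100 hurts more than a gain of 100 pleases.
print(value(100), value(-100))
```

With these parameters, the pain of a loss is 2.25 times the pleasure of an equally large gain, which is what gives the surcharge/discount framing its force.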

Prospect theory and the work it has inspired have driven a wedge between prescriptive and descriptive theories of choice. No single theory can be both prescriptively valid and descriptively accurate because the axioms of rational choice are normatively beyond dispute (see RATIONAL CHOICE THEORY), and yet the violations of these axioms are too reliable to be dismissed.

Among the topics that occupied Tversky's attention during the last years of his life was "support theory," a model of subjective probability based on the insight that people's probability judgments derive from descriptions of events, not from the events themselves. Support theory sheds light on a number of important phenomena, such as subadditivity, or the fact that the judged probability of an event increases when it is described in terms of its constituent elements. People judge it more likely, for example, that someone would die of "heart disease, cancer, or some other natural cause" than simply of "natural causes."
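Support theory's basic equation makes the subadditivity pattern easy to see. Judged probability is modeled as P(A, B) = s(A) / (s(A) + s(B)), where s() is the "support" a description evokes (Tversky and Koehler 1994). The support values in the sketch below are hypothetical, chosen only to exhibit the pattern: unpacking "natural causes" into explicit components recruits more total support than the packed description does.

```python
# Toy illustration of subadditivity in support theory. Judged probability is
# modeled as P(A, B) = s(A) / (s(A) + s(B)), where s() is the support a
# description evokes. The support values below are hypothetical.

def judged_probability(support_for, support_against):
    return support_for / (support_for + support_against)

support = {
    "natural causes (packed)": 40,   # hypothetical support values
    "heart disease": 25,
    "cancer": 20,
    "other natural cause": 15,
    "unnatural causes": 30,
}

s_unpacked = (support["heart disease"] + support["cancer"]
              + support["other natural cause"])
packed = judged_probability(support["natural causes (packed)"],
                            support["unnatural causes"])
unpacked = judged_probability(s_unpacked, support["unnatural causes"])

print(packed < unpacked)  # True: unpacking raises the judged probability
```

Because the unpacked components together evoke more support (60) than the packed description (40), the same event is judged more probable when spelled out, exactly the pattern in the "natural causes" example.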

Nearly all of Tversky's research touched on the enduring question of the relative contribution of the heart and the mind to human irrationality. Cognitive scientists tend to approach this issue by emphasizing the mind and trying to determine how much of the fallibility of human reason can be explained in purely cognitive terms. Although Tversky would hardly deny that people's wants and passions often get the better of them (indeed, he did influential research on that topic too), much of his work demonstrates that many of our most egregious, most interesting, and most predictable mistakes are indeed entirely cognitive. His research makes it clear that many of our erroneous judgments and problematic decisions are the product, in his words, of "illusions, not delusions."

-- Thomas Gilovich


Bell, D. E., H. Raiffa, and A. Tversky, Eds. (1988). Decision Making: Descriptive, Normative, and Prescriptive Interactions. New York: Cambridge University Press.

Gilovich, T., R. P. Vallone, and A. Tversky. (1985). The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology 17:295-314.

Griffin, D., and A. Tversky. (1992). The weighing of evidence and the determinants of confidence. Cognitive Psychology 24:411-435.

Kahneman, D., and A. Tversky. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology 3:430-454.

Kahneman, D., and A. Tversky. (1973). On the psychology of prediction. Psychological Review 80:237-251.

Kahneman, D., and A. Tversky. (1979). Prospect theory: An analysis of decision under risk. Econometrica 47:263-291.

Kahneman, D., and A. Tversky. (1996). On the reality of cognitive illusions. Psychological Review 103:582-591.

Kahneman, D., P. Slovic, and A. Tversky, Eds. (1982). Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Tversky, A. (1969). Intransitivity of preferences. Psychological Review 76:31-48.

Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review 79:281-299.

Tversky, A. (1977). Features of similarity. Psychological Review 84:327-352.

Tversky, A., and D. Kahneman. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5:207-232.

Tversky, A., and D. Kahneman. (1974). Judgment under uncertainty: Heuristics and biases. Science 185:1124-1131.

Tversky, A., and D. Kahneman. (1981). The framing of decisions and the psychology of choice. Science 211:453-458.

Tversky, A., and D. Kahneman. (1983). Extensional vs. intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review 90:293-315.

Tversky, A., and D. Kahneman. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty 5:297-323.

Tversky, A., and D. J. Koehler. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review 101:547-567.

Tversky, A., and S. Sattath. (1979). Preference trees. Psychological Review 86:542-573.

Tversky, A., S. Sattath, and P. Slovic. (1988). Contingent weighting in judgment and choice. Psychological Review 95:371-384.