People sometimes need to know quantities that they can neither look up nor calculate. Those quantities might include the probability of rain, the size of a crowd, the future price of a stock, or the time needed to complete an assignment. One coping strategy is to use a heuristic, or rule of thumb, to produce an approximate answer. That answer might be used directly, as the best estimate of the desired quantity, or adjusted for suspected biases. Insofar as heuristics are, by definition, imperfect rules, it is essential to know how much confidence to place in them.
Heuristics are common practice in many domains. For example, skilled tradespeople have rules for bidding contracts, arbitrageurs have ones for making deals, and operations researchers have ones for predicting the behavior of complex processes. These rules may be more or less explicit; they may be computed on paper or in the head. The errors that they produce constitute the associated "bias."
Heuristics attained prominence in cognitive psychology through a series of seminal articles by Amos TVERSKY and Daniel Kahneman, then at the Hebrew University of Jerusalem (e.g., Tversky and Kahneman 1974). They observed that judgments under conditions of uncertainty often call for heuristic solutions. The precise answers are unknown or inaccessible. People lack the training needed to compute appropriate estimates. Even those with training may not have the intuitions needed to apply their textbook learning outside of textbook situations.
The first of these articles (Tversky and Kahneman 1971) proposed that people expect future observations of uncertain processes to be much like past ones, even when they have few past observations to rely on. Such people might be said to apply a "law of small numbers," which captures some properties of the statistical "law of large numbers" but is insufficiently sensitive to sample size. The heuristic of expecting past observations to predict future ones is useful but leads to predictable problems, unless one happens to have a large sample. Tversky and Kahneman demonstrated these problems, with quantitative psychologists as subjects. For example, their scientist subjects overestimated the probability that small samples would affirm research hypotheses, leading them to propose study designs with surprisingly low statistical power. Of course, well-trained scientists can calculate the correct value for power analyses. However, to do so, they must realize that their heuristic judgment is faulty. Systematic reviews have found a high rate of published studies with low statistical power, suggesting that practicing scientists often lack this intuition (Cohen 1962).
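To make the power point concrete, here is a minimal simulation sketch, not taken from the original studies: the effect size, group size, and alpha level below are illustrative assumptions. With a medium true effect and twenty observations per group, a conventional two-sample test detects the effect only about a third of the time, far less often than the "law of small numbers" would suggest.

```python
# A minimal simulation sketch; effect size, group size, and alpha are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effect_size = 0.5    # assumed true mean difference, in standard-deviation units
n_per_group = 20     # a "small numbers" design
alpha = 0.05
n_sims = 10_000

rejections = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(effect_size, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    rejections += p_value < alpha

print(f"Estimated power: {rejections / n_sims:.2f}")  # roughly 0.34: the effect is missed about two times in three
```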
Kahneman and Tversky (1972) subsequently subsumed this tendency under the more general representativeness heuristic. Users of this rule assess the likelihood of an event by how well it captures the salient properties of the process producing it. Although sometimes useful, this heuristic will produce biases whenever features that determine likelihood are insufficiently salient (or when irrelevant features capture people's attention). As a result, predicting the behavior of people relying on representativeness requires both a substantive understanding of how they judge salience and a normative understanding of what features really matter. Bias arises when the two are misaligned, or when people apply appropriate rules ineffectively.
Sample size is one normatively relevant feature that tends to be neglected. A second is the population frequency of a behavior, when making predictions for a specific individual. People feel that the observed properties of the individual (sometimes called "individuating" or "case-specific" information) need to be represented in predictions about future events, even when those observations are not very robust (e.g., drawn from a small sample or an unreliable source), and so the population frequency gets too little weight.
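A worked example with Bayes' rule, using hypothetical numbers chosen only for illustration, shows why the population frequency still matters even when a case-specific cue seems highly representative of the predicted behavior.

```python
# Hypothetical numbers (none come from the article), combined by Bayes' rule.
base_rate = 0.10         # assumed population frequency of the behavior
p_cue_given_yes = 0.80   # assumed probability of the cue when the behavior occurs
p_cue_given_no = 0.30    # assumed probability of the cue when it does not

posterior = (p_cue_given_yes * base_rate) / (
    p_cue_given_yes * base_rate + p_cue_given_no * (1 - base_rate)
)
print(f"P(behavior | cue) = {posterior:.2f}")  # about 0.23, well below what the cue alone suggests
```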
Bias can also arise when normatively relevant features are recognized but misunderstood. Thus, people know that random processes should show variability, but expect too much of it. One familiar expression is the "gambler's fallacy," leading people to expect, say, a "head" coin flip after four tails, but not after four alternations of head-tail. An engaging example is the unwarranted perception that basketball players have a "hot hand," caused by not realizing how often such (nonrandom-looking) streaks arise by chance (Gilovich, Vallone, and Tversky 1985). In a sense, representativeness is a metaheuristic, a very general rule from which more specific ones are derived for particular situations. As a result, researchers need to predict how a heuristic will be used in order to generate testable predictions for people's judgments. Where those predictions fail, it may be that the heuristic was not used at all or that it was not used in that particular way.
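A short simulation sketch, with an assumed sequence length and hit probability, illustrates the point about streaks: runs that look "hot" arise routinely in sequences that are random by construction.

```python
# Assumed parameters: 20 attempts per sequence, a constant 50 percent hit rate,
# and no dependence between attempts.
import numpy as np

rng = np.random.default_rng(1)
n_sequences = 10_000
shots_per_sequence = 20
p_hit = 0.5

def longest_run(outcomes):
    """Length of the longest run of identical outcomes in a sequence."""
    best = run = 0
    previous = None
    for outcome in outcomes:
        run = run + 1 if outcome == previous else 1
        previous = outcome
        best = max(best, run)
    return best

sequences = rng.random((n_sequences, shots_per_sequence)) < p_hit
share = np.mean([longest_run(seq) >= 4 for seq in sequences])
print(f"Random sequences containing a run of 4 or more: {share:.2f}")  # about 0.77
```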
Two other (meta)heuristics are availability and anchoring and adjustment. Reliance on availability means judging an event as likely to the extent that one can remember examples or imagine it happening. It can lead one astray when instances of an event are disproportionately (un)available in MEMORY. Reliance on anchoring and adjustment means estimating a quantity by thinking of why it might be larger or smaller than some initial value. Typically, people adjust too little, leaving them unduly "anchored" in that initial value, however arbitrarily it has been selected. Obviously, there are many ways in which examples can be produced, anchors selected, and adjustments made. The better these processes are understood, the sharper the predictions that can be made for heuristic-based judgments.
These seminal papers have produced a large research literature (Kahneman, Slovic, and Tversky 1982). Their influence can be traced to several converging factors (Dawes 1997; Jungermann 1983), including: (1) The initial demonstrations have proven quite robust, facilitating replications in new domains and the exploration of boundary conditions (e.g., Plous 1993). (2) The effects can be described in piquant ways, which present readers and investigators in a flattering light (able to catch others making mistakes). (3) The perspective fits the cognitive revolution's subtext of tracing human failures to unintended side effects of generally adaptive processes. (4) The heuristics operationalize Simon's (1957) notions of BOUNDED RATIONALITY in ways subject to experimental manipulation.
The heuristics-and-biases metaphor also provides an organizing theme for the broader literature on failures of human DECISION MAKING. For example, many studies have found people to be insensitive to the extent of their own knowledge (Keren 1991; Yates 1990). When this trend emerges as overconfidence, one contributor is the tendency to look for reasons supporting favored beliefs (Koriat, Lichtenstein, and Fischhoff 1980). Although that search is a sensible part of hypothesis testing, it can produce bias when done without a complementary sensitivity to disconfirming evidence (Fischhoff and Beyth-Marom 1983). Other studies have examined hindsight bias, the tendency to exaggerate the predictability of past events (or reported facts; Fischhoff 1975). One apparent source of that bias is automatically making sense of new information as it arrives. Such rapid updating should facilitate learning -- at the price of obscuring how much has been learned. Underestimating what one had to learn may mean underestimating what one still has to learn, thereby promoting overconfidence.
Scientists working within this tradition have, naturally, worried about the generality of these behavioral patterns. One central concern has been whether laboratory results extend to high-stakes decisions, especially ones with experts working on familiar tasks. Unfortunately, it is not that easy to provide significant positive stakes (or threaten significant losses) or to create appropriate tasks for experts. Those studies that have been conducted suggest that stakes alone neither eliminate bias nor lead people, even experts, to abandon faulty judgments (Camerer 1995).
In addition to experimental evidence, there are anecdotal reports and systematic observations of real-world expert performance showing biases that can be attributed to using heuristics (Gilovich 1991; Mowen 1993). For example, overconfidence has been observed in the confidence assessments of particle physicists, demographers, and economists (Henrion and Fischhoff 1986). A noteworthy exception is weather forecasters, whose assessments of the probability of precipitation are remarkably accurate (e.g., it rains 70 percent of the times that they forecast a 70 percent chance of rain; Murphy and Winkler 1992). These experts make many judgments under conditions conducive to LEARNING: prompt, unambiguous feedback that rewards them for accuracy (rather than, say, for bravado or hedging). Thus, probability judgment may be a learnable cognitive skill. That process may involve using conventional heuristics more effectively or acquiring better ones.
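The calibration check described for weather forecasters can be sketched in a few lines. The data below are simulated rather than actual forecaster records, and the forecast categories are assumptions; the simulated forecaster is well calibrated by construction, so observed frequencies track the stated probabilities, the pattern Murphy and Winkler report for precipitation forecasts.

```python
# Calibration sketch on simulated forecasts (not actual forecaster records).
import numpy as np

rng = np.random.default_rng(2)
stated = rng.choice([0.1, 0.3, 0.5, 0.7, 0.9], size=5_000)  # stated probability of rain
rained = rng.random(stated.size) < stated                   # outcomes from a well-calibrated forecaster, by construction

for p in np.unique(stated):
    outcomes = rained[stated == p]
    print(f"forecast {p:.0%}: rained {outcomes.mean():.0%} of {outcomes.size} times")
```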
Given the applied interests of decision-making researchers, other explorations of the boundary conditions on suboptimal performance have focused on practical procedures for reducing bias. Given the variety of biases and potential interventions, no simple summary can be comprehensive (Kahneman, Slovic, and Tversky 1982; von Winterfeldt and Edwards 1986). One general trend is that merely warning about bias is not very useful. Nor is teaching statistics, unless direct contact can be made with people's intuitions. Making such contact requires an understanding of natural thought processes and plausible alternative ones. As a result, the practical goal of debiasing has fostered interest in basic cognitive processes, in areas such as reasoning, memory, METACOGNITION, and PSYCHOPHYSICS (Nisbett 1993; Svenson 1996; Tversky and Koehler 1994). For example, reliance on availability depends on how people encode and retrieve experiences; any quantitative judgment may draw on general strategies for extracting hints at the right answer from the details of experimental (or real-world) settings (Poulton 1995).
Camerer, C. (1995). Individual decision making. In J. Kagel and A. Roth, Eds., The Handbook of Experimental Economics. Princeton, NJ: Princeton University Press.
Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology 65:145-153.
Dawes, R. M. (1997). Behavioral decision making, judgment, and inference. In D. Gilbert, S. Fiske, and G. Lindzey, Eds., The Handbook of Social Psychology. Boston, MA: McGraw-Hill, pp. 497-548.
Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance 1:288-299.
Fischhoff, B., and R. Beyth-Marom. (1983). Hypothesis evaluation from a Bayesian perspective. Psychological Review 90:239-260.
Gilovich, T. (1991). How We Know What Isn't So. New York: Free Press.
Gilovich, T., R. Vallone, and A. Tversky. (1985). The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology 17:295-314.
Henrion, M., and B. Fischhoff. (1986). Assessing uncertainty in physical constants. American Journal of Physics 54:791-798.
Jungermann, H. (1983). The two camps on rationality. In R. W. Scholz, Ed., Decision Making Under Uncertainty. Amsterdam: Elsevier, pp. 63-86.
Kahneman, D., and A. Tversky. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology 3:430-454.
Kahneman, D., P. Slovic, and A. Tversky, Eds. (1982). Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Keren, G. (1991). Calibration and probability judgment. Acta Psychologica 77:217-273.
Koriat, A., S. Lichtenstein, and B. Fischhoff. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory 6:107-118.
Mowen, J. C. (1993). Judgment Calls. New York: Simon and Schuster.
Murphy, A. H., and R. L. Winkler. (1992). Diagnostic verification of probability forecasts. International Journal of Forecasting 7:435-455.
Nisbett, R., Ed. (1993). Rules for Reasoning. Hillsdale, NJ: Erlbaum.
Plous, S. (1993). The Psychology of Judgment and Decision Making. New York: McGraw Hill.
Poulton, E. C. (1995). Behavioral Decision Making. Hillsdale, NJ: Erlbaum.
Simon, H. (1957). Models of Man: Social and Rational. New York: Wiley.
Svenson, O. (1996). Decision making and the search for fundamental psychological regularities. Organizational Behavior and Human Decision Processes 65:252-267.
Tversky, A., and D. Kahneman. (1971). Belief in the "law of small numbers." Psychological Bulletin 76:105-110.
Tversky, A., and D. Kahneman. (1974). Judgment under uncertainty: Heuristics and biases. Science 185:1124-1131.
Tversky, A., and D. J. Koehler. (1994). Support theory. Psychological Review 101:547-567.
Yates, J. F. (1990). Judgment and Decision Making. New York: Wiley.
von Winterfeldt, D., and W. Edwards. (1986). Decision Analysis and Behavioral Research. New York: Cambridge University Press.