Probabilistic Reasoning

Probabilistic reasoning is the formation of probability judgments and of subjective beliefs about the likelihoods of outcomes and the frequencies of events. The judgments that people make are often about things that are only indirectly observable and only partly predictable. Whether it is the weather, a sporting event, a project at work, or a new marriage, our willingness to engage in an endeavor and the actions that we take depend on our estimated likelihood of the relevant outcomes. How likely is our team to win? How frequently have projects like this failed before? And what is likely to improve those chances?

Like other areas of reasoning and decision making, the study of probabilistic reasoning lends itself to normative, descriptive, and prescriptive approaches. The normative approach to probabilistic reasoning is constrained by the same mathematical rules that govern the classical, set-theoretic conception of probability. In particular, probability judgments are said to be "coherent" if and only if they satisfy conditions commonly known as Kolmogorov's axioms: (1) No probabilities are negative. (2) The probability of a tautology is 1. (3) The probability of a disjunction of two logically exclusive statements equals the sum of their respective probabilities. And (4), the probability of a conjunction of two statements equals the probability of the first conditional on the second, times the probability of the second. Whereas the first three axioms involve unconditional probabilities, the fourth introduces conditional probabilities. When applied to hypotheses and data in inferential contexts, simple arithmetic manipulation of rule (4) yields the result that the (posterior) probability of a hypothesis conditional on the data is equal to the probability of the data conditional on the hypothesis times the (prior) probability of the hypothesis, all divided by the probability of the data. Although mathematically trivial, this result is of central importance to so-called Bayesian inference, which underlies theories of belief updating and is considered by many to be a normative requirement of probabilistic reasoning (see BAYESIAN NETWORKS and INDUCTION).
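
In standard notation, the axioms and the derivation of Bayes' rule read as follows (a conventional formulation, not specific to this entry):

```latex
% Kolmogorov's axioms for a probability function P
\begin{align*}
 &\text{(1)}\quad P(A) \ge 0\\
 &\text{(2)}\quad P(\top) = 1\\
 &\text{(3)}\quad P(A \lor B) = P(A) + P(B)
    \quad\text{when $A$ and $B$ are mutually exclusive}\\
 &\text{(4)}\quad P(A \land B) = P(A \mid B)\,P(B)
\end{align*}
% Applying (4) in both directions to a hypothesis H and data D gives
% P(H \mid D)P(D) = P(H \land D) = P(D \mid H)P(H); dividing by P(D)
% yields Bayes' rule:
\[
 P(H \mid D) \;=\; \frac{P(D \mid H)\,P(H)}{P(D)}.
\]
```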

There are at least two distinct philosophical conceptions of probability. According to one, probabilities refer to the relative frequencies of objective physical events in repeated trials; according to the other, probabilities are epistemic in nature, expressing degrees of belief in specific hypotheses. While this distinction is beyond the scope of the current entry, it bears on an ongoing debate concerning the status and interpretation of some experimental findings (see, e.g., Cosmides and Tooby 1996; Gigerenzer 1994, 1996; Kahneman and Tversky 1996). Notably, however, both conceptions are arguably constrained by the same mathematical axioms above. Adherence to these axioms suffices to ensure that probability judgment is coherent. Conversely, incoherent judgment entails holding contradictory beliefs and leaves the person open to possible "Dutch books." These consist of a set of probability judgments that, when translated into bets that the person deems fair, create a set of gambles that the person is bound to lose no matter how things turn out (Osherson 1995; Resnik 1987; see also DECISION MAKING).
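
A minimal sketch, with invented numbers, of how a Dutch book exploits incoherence: a judge who assigns probability .60 both to an event and to its complement deems fair a pair of bets that together lose money however things turn out.

```python
# Hypothetical Dutch-book illustration (numbers invented for this sketch).
# The judge assigns P(A) = 0.6 and P(not-A) = 0.6, so the two probabilities
# sum to 1.2 > 1, violating the additivity axiom.

p_A = 0.6        # judged probability that A occurs
p_not_A = 0.6    # judged probability that A does not occur (incoherent)
stake = 1.0      # each bet pays this amount if it wins

# The judge deems a bet "fair" at a price of probability * stake,
# so a bookie can sell both bets: one on A, one on not-A.
cost = (p_A + p_not_A) * stake   # total price the judge willingly pays: 1.20

for a_occurs in (True, False):
    payoff = stake               # exactly one of the two bets pays off
    print(f"A={a_occurs}: paid {cost:.2f}, received {payoff:.2f}, "
          f"net {payoff - cost:+.2f}")
# Net is -0.20 in every state of the world: a guaranteed loss.
```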

Note that coherent judgment merely satisfies a number of logical, set-theoretic requirements. It does not ensure that judgment is correct or even "well calibrated." Thus, a person whose judgment is coherent may nonetheless be quite foolish, believing, for example, that there is a great likelihood that he or she will soon be the king of France. Normative probabilistic judgment needs to be not only coherent but also well calibrated. Consider a set of propositions each of which a person judges to be true with a probability of .90. If she is right about 90 percent of these, then she is said to be well calibrated. If she is right about less than 90 percent of them, she is said to be overconfident; if about more than 90 percent, underconfident.
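
Calibration, unlike coherence, can be checked against outcomes. A minimal sketch in Python, with hypothetical judgment records, groups judgments by stated probability and compares each group's hit rate:

```python
# Hypothetical data: each record pairs a stated probability with
# whether the judged claim turned out to be true.
from collections import defaultdict

judgments = [(0.9, True), (0.9, True), (0.9, False), (0.9, True),
             (0.6, True), (0.6, True), (0.6, True), (0.6, False)]

by_level = defaultdict(list)
for stated, correct in judgments:
    by_level[stated].append(correct)

for stated, outcomes in sorted(by_level.items()):
    hit_rate = sum(outcomes) / len(outcomes)   # fraction actually true
    if hit_rate < stated:
        verdict = "overconfident"
    elif hit_rate > stated:
        verdict = "underconfident"
    else:
        verdict = "well calibrated"
    print(f"stated {stated:.2f} -> actual {hit_rate:.2f} ({verdict})")
```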

A great deal of empirical work has documented systematic discrepancies between the normative requirements of probabilistic reasoning and the ways in which people reason about chance. In settings where the relevance of simple probabilistic rules is made transparent, subjects often reveal appropriate statistical intuitions. Thus, for example, when a sealed description is pulled at random out of an urn that is known to contain the descriptions of thirty lawyers and seventy engineers, people estimate the probability that the description belongs to a lawyer at .30. In richer contexts, however, people often rely on less formal considerations emanating from intuitive JUDGMENT HEURISTICS, and these can generate judgments that conflict with normative requirements. For example, when a randomly sampled description from the urn sounds like that of a lawyer, subjects' probability estimates typically rely too heavily on how representative the description is of a lawyer and too little on the (low) prior probability that it in fact belongs to a lawyer.
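
The normative force of the base rate is easy to exhibit with Bayes' rule. In the sketch below, the 30/70 split comes from the urn example above; the likelihoods of a lawyer-sounding description are invented for illustration:

```python
# Base-rate example for the lawyer/engineer urn. The 30/70 priors are
# from the entry; the two likelihoods are assumptions for this sketch.
p_lawyer = 0.30                # prior: 30 lawyer descriptions out of 100
p_engineer = 0.70              # prior: 70 engineer descriptions out of 100
p_desc_given_lawyer = 0.90     # assumed: description sounds very lawyer-like
p_desc_given_engineer = 0.30   # assumed: some engineers fit it too

# Bayes' rule: posterior probability that the description is a lawyer's.
p_desc = (p_desc_given_lawyer * p_lawyer +
          p_desc_given_engineer * p_engineer)
p_lawyer_given_desc = p_desc_given_lawyer * p_lawyer / p_desc

print(f"P(lawyer | description) = {p_lawyer_given_desc:.2f}")
# ~0.56: far lower than the near-certainty that representativeness suggests.
```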

According to the representativeness heuristic, the likelihood that observation A belongs to class B is evaluated by the degree to which A resembles B. Sample sizes and prior odds, both of which are highly relevant to likelihood, do not affect how representative an observation appears and thus tend to be relatively neglected. In general, the notion that people focus on the strength of the evidence (e.g., the warmth of a letter of reference) with insufficient regard for its weight (e.g., how well the writer knows the candidate) can explain various systematic biases in probabilistic judgment (Griffin and Tversky 1992), including the failure to appreciate regression phenomena and the fact that people are generally overconfident (when the evidence is remarkable but its weight is low) and occasionally underconfident (when the evidence is unremarkable but its reliability is high). Probability judgments based on the support, or strength of evidence, of the focal relative to alternative hypotheses form part of a theory of subjective probability called support theory. According to support theory, which has received substantial empirical validation, unpacking a description of an event into disjoint components generally increases its support and, hence, its perceived likelihood. As a result, different descriptions of the same event can give rise to different judgments (Rottenstreich and Tversky 1997; Tversky and Koehler 1994).
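
Support theory's basic equation sets the judged probability of a focal hypothesis A against an alternative B at P(A, B) = s(A) / (s(A) + s(B)), where s measures the support of the evidence for each hypothesis. A schematic sketch, with support values invented for illustration, shows how unpacking raises judged probability:

```python
# Schematic illustration of support theory's unpacking effect
# (all support values below are invented for this sketch).
def judged_prob(s_focal, s_alternative):
    """Support theory: P(A, B) = s(A) / (s(A) + s(B))."""
    return s_focal / (s_focal + s_alternative)

s_packed = 2.0          # support for an implicit event, e.g. "natural causes"
s_unpacked = 1.2 + 1.1  # support recruited when the same event is unpacked
                        # into disjoint components (e.g. "heart disease,
                        # cancer, or other natural causes")
s_alt = 3.0             # support for the alternative hypothesis

print(judged_prob(s_packed, s_alt))    # ~0.40
print(judged_prob(s_unpacked, s_alt))  # ~0.43: same event, higher judged
                                       # probability under the unpacked description
```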

Probability judgments often rely on sets of attributes -- for example, a prospective applicant's exam scores, relevant experience, and letters of recommendation -- which need to be combined into a single rating, say, likelihood of success at a job. Because people have poor insight into how much weight to assign to each attribute, they are typically quite poor at combining attributes to yield a final judgment. Much research has been devoted to the tension between intuitive ("clinical") judgment and the greater predictive success obtained by linear models of the human judge (Meehl 1954; Dawes 1979; Dawes and Corrigan 1974; Hammond 1955). In fact, it has been repeatedly shown that a linear combination of attributes, based, for example, on a judge's past probability ratings, does better in predicting future (as well as previous) instances than the judge on whom these ratings are based. This bootstrapping method takes advantage of the person's insights captured across numerous ratings, and improves on any single rating, where less-than-ideal weightings of attributes may intrude. Moreover, because attributes are often highly correlated and systematically misperceived, even a unit (equal) assignment of weights, not tailored to the individual judge, can often outperform the human judge (Dawes 1988).
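
A minimal sketch of the bootstrapping idea, with invented data: fit a linear model to a judge's past ratings, then let the model, rather than the judge's one-off intuition, rate new cases.

```python
# Bootstrapping-the-judge sketch (all numbers invented for illustration).
import numpy as np

# Past cases: standardized attribute scores (exam, experience, letters)
# and the judge's probability-of-success ratings for each case.
X = np.array([[0.8, 0.2, 0.5],
              [0.1, 0.9, 0.4],
              [0.6, 0.7, 0.9],
              [0.3, 0.1, 0.2],
              [0.9, 0.8, 0.7]])
y = np.array([0.70, 0.55, 0.85, 0.25, 0.90])

# Fit a linear model of the judge's past ratings by least squares.
X1 = np.column_stack([X, np.ones(len(X))])        # add an intercept column
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Rate a new applicant with the fitted model.
new_applicant = np.array([0.5, 0.6, 0.8])
model_rating = np.append(new_applicant, 1.0) @ weights

# Dawes's related point: even unit (equal) weights on standardized
# attributes often do nearly as well as fitted ones.
unit_rating = new_applicant.mean()

print(f"model rating: {model_rating:.2f}; unit-weight rating: {unit_rating:.2f}")
```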

While human intuition can be a useful guide to the likelihoods of events, it often exhibits instances of incoherence, in the sense defined above. Methods have been explored that extract from a person's judgments a coherent core that is maximally consistent with those judgments and that comes closer to the observed likelihoods than do the original (incoherent) judgments (Osherson, Shafir, and Smith 1994; Pearl 1988). Probabilistic reasoning occurs in complex situations, with numerous variables and interactions influencing the likelihood of events. In these situations, people's judgments often violate basic normative rules. At the same time, people can exhibit sensitivity to, and appreciation for, the normative principles. The coexistence of fallible intuitions with an underlying appreciation for normative judgment yields a subtle picture of probabilistic reasoning, and interesting possibilities for a prescriptive approach. In this vein, a large literature on expert systems has provided analyses and applications.
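
The flavor of such methods can be conveyed with a toy least-squares version (an illustrative assumption, not the actual procedure of Osherson, Shafir, and Smith 1994): find the probability distribution whose implied judgments are closest to the stated, incoherent ones.

```python
# Toy coherent-core extraction (formulation assumed for this sketch).
import numpy as np
from scipy.optimize import minimize

# Incoherent stated judgments: P(A and B) > P(A), which no probability
# distribution can satisfy.
stated = {"A": 0.40, "B": 0.60, "A_and_B": 0.50}

# Parameterize a distribution over four atoms: A&B, A&~B, ~A&B, ~A&~B.
def loss(q):
    p = np.exp(q) / np.exp(q).sum()       # softmax keeps p a distribution
    p_A, p_B, p_AB = p[0] + p[1], p[0] + p[2], p[0]
    return ((p_A - stated["A"]) ** 2 +
            (p_B - stated["B"]) ** 2 +
            (p_AB - stated["A_and_B"]) ** 2)

res = minimize(loss, x0=np.zeros(4))
p = np.exp(res.x) / np.exp(res.x).sum()
print(f"coherent core: P(A)={p[0] + p[1]:.2f}, "
      f"P(B)={p[0] + p[2]:.2f}, P(A and B)={p[0]:.2f}")
```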

-- Eldar Shafir

References

Cosmides, L., and J. Tooby. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition 58:1-73.

Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist 34:571-582.

Dawes, R. M. (1988). Rational Choice in an Uncertain World. New York: Harcourt Brace Jovanovich.

Dawes, R. M., and B. Corrigan. (1974). Linear models in decision making. Psychological Bulletin 81:97-106.

Gigerenzer, G. (1994). Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa). In G. Wright and P. Ayton, Eds., Subjective Probability. New York: Wiley.

Gigerenzer, G. (1996). On narrow norms and vague heuristics: A rebuttal to Kahneman and Tversky (1996). Psychological Review 103:592-596.

Griffin, D., and A. Tversky. (1992). The weighing of evidence and the determinants of confidence. Cognitive Psychology 24:411-435.

Hammond, K. R. (1955). Probabilistic functioning and the clinical method. Psychological Review 62:255-262.

Kahneman, D., and A. Tversky. (1996). On the reality of cognitive illusions. Psychological Review 103:582-591.

Meehl, P. E. (1954). Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Minneapolis: University of Minnesota Press.

Osherson, D. N. (1995). Probability judgment. In E. E. Smith and D. N. Osherson, Eds., An Invitation to Cognitive Science. 2nd ed. Cambridge, MA: MIT Press.

Osherson, D. N., E. Shafir, and E. E. Smith. (1994). Extracting the coherent core of human probability judgment. Cognition 50:299-313.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Kaufmann.

Resnik, M. D. (1987). Choices: An Introduction to Decision Theory. Minneapolis: University of Minnesota Press.

Rottenstreich, Y., and A. Tversky. (1997). Unpacking, repacking, and anchoring: Advances in support theory. Psychological Review 104:406-415.

Tversky, A., and D. Kahneman. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review 90:293-315.

Tversky, A., and D. J. Koehler. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review 101:547-567.

Yates, J. F. (1990). Judgment and Decision Making. Englewood Cliffs, NJ: Prentice-Hall.

Further Readings

Arkes, H. R., and K. R. Hammond. (1986). Judgment and Decision Making: An Interdisciplinary Reader. Cambridge: Cambridge University Press.

Goldstein, W. M., and R. M. Hogarth. (1997). Research on Judgment and Decision Making: Currents, Connections and Controversies. Cambridge: Cambridge University Press.

Hacking, I. (1975). The Emergence of Probability. Cambridge: Cambridge University Press.

Heath, C., and A. Tversky. (1990). Preference and belief: Ambiguity and competence in choice under uncertainty. Journal of Risk and Uncertainty 4(1):5-28.

Howson, C., and P. Urbach. (1989). Scientific Reasoning: The Bayesian Approach. La Salle, IL: Open Court Publishers.

Kahneman, D., P. Slovic, and A. Tversky, Eds. (1982). Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Shafer, G., and J. Pearl, Eds. (1990). Readings in Uncertain Reasoning. San Mateo, CA: Kaufmann.

Skyrms, B. (1975). Choice and Chance. 2nd ed. Belmont, CA: Dickenson.

von Winterfeldt, D., and W. Edwards. (1986). Decision Analysis and Behavioral Research. Cambridge: Cambridge University Press.

Wu, G., and R. Gonzalez. (1996). Curvature of the probability weighting function. Management Science 42(12):1676-1690.