Bounded Rationality

Bounded rationality is rationality as exhibited by decision makers of limited abilities. The ideal of RATIONAL DECISION MAKING formalized in RATIONAL CHOICE THEORY, UTILITY THEORY, and the FOUNDATIONS OF PROBABILITY requires choosing so as to maximize a measure of expected utility that reflects a complete and consistent preference order and probability measure over all possible contingencies. This requirement appears too strong to permit accurate description of the behavior of realistic individual agents studied in economics, psychology, and artificial intelligence. Because rationality notions pervade approaches to so many other issues, finding more accurate theories of bounded rationality constitutes a central problem of these fields. Prospects appear poor for finding a single "right" theory of bounded rationality due to the many different ways of weakening the ideal requirements, some formal impossibility and tradeoff theorems, and the rich variety of psychological types observable in people, each with different strengths and limitations in reasoning abilities. Russell and Norvig's 1995 textbook provides a comprehensive survey of the roles of rationality and bounded rationality notions in artificial intelligence. Cherniak 1986 provides a philosophical introduction to the subject. Simon 1982 discusses numerous topics in economics; see Conlisk 1996 for a broad economic survey.
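
For orientation, the ideal can be stated in one common textbook form (the notation is illustrative rather than drawn from any one of the works cited): an agent with probability measure $p$ over states $S$ and utility function $u$ chooses an action from a set $A$ by

$$ a^* \in \arg\max_{a \in A} \; \sum_{s \in S} p(s)\, u(a, s), $$

where $p$ and the preference order that $u$ represents are assumed complete and consistent over all contingencies. Theories of bounded rationality weaken one or more pieces of this requirement.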

Studies in ECONOMICS AND COGNITIVE SCIENCE and of human DECISION MAKING document cases in which everyday and expert decision makers do not live up to the rational ideal (Kahneman, Slovic, and TVERSKY 1982; Machina 1987). The ideal maximization of expected utility implies a comprehensiveness at odds with observed failures to consider alternatives outside those suggested by the current situation. The ideal probability and utility distributions imply a degree of LOGICAL OMNISCIENCE that conflicts with observed inconsistencies in beliefs and valuations and with the frequent need to invent rationalizations and preferences to cover formerly unconceived circumstances. The theory of BAYESIAN LEARNING or conditionalization, commonly taken as the theory of belief change or learning appropriate to rational agents, conflicts with observed difficulties in assimilating new information, especially the resistance to changing cognitive habits.
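
In its simplest form (stated here only for orientation), conditionalization prescribes that on learning evidence $e$ with $p(e) > 0$, the agent's new degree of belief in every hypothesis $h$ becomes

$$ p_{\mathrm{new}}(h) \;=\; p(h \mid e) \;=\; \frac{p(h \wedge e)}{p(e)}, $$

a complete and immediate revision of all beliefs, and just the sort of wholesale updating that the observed difficulties suggest people do not perform.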

Reconciling the ideal theory with views of decision makers as performing computations also poses problems. Conducting the required optimizations at human rates using standard computational mechanisms, or indeed any physical system, seems impossible to some. The seemingly enormous information content of the required probability and utility distributions may make computational representations infeasible, even using BAYESIAN NETWORKS or other relatively efficient representations.
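
A simple count indicates the scale involved (the particular numbers are illustrative): a joint distribution over $n$ binary propositions requires $2^n - 1$ independent probabilities, while a Bayesian network whose nodes each have at most $k$ parents requires at most $n\,2^k$ conditional probabilities:

$$ \underbrace{2^n - 1}_{\text{full joint}} \qquad\text{versus}\qquad \sum_{i=1}^{n} 2^{|\mathrm{Pa}(X_i)|} \;\le\; n\,2^k . $$

For $n = 100$ and $k = 5$ this is roughly $10^{30}$ probabilities against at most $3200$, yet even the smaller figure must still be assessed or learned, which is one reason such representations can remain infeasible in practice.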

The search for realistic theories of rational behavior began by relaxing optimality requirements. Simon (1955) formulated the theory of "satisficing," in which decision makers seek only to find alternatives that are satisfactory in the sense of meeting some threshold or "aspiration level" of utility. This more general idea of meeting specified conditions rather than optimizing without bound also stimulated work on PROBLEM SOLVING, which replaces expected utility maximization with acting to satisfy sets of goals, each of which may be achieved or not. Simon (1976) also emphasized the distinction between "substantive" and "procedural" rationality, concerning, respectively, the rationality of the result and of the process by which the result was obtained, and set procedural rationality as a more feasible aim than substantive rationality. Good (1952, 1971) urged a related distinction in which "Type 1" rationality consists of the ordinary ideal notion and "Type 2" rationality consists of making ideal decisions while taking into account the cost of deliberation. The Simon and Good distinctions informed work in artificial intelligence on control of reasoning (Dean 1991), including explicit deliberation about the conduct of reasoning (Doyle 1980), economic decisions about reasoning (Horvitz 1987, Russell and Wefald 1991), and iterative approximation schemes or "anytime algorithms" (Horvitz 1987, Dean and Boddy 1988), in which optimization attempts are repeated with increasing amounts of time so as to provide an informed estimate of the optimal choice no matter when deliberation is terminated. Although reasoning about the course of reasoning may appear problematic, it can be organized to avoid crippling circularities (see METAREASONING) and admits theoretical reductions to nonreflective reasoning (Lipman 1991). One may also relax optimality by adjusting the scope of optimization as well as the process. Savage (1972) observed the practical need to formulate decisions in terms of "small worlds" abstracting the key elements, thus removing the most detailed alternatives from optimizations. The related "selective rationality" of Leibenstein (1980) and "bounded optimality" of Horvitz (1987) and Russell and Subramanian (1995) treat limitations stemming from optimization over circumscribed sets of alternatives.
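
As a minimal sketch of two of these ideas, the following Python fragment contrasts exhaustive maximization with satisficing against a fixed aspiration level, and adds a crude anytime loop; the function names, the toy utility, and the timing scheme are illustrative assumptions, not reconstructions of any cited system.

```python
import random
import time

def maximize(alternatives, utility):
    """The ideal: evaluate every alternative and return the best one."""
    return max(alternatives, key=utility)

def satisfice(alternatives, utility, aspiration):
    """Simon-style satisficing: examine alternatives in the order offered
    and stop at the first whose utility meets the aspiration level."""
    for a in alternatives:
        if utility(a) >= aspiration:
            return a
    return None  # aspiration unmet; a fuller model would lower it and retry

def anytime_choice(alternatives, utility, seconds):
    """One simple anytime scheme: always hold a best-so-far answer, so that
    whenever deliberation is cut off, an informed estimate is available."""
    deadline = time.monotonic() + seconds
    best, best_u = None, float("-inf")
    for a in alternatives:
        if time.monotonic() >= deadline:
            break
        u = utility(a)
        if u > best_u:
            best, best_u = a, u
    return best

options = [random.random() for _ in range(10_000)]
u = lambda x: x                          # toy utility: the option's own value
print(maximize(options, u))              # exhaustive evaluation
print(satisfice(options, u, 0.95))       # first option scoring at least 0.95
print(anytime_choice(options, u, 1e-3))  # best option found within ~1 ms
```

The anytime loop returns its best-so-far answer whenever time expires, so the quality of its estimate improves monotonically with the deliberation time allowed.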

Lessening informational requirements constitutes one important form of procedural rationality. Goal-directed problem solving and small-world formulations do this directly by basing actions on highly incomplete preferences and probabilities. The extreme incompleteness of the information represented in these approaches can prevent effective action, however, and so requires means for filling critical gaps in reasonable ways, including the various JUDGMENT HEURISTICS based on representativeness and other factors (Kahneman, Slovic, and TVERSKY 1982). Assessing the expected value of information forms one general approach to filling these gaps: one estimates the change in the utility of the decision that would stem from filling specific information gaps, and then acts to fill the gaps offering the largest expected gains. Such assessments may be made of policies as well as of specific actions. Applied to policies about how to reason, they form a basis for the nonmonotonic or default reasoning methods appearing in virtually all practical inference systems (formalized as various NONMONOTONIC LOGICS and theories of belief revision), which fill routine gaps in rational and plausible ways. Even when expected deliberative utility motivates use of a nonmonotonic rule for adopting or abandoning assumptions, such rules typically do not involve probabilistic or preferential information directly. They nevertheless admit natural interpretations either as statements of extremely high probability (infinitesimally close to 1), in effect licensing reasoning about the magnitudes of probabilities without requiring quantitative comparisons, or as expressions of preferences over the beliefs and other mental states of the agent, in effect treating reasoning as seeking mental states that are Pareto optimal with respect to the rules (Doyle 1994). Nonmonotonic reasoning methods also augment BAYESIAN LEARNING (conditionalization) with direct changes of mind, suggesting "conservative" approaches to reasoning that work through incremental adaptation to small changes, an approach seemingly better suited to exhibiting procedural rationality than the full and direct incorporation of new information called for by standard conditionalization.
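
The expected-value-of-information assessment sketched above can be illustrated for the simplest case, perfect information about a single uncertain state; the two-state decision below and all of its numbers are invented for the example.

```python
# Expected value of perfect information (EVPI) for a two-action, two-state
# decision. The weather scenario and all numbers are illustrative inventions.

priors = {"rain": 0.3, "sun": 0.7}
utility = {                      # utility[action][state]
    "umbrella":    {"rain": 1.0, "sun": 0.4},
    "no_umbrella": {"rain": 0.0, "sun": 1.0},
}

def expected_utility(action, beliefs):
    """Expected utility of an action under a probability distribution."""
    return sum(p * utility[action][s] for s, p in beliefs.items())

# Best the agent can do acting on its current beliefs alone.
eu_now = max(expected_utility(a, priors) for a in utility)

# Expected utility if the gap were filled first: learn the true state,
# then act optimally for that state, weighted by the prior over states.
eu_informed = sum(p * max(utility[a][s] for a in utility)
                  for s, p in priors.items())

evpi = eu_informed - eu_now      # never negative
print(f"EU now: {eu_now:.2f}  EU informed: {eu_informed:.2f}  EVPI: {evpi:.2f}")
```

Here the agent would give up to 0.30 units of utility to learn the weather before choosing; comparing such quantities across different gaps tells the agent which gap is worth filling first.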

Formal analogs of Arrow's impossibility theorem for social choice problems and multiattribute UTILITY THEORY limit the procedural rationality of approaches based on piecemeal representations of probability and preference information (Doyle and Wellman 1991). As such representations dominate practicable approaches, one expects any automatic method for handling inconsistencies amidst the probability and preference information to misbehave in some situations.

-- Jon Doyle

References

Cherniak, C. (1986). Minimal Rationality. Cambridge, MA: MIT Press.

Conlisk, J. (1996). Why bounded rationality? Journal of Economic Literature 34:669-700.

Dean, T. (1991). Decision-theoretic control of inference for time-critical applications. International Journal of Intelligent Systems 6(4):417-441.

Dean, T., and M. Boddy. (1988). An analysis of time-dependent planning. Proceedings of the Seventh National Conference on Artificial Intelligence, pp. 49-54.

Doyle, J. (1980). A model for deliberation, action, and introspection. Technical Report AI-TR 58. Cambridge, MA: MIT Artificial Intelligence Laboratory.

Doyle, J. (1994). Reasoned assumptions and rational psychology. Fundamenta Informaticae 20(1-3):35-73.

Doyle, J., and M. P. Wellman. (1991). Impediments to universal preference-based default theories. Artificial Intelligence 49(1-3):97-128.

Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society B 14:107-114.

Good, I. J. (1971). The probabilistic explication of information, evidence, surprise, causality, explanation, and utility. In V. P. Godambe and D. A. Sprott, Eds., Foundations of Statistical Inference. Toronto: Holt, Rinehart, and Winston, pp. 108-127.

Horvitz, E. J. (1987). Reasoning about beliefs and actions under computational resource constraints. Proceedings of the Third AAAI Workshop on Uncertainty in Artificial Intelligence, pp. 429-444.

Kahneman, D., P. Slovic, and A. Tversky, Eds. (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Leibenstein, H. (1980). Beyond Economic Man: A New Foundation for Microeconomics. 2nd ed. Cambridge, MA: Harvard University Press.

Lipman, B. L. (1991). How to decide how to decide how to . . . : modeling limited rationality. Econometrica 59(4):1105-1125.

Machina, M. J. (1987). Choice under uncertainty: problems solved and unsolved. Journal of Economic Perspectives 1(1):121-154.

Russell, S. J., and E. Wefald. (1991). Do the Right Thing: Studies in Limited Rationality. Cambridge, MA: MIT Press.

Russell, S. J., and P. Norvig. (1995). Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall.

Russell, S. J., and D. Subramanian. (1995). Provably bounded-optimal agents. Journal of Artificial Intelligence Research 2:575-609.

Savage, L. J. (1972). The Foundations of Statistics. 2nd ed. New York: Dover Publications.

Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics 69:99-118.

Simon, H. A. (1976). From substantive to procedural rationality. In S. J. Latsis, Ed., Method and Appraisal in Economics. Cambridge: Cambridge University Press, pp. 129-148.

Simon, H. A. (1982). Models of Bounded Rationality: Behavioral Economics and Business Organization, vol. 2. Cambridge, MA: MIT Press.