The theory of rational choice was developed within the discipline of economics by JOHN VON NEUMANN and Oskar Morgenstern (1947) and Leonard Savage (1954). Its roots reach back to Thomas Hobbes's denial that reason can fix our ends or desires (instrumental rationality) and to David HUME's relegation of reason to the role of "slave of the passions," with no motivating force of its own, via the utilitarians' definition of rationality as the maximization of "utility" and the neoclassical school of economics' theory of revealed preferences. Despite this lineage, rational choice theory (RCT) purports to be neutral with respect to any psychological assumptions or philosophy of mind, and in this respect its relevance for the cognitive sciences is problematic. However, its most recent developments have been marked by the discovery of paradoxes (Binmore 1987a, b; Campbell and Sowden 1985; Eells 1982; Gauthier 1988/89; Gibbard and Harper 1985; Kavka 1983; Lewis 1985; McClennen 1989; Nozick 1969; Rosenthal 1982) whose interpretation and resolution call for the return of the repressed: an explicit psychology of DECISION MAKING and a full-blown theory of mind. No wonder more and more cognitive scientists (philosophers, artificial intelligence specialists, psychologists) now participate, along with economists and game theorists, in the debates about RCT.
It is ironic that Savage's expected utility theory, in which most economists see the perfect embodiment of instrumental rationality, is a set of axioms, admittedly purely syntactic in nature, that constrain the rational agent's ends for the sake of consistency (see RATIONAL DECISION MAKING). For instance, her preferences must be transitive: if she prefers x to y and y to z, she must prefer x to z. And if she prefers x to y no matter what the state of the world turns out to be, she must prefer x to y even when ignorant of that state (the sure-thing principle). Savage proves that an agent whose preferences satisfy all the axioms of the theory chooses as if she were maximizing her expected utility while assigning subjective probabilities to the states of the world. This does not mean that her choices can be explained by her setting out to maximize her utility: it is tautological, by construction, that the utility of x is greater for her than that of y if she chooses x over y. The claim is rather that agents whose preferences were not consistent (i.e., violated the axioms) could not achieve the maximal satisfaction of their ends.
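In symbols, and under the simplifying assumption of a finite state space S (the notation p, u, and the preference relation below are ours, not Savage's original), the representation theorem says that there exist a subjective probability p over states and a utility function u over consequences such that, for any two acts x and y,

```latex
% Savage's representation theorem, sketched for a finite state space S
% (a simplifying assumption; Savage's own construction is more general):
% x is weakly preferred to y exactly when x has the greater
% subjective expected utility.
x \succsim y
\quad \Longleftrightarrow \quad
\sum_{s \in S} p(s)\, u\bigl(x(s)\bigr) \;\ge\; \sum_{s \in S} p(s)\, u\bigl(y(s)\bigr)
```

Because p and u are constructed out of the preference ordering itself, the representation licenses no psychological reading: it describes the choices, it does not explain them.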
This removal of all psychological content and motivational assumptions from the theory of utility is untenable. Consider the obvious possibility that preferences may change over time. Which of one's preferences should be subjected to the coherence constraints set by the theory? Only the occurrent ones, because future preferences are not motivationally efficacious now? Should we instead postulate second-order preferences that weigh future against occurrent first-order preferences? Or are there (noninstrumental) external reasons that will do the weighing? Dispensing with a theory of mind proves impossible (Hampton 1998; Hollis and Sugden 1993).
According to RCT, an act is an assignment of consequences to states of the world, and the description of a consequence must include no reference to how that consequence was brought about. The only legitimate motivations are forward-looking reasons: only the future matters. Using a piece of equipment just because one has invested a lot in it is deemed irrational (the "sunk cost fallacy"; see Nozick 1993). Experiments in cognitive psychology reveal that most of us commit that alleged fallacy most of the time, which shows that we care about consistency between past and present, perhaps for the sake of personal identity (we also violate Savage's axioms, especially the sure-thing principle; see Shafir and Tversky 1992; cf. also JUDGMENT HEURISTICS). Does that mean that we are irrational, or just that our minds work differently from what RCT, in spite of its proclaimed neutrality, presupposes?
When RCT is applied to a strategic setting, leading to GAME THEORY, some of its implications are plainly paradoxical. In an ideal world where all agents are rational and this fact is common knowledge (everyone knows it, knows that everyone knows it, etc.), rational behavior may be quite unreasonable: the agents are unable to cooperate in a finitely repeated prisoner's dilemma (Kreps and Wilson 1982); they do not make good on their promises when doing so goes against their current interest (the assurance game; see Bratman 1992); their threats are not credible (the chain-store paradox; Selten 1978); trust proves impossible (the centipede game and the backward induction paradox; Reny 1992; Pettit and Sugden 1989); and so on. A remarkable feature is that a small departure from complete transparency is enough to bring the rational back close to the reasonable. Imperfect or BOUNDED RATIONALITY would thus be what keeps the social world moving.
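The backward-induction argument behind the repeated prisoner's dilemma result can be made concrete in a few lines. The sketch below uses conventional stage-game payoffs (temptation 5, reward 3, punishment 1, sucker 0); these numbers are an illustrative assumption of ours, not taken from the works cited above.

```python
# Backward induction in the finitely repeated prisoner's dilemma:
# a minimal sketch with conventional (assumed) stage-game payoffs.

# My stage payoff for (my move, your move); "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

def dominant_move() -> str:
    """Defection strictly dominates cooperation in the stage game:
    it is the better reply to either move by the opponent."""
    assert all(PAYOFF[("D", other)] > PAYOFF[("C", other)] for other in "CD")
    return "D"

def backward_induction(rounds: int) -> list[str]:
    """Solve the repeated game from the last round backwards.

    In the last round there is no future to influence, so both rational
    players defect. Given that, cooperating in the next-to-last round
    cannot buy future cooperation either, and so on: defection unravels
    all the way back to round 1.
    """
    plan: list[str] = []
    for _ in reversed(range(rounds)):  # round n, then n-1, ..., then 1
        # Future play is already pinned down (all defect), so only the
        # stage payoff matters, and defection is the unique best reply.
        plan.append(dominant_move())
    return plan[::-1]

print(backward_induction(10))  # ['D', 'D', ..., 'D'] -- however long the game
```

As the entry notes, even a small departure from full transparency about the players' rationality (Kreps and Wilson 1982) breaks this unraveling and can restore cooperation for most of the game.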
Philosophers have recently taken up these paradoxes. Although their conclusions diverge, they make it clear that there is no way out without completing or amending RCT with theories of, among other things, rational planning and intention formation, belief revision, counterfactual and probabilistic reasoning in strategic settings, and even temporality and self-deception (Dupuy 1998). Some authors think it possible to ground a form of Kantian rationalism in such an expanded or revised RCT, so that to choose rationally entails choosing morally (Gauthier 1986).
Take the assurance game as an example. A mutually beneficial exchange is possible between you and me, but you have to take the first step, and I will then decide whether or not to reciprocate. Is my proclaimed intention to reciprocate a good enough assurance for you to engage in the deal, and can I rationally form this intention? Forming it has positive autonomous effects for me, independent of my carrying it out (it will provide an incentive for you to cooperate), and no cost. If it were an act of the will, it would be rational for me to form it, and we might be tempted to conclude that it would also be rational to execute it. However, some authors contend, one cannot will oneself to form an intention any more than a belief, and it is impossible to form the intention to do X if one knows that when the time comes it will be irrational to do it (Kavka 1983). Others maintain that it is possible to be "resolute" in this case, and rational not only to form the intention but to make good on it (McClennen 1989). Only a full-blown theory of mind can adjudicate between these two positions.
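The structure of the problem can be summarized as a two-stage trust game solved backwards; the payoffs below are illustrative assumptions of ours, chosen only so that the exchange is mutually beneficial and reneging is tempting.

```python
# The assurance game as a two-stage trust game, solved by backward
# induction: a minimal sketch with illustrative (assumed) payoffs.

# Outcomes as (your payoff, my payoff).
NO_DEAL = (0, 0)       # you decline the exchange
RECIPROCATE = (2, 2)   # you take the first step; I keep my word
RENEGE = (-1, 3)       # you take the first step; I exploit you

def my_choice() -> tuple[int, int]:
    # Once you have moved, only forward-looking reasons count for a
    # standard RCT agent, and reneging pays more: 3 > 2.
    return RENEGE if RENEGE[1] > RECIPROCATE[1] else RECIPROCATE

def your_choice() -> tuple[int, int]:
    # Anticipating my choice, you compare trusting with declining.
    outcome_if_trust = my_choice()
    return outcome_if_trust if outcome_if_trust[0] > NO_DEAL[0] else NO_DEAL

print(your_choice())  # (0, 0): the mutually beneficial deal never happens
```

My declared intention changes nothing in this calculation unless forming it could bind my future self, which is exactly what Kavka denies and what McClennen's "resolute choice" asserts.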
References

Binmore, K. (1987a). Modeling rational players: part 1. Economics and Philosophy 3:9-55.
Binmore, K. (1987b). Modeling rational players: part 2. Economics and Philosophy 4:179-214.
Bratman, M. (1992). Planning and the stability of intention. Minds and Machines 2:1-16.
Campbell, R., and L. Sowden, Eds. (1985). Paradoxes of Rationality and Cooperation: Prisoner's Dilemma and Newcomb's Problem. Vancouver: University of British Columbia Press.
Dupuy, J.-P. (1998). Rationality and self-deception. In J.-P. Dupuy, Ed., Self-Deception and Paradoxes of Rationality. Stanford: CSLI Publications, 113-150.
Eells, E. (1982). Rational Decision and Causality. Cambridge: Cambridge University Press.
Gauthier, D. (1986). Morals by Agreement. Oxford: Oxford University Press.
Gauthier, D. (1988/89). In the neighbourhood of the Newcomb-Predictor (Reflections on Rationality). Proceedings of the Aristotelian Society 89, part 3.
Gibbard, A., and W. Harper. (1985). Counterfactuals and two kinds of expected utility. In R. Campbell and L. Sowden, Eds., Paradoxes of Rationality and Cooperation: Prisoner's Dilemma and Newcomb's Problem. Vancouver: University of British Columbia Press, pp. 133-158. Originally published in Hooker, Leach, and McClennen, Eds., Foundations and Applications of Decision Theory, vol. 1. Dordrecht: Reidel, 1978, pp. 125-162.
Hampton, J. (1998). The Authority of Reason. Cambridge: Cambridge University Press.
Hobbes, T. (1651). Leviathan. Cambridge: Cambridge University Press (1991).
Hollis, M., and R. Sugden. (1993). Rationality in action. Mind 102:1-35.
Hume, D. (1740). A Treatise of Human Nature. Oxford: Oxford University Press (1978).
Kavka, G. (1983). The toxin puzzle. Analysis 43:33-36.
Kreps, D. M., and R. Wilson. (1982). Reputation and imperfect information. Journal of Economic Theory 27:253-279.
Lewis, D. K. (1985). Prisoner's dilemma is a Newcomb problem. In R. Campbell and L. Sowden, Eds., Paradoxes of Rationality and Cooperation: Prisoner's Dilemma and Newcomb's Problem. Vancouver: University of British Columbia Press, pp. 251-255. Originally published in Philosophy and Public Affairs 8(3):235-240.
McClennen, E. (1989). Rationality and Dynamic Choice: Foundational Explorations. Cambridge: Cambridge University Press.
Nozick, R. (1969). Newcomb's problem and two principles of choice. In N. Rescher, Ed., Essays in Honor of Carl G. Hempel. Dordrecht: Reidel, pp. 114-146.
Nozick, R. (1993). The Nature of Rationality. Princeton: Princeton University Press.
Pettit, P., and R. Sugden. (1989). The backward induction paradox. Journal of Philosophy 86:169-182.
Reny, P. J. (1992). Rationality in extensive-form games. Journal of Economic Perspectives 6:103-118.
Rosenthal, R. (1982). Games of perfect information, predatory pricing, and the chain store paradox. Journal of Economic Theory 25:92-100.
Savage, L. (1954). The Foundations of Statistics. New York: Wiley.
Selten, R. (1978). The chain store paradox. Theory and Decision 9:127-159.
Shafir, E., and A. Tversky. (1992). Thinking through uncertainty: Nonconsequential reasoning and choice. Cognitive Psychology 24:449-474.
Von Neumann, J., and O. Morgenstern. (1947). Theory of Games and Economic Behavior. 2nd ed. Princeton: Princeton University Press.
Further Readings

Bratman, M. (1987). Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
Davidson, D. (1980). Essays on Actions and Events. Oxford: Clarendon Press.
Davidson, D. (1982). Paradoxes of irrationality. In R. Wollheim and J. Hopkins, Eds., Philosophical Essays on Freud. Cambridge: Cambridge University Press.
Dupuy, J.-P. (1992). Two temporalities, two rationalities: A new look at Newcomb's paradox. In P. Bourgine and B. Walliser, Eds., Economics and Cognitive Science. Oxford: Pergamon Press.
Elster, J. (1979). Ulysses and the Sirens. Cambridge: Cambridge University Press.
Elster, J., Ed. (1986). The Multiple Self. Cambridge: Cambridge University Press.
Fischer, J. M., Ed. (1989). God, Foreknowledge, and Freedom. Stanford: Stanford University Press.
Frankfurt, H. (1971). Freedom of the will and the concept of a person. Journal of Philosophy 68:5-20.
Gauthier, D. (1984). Deterrence, maximization and rationality. In D. MacLean, Ed., The Security Gamble: Deterrence Dilemmas in the Nuclear Age. Totowa, NJ: Rowman and Allanheld.
Gauthier, D., and R. Sugden, Eds. (1993). Rationality, Justice and the Social Contract. Hemel Hempstead, England: Harvester Wheatsheaf.
Hollis, M. (1987). The Cunning of Reason. Cambridge: Cambridge University Press.
Horwich, P. (1987). Asymmetries in Time: Problems in the Philosophy of Science. Cambridge, MA: MIT Press.
Hurley, S. (1989). Natural Reasons. Oxford: Oxford University Press.
Lewis, D. K. (1969). Convention: A Philosophical Study. Cambridge, MA: Harvard University Press.
Lewis, D. K. (1979). Counterfactual dependence and time's arrow. Noûs 13:455-476.
Luce, R. D., and H. Raiffa. (1957). Games and Decisions. New York: Wiley.
Parfit, D. (1984). Reasons and Persons. Oxford: Oxford University Press.
Quattrone, G. A., and A. Tversky. (1987). Self-deception and the voter's illusion. In J. Elster, Ed., The Multiple Self. Cambridge: Cambridge University Press.
Schelling, T. C. (1960). The Strategy of Conflict. Cambridge, MA: Harvard University Press.
Simon, H. A. (1982). Models of Bounded Rationality. Cambridge, MA: MIT Press.
Sugden, R. (1991). Rational choice: A survey of contributions from economics and philosophy. Economic Journal 101:751-785.
Williams, B. (1981). Internal and external reasons. In Moral Luck. Cambridge: Cambridge University Press, pp. 101-113.