Rational Agency

In philosophy of mind, rationality is conceived of as a coherence requirement on personal identity: roughly, "No rationality, no agent." The agent must have a means-ends competence to fit its actions or decisions, according to its beliefs or knowledge-representation, to its desires or goal-structure. That agents possess such rationality is more than an empirical hypothesis; for instance, if a putative set of beliefs, desires, and decisions accumulated inconsistencies, the set would cease even to qualify as containing beliefs, and so on, and would disintegrate into a mere set of sentences. This agent-constitutive rationality is distinguished from more stringent normative standards of rationality, for agents can and often do possess cognitive systems that fall short of epistemic uncriticizability (e.g., with respect to perfect consistency) without thereby ceasing to constitute agents.

Standard philosophical conceptions of rationality derive from models of the rational agent developed in microeconomic, game, and decision theory earlier in this century (e.g., Von Neumann and Morgenstern 1944; Hempel 1965). The underlying idealization is that the agent, given its belief-desire system, optimizes its choices. While this optimization model was proposed as a normative standard or an empirically predictive account (or both), the philosophical model concerns the idea that we cannot even make sense of agents that depart from such optimality. Related ideal-agent conceptions can be discerned in W. V. Quine's (1960) and Donald Davidson's (1980) principles of charity for RADICAL INTERPRETATION of human behavior, and in standard epistemic logic (Hintikka 1962). Accomplishing this perfection of appropriate decision in turn would require vast inferential insight: for example, the ideal agent must possess a deductive competence that includes a capacity to identify and eliminate any and all inconsistencies arising in its cognitive system.

While such LOGICAL OMNISCIENCE might appropriately characterize a deity, prima facie it seems at odds with the most basic law of human psychology: that we are finite entities. A wide range of experimental studies since the 1970s indicates interesting and persistent patterns in our departures from the ideal logician (Tversky and Kahneman 1974), for instance in harboring inconsistent preferences. A more extreme departure from reality is that, for such ideal agents, major portions of the deductive sciences would be trivial (e.g., the role of the discovery of the semantic and set-theoretic paradoxes in the development of logic in this century would then cease even to be intelligible). For a COMPUTATIONAL THEORY OF MIND, where the agent's deductive competence must be represented as a finite algorithm, the ideal agent would in fact have to violate Church's undecidability theorem for first-order logic (Cherniak 1986).

The agent-idealizations -- within the limits of their applicability -- of course have served very successfully as simplified approximations in economic, game, and decision theory. Nonetheless, a sense of their psychological unreality has motivated two types of subsequent theorizing. One type reinforces an eliminativist impulse, that the whole framework of intentional psychology -- with rationality at its core -- ought to be cleared away as prescientific pseudotheory (see ELIMINATIVE MATERIALISM); a related response is a quasi-eliminativist instrumentalism (e.g., Dennett 1978), where the agent's cognitive system and its rationality diminish to no more than convenient (but impossible) fictions of the theoretician that may help in predicting agent behavior, but cannot be psychologically real. Ultimately, a sense of the unreality of ideal agent models can spur doubts about the very possibility of a cognitive science.

The other type of response to troubles with the idealizations is a via media strategy. After recognizing that nothing could count as an agent or person that satisfied no rationality constraints, one stops to wonder whether one must jump to the conclusion that the agent has to be ideally rational. Is rationality all or nothing, or is there some golden mean between unattainable, perfect unity of mind and utter, chaotic disintegration of personhood? The normative and empirical rationality models of Simon (1982) are among the earliest of this less stringent sort: the central principle is that, rather than optimizing or maximizing, the agent only "satisfices" its expected utility, making choices that are good enough according to its belief-desire set, rather than perfect. Such modest coherence is realistically all that an agent ought to attempt, and all that can in general be expected. What amounts to a corresponding account of agent-constitutive rationality appears in Cherniak (1981), with a requirement of minimal, rather than ideal, charity in making sense of an agent's actions. An even more latitudinarian conception can be found in Stich (1990). Related limited-resource models are now also employed in artificial intelligence (see BOUNDED RATIONALITY).
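The contrast between optimizing and satisficing can be sketched computationally. The following is a minimal illustration, not drawn from the literature cited above: the function names, the aspiration-level parameter, and the fallback behavior are all hypothetical simplifications of Simon's idea.

```python
def satisfice(options, utility, aspiration):
    """Return the first option whose utility meets the aspiration level.

    Unlike a maximizer, which must examine every option, a satisficer
    stops searching as soon as a "good enough" choice is found.
    """
    for option in options:
        if utility(option) >= aspiration:
            return option
    # If nothing meets the aspiration level, fall back to the best seen.
    return max(options, key=utility)

def maximize(options, utility):
    """Examine every option and return the one with the highest utility."""
    return max(options, key=utility)
```

The point of the contrast is computational: `maximize` always pays the full cost of the search space, whereas `satisfice` can terminate early, trading guaranteed optimality for bounded effort.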

Moderate rationality conceptions leave room for the widely observed phenomena of suboptimal human reasoning mentioned above, rather than excluding them as unintelligible behavior. We are, after all, only human. Indeed, these more psychologically realistic models can explain the departures from correctness as symptoms of our having to use more efficient but formally imperfect "quick but dirty" heuristic procedures. Formally correct and complete inference procedures are typically computationally complex, with surprisingly small problem instances sometimes requiring vastly infeasible time and memory resources. (To an extent, this practical intractability parallels, and extends, classical absolute unsolvability; see GÖDEL'S THEOREMS.) Antinomies like Russell's paradox lurking at the core of our conceptual scheme can then be interpreted similarly, as signs of our having to use heuristic procedures to avoid computational paralysis.
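The intractability of formally correct inference can be made concrete with a toy case (a hypothetical illustration, not an example from the text): a brute-force truth-table test of whether a set of propositional beliefs is jointly consistent must, in the worst case, examine 2^n truth assignments for n atomic propositions, so even modest belief sets quickly become unmanageable.

```python
from itertools import product

def consistent(beliefs, atoms):
    """Test whether propositional beliefs are jointly satisfiable.

    Brute-force truth-table search: enumerates all 2 ** len(atoms)
    assignments, which becomes infeasible for even moderately many atoms.
    Each belief is a predicate over an assignment dict mapping atom
    names to truth values.
    """
    for values in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(belief(assignment) for belief in beliefs):
            return True
    return False
```

For example, the pair {p, not-p} is detected as inconsistent after checking two assignments, but a belief set over 50 atoms would require up to 2^50 (about 10^15) checks, which is the sense in which "surprisingly small" problem instances outrun feasible resources.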

To conclude, some vigilance about unwarranted reification of cognitive architecture remains advisable. Just as attention has turned to the evaluation of uncritical idealizing, there remains scope for scrutiny of tacit assumptions in rationality models about psychologically realistic representational format (if any) -- for example, the discussions reviewed above tend to presuppose agents as sentence-processors rather than as, say, quasi-picture processors. Finally, the familiar uneasy coexistence of the intentional framework -- with rationality at its core -- with the scientific worldview is worth recalling. Yet probably much of the groundplan of our species' model of an agent is innate (see AUTISM and THEORY OF MIND); the framework therefore may be a ladder we cannot kick away. It is as if the scientific worldview can comfortably proceed neither with, nor without, an intentional-cognitive paradigm.

-- Christopher Cherniak

References

Cherniak, C. (1981). Minimal rationality. Mind 90:161-183.

Cherniak, C. (1986). Minimal Rationality. Cambridge, MA: MIT Press.

Davidson, D. (1980). Psychology as philosophy. In D. Davidson, Essays on Actions and Events. New York: Oxford University Press.

Dennett, D. (1978). Intentional systems. In Brainstorms. Cambridge, MA: MIT Press.

Hempel, C. (1965). Aspects of scientific explanation. In Aspects of Scientific Explanation. New York: Free Press.

Hintikka, J. (1962). Knowledge and Belief. Ithaca, NY: Cornell University Press.

Quine, W. (1960). Word and Object. Cambridge, MA: MIT Press.

Simon, H. (1982). Models of Bounded Rationality, vol. 2. Cambridge, MA: MIT Press.

Stich, S. (1990). The Fragmentation of Reason. Cambridge, MA: MIT Press.

Tversky, A., and D. Kahneman. (1974). Judgment under uncertainty: Heuristics and biases. Science 185:1124-1131.

Von Neumann, J., and O. Morgenstern. (1944). Theory of Games and Economic Behavior. New York: Wiley.