Case-based reasoning (CBR) refers to a style of designing a system so that thought and action in a given situation are guided by a single distinctive prior case (precedent, prototype, exemplar, or episode). Historically and philosophically, CBR exists as a reaction to rule-based reasoning: in CBR, the emphasis is on the case, not the rule.
CBR works with a set of past cases, called a case base, and seeks to determine a "source case" relevant to a given "target case". All CBR systems separate their reasoning into two stages: (1) finding the appropriate source case (retrieving); and (2) determining the appropriate conclusions in the target case (reusing/revising). All CBR systems must also have some way of augmenting their case base or learning new cases, even if this simply involves appending to a list of stored cases. Retrieval is described as finding the "most similar" past case or the "nearest neighbor"; this simply raises the question of what the appropriate similarity or distance metric is.
To reason about an Italian automobile, consider past examples of automobiles. If the most similar retrieved case is a European automobile, it is a better source of information than a past example of an American automobile, all other things being equal.
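The two-stage cycle and the appending form of learning can be made concrete in a short sketch. The following Python fragment is purely illustrative: the feature names, weights, and cases are invented for this example, and the weighted-overlap score stands in for whatever similarity metric a particular system would justify.

```python
# Minimal sketch of retrieve/reuse/retain, assuming cases are flat feature
# dictionaries.  All feature names, weights, and cases are illustrative only.

CASE_BASE = [
    {"region": "Europe", "fuel": "petrol", "price_band": "mid", "reliability": "high"},
    {"region": "America", "fuel": "petrol", "price_band": "low", "reliability": "mid"},
]

WEIGHTS = {"region": 3.0, "fuel": 1.0, "price_band": 1.0}  # which features matter most

def similarity(source, target):
    """Weighted count of matching features (one simple choice of metric)."""
    return sum(w for f, w in WEIGHTS.items() if source.get(f) == target.get(f))

def retrieve(target, case_base):
    """Stage 1: find the most similar ("nearest-neighbor") source case."""
    return max(case_base, key=lambda c: similarity(c, target))

def reuse(source, target):
    """Stage 2: carry the source's conclusion over to the target (here, by copying)."""
    adapted = dict(target)
    adapted["reliability"] = source["reliability"]
    return adapted

def retain(case, case_base):
    """Learning: augment the case base, even if only by appending."""
    case_base.append(case)

target = {"region": "Europe", "fuel": "petrol", "price_band": "mid"}  # the Italian automobile
solved = reuse(retrieve(target, CASE_BASE), target)
retain(solved, CASE_BASE)
```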
A set of cases might be viewed as a corpus from which rules could potentially be gleaned, but over which the appropriate generalizations have not yet been performed. In this view, CBR is a postponement of INDUCTION. The advantage of retaining the raw cases, in this view, is that the rule base can be revised more readily because the original cases remain available; they have not been discarded in favor of the rules that summarize them.
To guide deliberation in a situation, a case-based reasoner represents and transforms the rationale of a precedent or the etiology of a prior case. By hypothesis, a single case suffices for guidance if it is the appropriate case and it is transformed properly. In contrast, rule-based reasoners (e.g., EXPERT SYSTEMS and DEDUCTIVE REASONING) apply a rule to a situation with no transformation.
In both rule-based and case-based reasoning, managing the interaction of multiple sources of guidance is crucial. In CBR, different cases can suggest conflicting conclusions; in rule-based reasoning, several rules might conflict. In the one, a case must be chosen; in the other, a rule. Nonmonotonic reasoning is fundamentally concerned with both kinds of choice.
In practice, the separation of CBR from other forms of reasoning is imperfect. An interplay of rules and cases is unavoidable. A case can almost always be viewed as a compact representation of a set of rules. CBR is just one form of extensional programming (other examples are PATTERN RECOGNITION AND FEEDFORWARD NETWORKS, MACHINE LEARNING, and statistical learning), though CBR performs its generalizations on-line, whereas the others perform their generalizations as a preprocessing step.
Nevertheless, CBR is a distinctly different paradigm. The emphasis is on the unique properties of each case, not the statistical properties of numerous cases. CBR differs from induction because induction derives its power from the aggregation of cases, from the attempt to represent what tends to make one case like or unlike another. CBR derives its power from the attempt to represent what suffices to make one case like or unlike another. CBR emphasizes the structural aspects of theory formation, not the statistical aspects of data.
Case-based reasoning is usually associated with work that has been called "scruffy": work that aims at the design of software systems and that takes its main themes and inspiration from psychology (e.g., Rosch and Lloyd 1978). Roger Schank (1982) promotes the view that case-based reasoning mimics human MEMORY. He refers to cases as "memories," retrieval of cases as "remindings," and representation of cases as "memory organization." Systems that owe their origin to this school of thought are considerable in scope and ability. There are case-based planners, case-based problem-solvers, case-based diagnosticians, case-based financial consultants, case-based approaches to learning (including EXPLANATION-BASED LEARNING), and case-based illuminations of search.
Research in this style tends to taxonomize issues and approaches. There are qualitative and quantitative metrics of similarity; there are approaches that seek to understand the causality underlying a case, and approaches that do not. The literature contains both a rich conceptual cartography and some of the most accessible polemics on the importance of a nonstatistical approach to the logging of past cases.
A good example of a case-based system is Katia Sycara's PERSUADER, which reasons about labor-management negotiations. It uses past agreements between similarly situated parties to suggest proposals that might succeed in the current negotiation. Past and present situations are compared on features such as wage rates and market competitiveness, and the case representations include structural models of how changing one feature affects another. Past agreements can be transformed according to the differences between past and present, for example by numerically scaling the size of settlements.
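A minimal sketch of that kind of transformation, not PERSUADER's actual representation or adaptation rules, might scale a retrieved settlement by the ratio of a salient feature in the present dispute to the same feature in the precedent; the feature and figures below are invented for illustration.

```python
# Illustrative only; not PERSUADER's representation or adaptation rules.
def adapt_settlement(past_case, present_case, feature="wage_rate"):
    """Scale the precedent's settlement by how the present situation differs."""
    ratio = present_case[feature] / past_case[feature]
    return {"proposed_wage_increase": past_case["agreed_wage_increase"] * ratio}

past = {"wage_rate": 20.0, "agreed_wage_increase": 1.50}   # precedent agreement
present = {"wage_rate": 25.0}                              # current negotiation
proposal = adapt_settlement(past, present)                 # {'proposed_wage_increase': 1.875}
```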
Case-based reasoning moved to the center of AI when the logical issues of postponing rule formation were separated from the psychological issues of structural ANALOGY. Stuart Russell (1989) defined the "logical problem of analogy" precisely enough that it could be studied in the philosophical tradition. Russell proposed that there were relations between predicates, called "determinations," that would permit a single co-occurrence of P and Q to lead to the rule "if P(x) then Q(x)." Thus, a person's nationality determines that person's language, but does not determine marital status. The logical formulation showed clearly what would be needed to justify analogy formally. Analogy is either presumptuous (thus fallible, defeasible, or otherwise susceptible to discount and revision), or else it brings knowledge to bear that permits a single case to skew an entire (statistical) reference class. Like Nelson Goodman's (1972) paradox of "grue," which raises the question of justified projection with many cases, "determinations" raise the question of justified projection from a single case.
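One common rendering of a determination, following Davies and Russell (1987) though not quoting their notation exactly, makes explicit how a single observed case licenses the projected rule:

```latex
% "P determines Q", written P(x, y) \succ Q(x, z): fixing the value of P
% fixes the value of Q.  One rendering (after Davies and Russell 1987):
\[
  P(x, y) \succ Q(x, z) \;\equiv\;
  \forall y\,\forall z \bigl[\exists x\,(P(x, y) \land Q(x, z))
    \rightarrow \forall w\,(P(w, y) \rightarrow Q(w, z))\bigr]
\]
% Example: Nationality(x, n) \succ Language(x, l).  A single case,
% Nationality(giulia, italy) \land Language(giulia, italian), then
% licenses \forall w\,(Nationality(w, italy) \rightarrow Language(w, italian)).
```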
Kevin Ashley (1990) showed how cases are used in legal reasoning. Cases and precedents are fundamental to the philosophy of law, and research in AI and law has been equally concerned with the proper modeling of cases. Ashley noted that some features describing cases are inherently pro-plaintiff or pro-defendant. Understanding this distinction permits deeper comparisons of similarity. CBR appears in moral and legal philosophy under the name "casuistry."
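The role such directional features can play in comparing cases can be sketched abstractly; the factor names below are invented for illustration, and the sketch is a simplification of factor-based legal CBR rather than Ashley's actual model.

```python
# Simplified abstraction of directional ("pro-plaintiff"/"pro-defendant")
# features; illustrative factor names, not Ashley's actual representation.
FACTOR_SIDE = {
    "security_measures_taken": "plaintiff",
    "secrets_disclosed_to_outsiders": "defendant",
    "competitive_advantage_gained": "plaintiff",
}

def citable_for(precedent_factors, current_factors, side):
    """A precedent supports a side only through shared factors favoring that side."""
    shared = precedent_factors & current_factors
    return {f for f in shared if FACTOR_SIDE[f] == side}

precedent = {"security_measures_taken", "secrets_disclosed_to_outsiders"}
current = {"security_measures_taken", "competitive_advantage_gained"}
print(citable_for(precedent, current, "plaintiff"))   # {'security_measures_taken'}
```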
Earlier researchers defended CBR by citing contemporary psychology. Ashley and Russell connected CBR to immense literatures that were historically concerned with the significance of the single case. Current work on CBR continues to revolve around these two foci: psychology-inspired themes for systems design, and the precise understanding of reasoning with the single case.
Ashley, K. (1990). Modeling Legal Arguments: Reasoning with Cases and Hypotheticals. Cambridge, MA: MIT Press.
Goodman, N. (1972). Problems and Projects. Indianapolis: Bobbs-Merrill.
Rosch, E., and B. Lloyd, Eds. (1978). Cognition and Categorization. Hillsdale, NJ: Erlbaum.
Russell, S. (1989). The Use of Knowledge in Analogy and Induction. London: Pitman.
Schank, R. (1982). Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge: Cambridge University Press.
Berman, D., and C. Hafner. (1991). Incorporating procedural context into a model of case-based legal reasoning. Proc. Intl. Conf. on AI and Law, Oxford.
Berman, D., and C. Hafner. (1993). Representing teleological concepts in case-based legal reasoning: the missing link. Proc. Intl. Conf. on AI and Law, Oxford.
Branting, K. (1991). Explanations with rules and structured cases. International Journal of Man-Machine Studies 34.
Burstein, M. (1985). Learning by Reasoning from Multiple Analogies. Ph.D. diss., Yale University.
Carbonell, J. (1981). A computational model of analogical problem-solving. Proc. IJCAI, Vancouver.
Cohen, M., and E. Nagel. (1934). An Introduction to Logic and Scientific Method. New York: Harcourt Brace.
Davies, T., and S. Russell. (1987). A logical approach to reasoning by analogy. Proc. IJCAI, Milan.
DeJong, G. (1981). Generalizations based on explanations. Proc. IJCAI, Vancouver.
Gardner, A. (1987). An AI Approach to Legal Reasoning. Cambridge, MA: MIT Press.
Gentner, D. (1983). Structure-mapping: a theoretical framework for analogy. Cognitive Science 7.
Hammond, K. (1990). Case-based planning: a framework for planning from experience. Cognitive Science 14.
Hesse, M. (1966). Models and Analogies in Science. University of Notre Dame Press.
Keynes, J. (1957 [1908]). A Treatise on Probability. Macmillan.
Kolodner, J. (1993). Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann.
Koton, P. (1988). Using Experience in Learning and Problem Solving. Ph.D. diss., MIT.
Leake, D., Ed. (1996). Case-Based Reasoning: Experiences, Lessons, and Future Directions. Cambridge, MA: MIT Press.
Loui, R. (1989). Analogical reasoning, defeasible reasoning, and the reference class. Proc. Knowledge Representation and Reasoning, Toronto.
Loui, R., and J. Norman. (1995). Rationales and argument moves. Artificial Intelligence and Law 3.
Mitchell, T., R. Keller, and S. Kedar-Cabelli. (1986). Explanation-based generalization: a unifying view. Machine Learning 1.
Raz, J. (1979). The Authority of Law. Oxford.
Riesbeck, C., and R. Schank. (1989). Inside Case-Based Reasoning. Hillsdale, NJ: Erlbaum.
Skalak, D., and E. Rissland. (1992). Arguments and cases: an inevitable intertwining. Artificial Intelligence and Law 1.
Sunstein, C. (1996). Legal Reasoning and Political Conflict. Oxford.
Winston, P. (1980). Learning and reasoning by analogy. Comm. ACM 23.