An explanation is a structure or process that provides understanding. Furnishing explanations is one of the most important activities in high-level cognition, and the nature of explanation and its role in thinking have been addressed by philosophers, psychologists, and artificial intelligence researchers.
The main philosophical concern has been to characterize the nature of explanations in science. In 1948, Hempel and Oppenheim proposed the deductive-nomological (D-N) model of explanation, according to which an explanation is an argument that deduces a description of a fact to be explained from general laws and descriptions of observed facts (Hempel 1965). For example, to explain an eclipse of the sun, scientists use laws of planetary motion to deduce that at a particular time the moon will pass between the earth and the sun, producing an eclipse. Many artificial intelligence researchers also assume that explanations consist of deductive proofs (e.g., Mitchell, Keller, and Kedar-Cabelli 1986).
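The deductive character of the D-N model can be sketched computationally. The following is a minimal illustration, not anything from Hempel's own work: laws are represented as simple premises-to-conclusion rules, and an explanation succeeds when the explanandum is deducible from laws plus particular facts by forward chaining. The predicate names are illustrative assumptions.

```python
# Toy sketch of the D-N model: an explanation is a deduction of the
# explanandum from general laws plus particular observed facts.
# The law and fact names below are illustrative, not from the source.

def deduce(laws, facts):
    """Forward-chain rule-shaped laws over a set of facts until closure."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in laws:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# A general law (premises -> conclusion) and particular facts about one occasion.
laws = [
    ({"moon_between_earth_and_sun", "observer_in_shadow_path"}, "solar_eclipse"),
]
facts = {"moon_between_earth_and_sun", "observer_in_shadow_path"}

# The explanandum follows deductively from the laws and facts, which is
# what counts as explaining it on the D-N model.
print("solar_eclipse" in deduce(laws, facts))  # True
```

On this picture the explanation just is the derivation; the model's limitations discussed below (statistical cases, causal structure) are invisible at this level of abstraction.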
Although the D-N model gives a good approximate account of explanation in some areas of science, particularly mathematical physics, it does not provide an adequate general account of explanation. Some explanations are inductive and statistical rather than deductive, showing only that an event to be explained is likely or falls under some probabilistic law rather than that it follows deductively from laws (see DEDUCTIVE REASONING and INDUCTION). For example, we explain why people get influenza in terms of their exposure to the influenza virus, but many people exposed to the virus do not get sick. In areas of science such as evolutionary biology, scientists cannot predict how different species will evolve, but they can use the theory of evolution by natural selection and the fossil record to explain how a given species has evolved. Often, the main concern of explanation is not so much to deduce what is to be explained from general laws as it is to display causes (Salmon 1984). Understanding an event or class of events then consists of describing the relevant causes and causal mechanisms that produce such events. Salmon (1989) and Kitcher (1989) provide good reviews of philosophical discussions of the nature of scientific explanation. According to Friedman (1974) and Kitcher, explanations yield understanding by unifying facts using common patterns.
Deduction from laws is just one of the ways that facts can be explained by fitting them into a more general, unifying framework. More generally, explanation is a process of applying a schema that fits what is to be explained into a system of information. An explanation schema consists of an explanation target, which is a question to be answered, and an explanatory pattern, which provides a general way of answering the question. For example, when you want to explain why a person is doing an action such as working long hours, you may employ a rough explanation schema like:
Explanation target:
    Why does a person do an action?
Explanatory pattern:
    The person has a desire.
    The person has a belief that the action will help fulfill the desire.
    The desire and the belief cause the action.
To apply this schema to a particular case, we replace the general terms (person, action, desire, belief) with specific examples, as in explaining Mary's action of working long hours in terms of her belief that this will help her to fulfill her desire to finish her Ph.D. dissertation. Many writers in philosophy of science and cognitive science have described explanations and theories in terms of schemas, patterns, or similar abstractions (Kitcher 1989, 1993; Kelley 1972; Leake 1992; Schank 1986; Thagard 1992).
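The schema-application step just described amounts to substituting specific terms for the schema's general placeholders. A minimal sketch follows; the slot names and exact wording of the schema are illustrative assumptions, not a fixed formalism from the literature.

```python
# Sketch of applying an explanation schema: the target and pattern
# contain general placeholders, and applying the schema fills them in
# with the specifics of a case (names and wording are illustrative).

SCHEMA = {
    "target": "Why does {person} do {action}?",
    "pattern": "{person} desires {goal}, and believes that {action} "
               "will help fulfill that desire, so {person} does {action}.",
}

def apply_schema(schema, **bindings):
    """Instantiate every part of the schema with the given bindings."""
    return {part: text.format(**bindings) for part, text in schema.items()}

instance = apply_schema(
    SCHEMA,
    person="Mary",
    action="working long hours",
    goal="finishing her Ph.D. dissertation",
)
print(instance["pattern"])
```

The same schema, with different bindings, covers indefinitely many cases, which is one way of making concrete the idea that explanation unifies facts under common patterns.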
One kind of explanation pattern that is common in biology, psychology, and sociology explains the presence of a structure or behavior in a system by reference to how the structure or behavior contributes to the goals of the system. For example, people have hearts because this organ functions to pump blood through the body, and democracies conduct elections in order to allow people to choose their leaders. These functional (teleological) explanations are not incompatible with causal/mechanical ones: in a biological organism, for example, the explanation of an organ in terms of its function goes hand in hand with a causal explanation that the organ developed as the result of natural selection. Craik (1943) originated the important idea that an explanation of events can be accomplished by mental models that parallel the events in the same way that a calculating machine can parallel physical changes.
Analogies can contribute to explanation at a more specific level, without requiring explicit use of laws or schemas. For example, DARWIN's use of his theory of natural selection to explain evolution frequently invoked the familiar effects of artificial selection by breeders of domesticated animals. Pasteur formed the germ theory of disease by analogy with his earlier explanation that fermentation is caused by bacteria. In analogical explanations, something puzzling is compared to a familiar phenomenon whose causes are known (see ANALOGY).
In both scientific and everyday understanding, there is often more than one possible explanation. Perhaps Mary is working long hours merely because she is a workaholic and prefers working to other activities. One explanation of why the dinosaurs became extinct is that they were killed when an asteroid hit the earth, but accepting this hypothesis requires comparing it with alternative explanations. The term inference to the best explanation refers to acceptance of a hypothesis on the grounds that it provides a better explanation of the evidence than alternative hypotheses (Harman 1986; Lipton 1991; Thagard 1992). Examples of inference to the best explanation include theory choice in science and inferences we make about the mental states of other people. What social psychologists call attribution is inference to the best explanation of a person's behavior (Read and Marcus-Newhall 1993).
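A crude version of inference to the best explanation can be sketched by scoring competing hypotheses on how much of the evidence each accounts for. This is only a stand-in for the richer explanatory-coherence criteria discussed by Thagard and by Read and Marcus-Newhall; the hypotheses and evidence items are illustrative assumptions.

```python
# Toy inference to the best explanation: prefer the hypothesis that
# accounts for the largest share of the evidence.  The dinosaur-extinction
# example is from the text; the evidence items are illustrative.

def best_explanation(hypotheses, evidence):
    """Return the hypothesis whose explained-evidence set covers most items."""
    def coverage(name):
        return sum(1 for item in evidence if item in hypotheses[name])
    return max(hypotheses, key=coverage)

# Each hypothesis maps to the evidence it can explain.
hypotheses = {
    "asteroid_impact": {"iridium_layer", "impact_crater", "sudden_extinction"},
    "gradual_climate_change": {"sudden_extinction"},
}
evidence = ["iridium_layer", "impact_crater", "sudden_extinction"]

print(best_explanation(hypotheses, evidence))  # asteroid_impact
```

Real accounts weigh simplicity, analogy, and how well hypotheses cohere with background beliefs, not just raw coverage, but the comparative structure of the inference is the same.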
Explanations are often useful for improving the performance of human and machine systems. Automated expert systems are sometimes enhanced by giving them the ability to produce computer-generated descriptions of their own operation so that people will be able to understand the inferences underlying their conclusions (Swartout 1983). Chi et al. (1989) found that students learn better when they use "self-explanations" that monitor progress or lack of progress in understanding problems.
Chi, M. T. H., M. Bassok, M. W. Lewis, P. Reimann, and R. Glaser. (1989). Self-explanations: how students study and use examples in learning to solve problems. Cognitive Science 13:145-182.
Craik, K. (1943). The Nature of Explanation. Cambridge: Cambridge University Press.
Friedman, M. (1974). Explanation and scientific understanding. Journal of Philosophy 71:5-19.
Harman, G. (1986). Change in View: Principles of Reasoning. Cambridge, MA: MIT Press/Bradford Books.
Hempel, C. G. (1965). Aspects of Scientific Explanation. New York: Free Press.
Kelley, H. H. (1972). Causal schemata and the attribution process. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, and B. Weiner, Eds., Attribution: Perceiving the Causes of Behavior. Morristown, NJ: General Learning Press.
Kitcher, P. (1989). Explanatory unification and the causal structure of the world. In P. Kitcher and W. C. Salmon, Eds., Scientific Explanation. Minneapolis: University of Minnesota Press, pp. 410-505.
Kitcher, P. (1993). The Advancement of Science. Oxford: Oxford University Press.
Kitcher, P. and W. Salmon. (1989). Scientific Explanation. Minneapolis: University of Minnesota Press.
Leake, D. B. (1992). Evaluating Explanations: A Content Theory. Hillsdale, NJ: Erlbaum.
Lipton, P. (1991). Inference to the Best Explanation. London: Routledge.
Mitchell, T., R. Keller, and S. Kedar-Cabelli. (1986). Explanation-based generalization: a unifying view. Machine Learning 1:47-80.
Read, S. J., and A. Marcus-Newhall. (1993). Explanatory coherence in social explanations: a parallel distributed processing account. Journal of Personality and Social Psychology 65:429-447.
Salmon, W. (1984). Scientific Explanation and the Causal Structure of the World. Princeton: Princeton University Press.
Salmon, W. C. (1989). Four decades of scientific explanation. In P. Kitcher and W. C. Salmon, Eds., Scientific Explanation (Minnesota Studies in the Philosophy of Science, vol. 13). Minneapolis: University of Minnesota Press.
Schank, R. C. (1986). Explanation Patterns: Understanding Mechanically and Creatively. Hillsdale, NJ: Erlbaum.
Swartout, W. (1983). XPLAIN : a system for creating and explaining expert consulting systems. Artificial Intelligence 21:285-325.
Thagard, P. (1992). Conceptual Revolutions. Princeton: Princeton University Press.
Keil, F., and R. Wilson, Eds. (1997). Minds and Machines 8:1-159.
Copyright © 1999 Massachusetts Institute of Technology