Induction

Induction is a kind of inference that introduces uncertainty, in contrast to DEDUCTIVE REASONING, in which the truth of a conclusion follows necessarily from the truth of the premises. The term induction is sometimes used more narrowly, for inference to a generalization from its instances, for example from "All the cognitive scientists I've met are intelligent" to "All cognitive scientists are intelligent." In the broader sense, induction includes all kinds of nondeductive LEARNING, including concept formation, ANALOGY, and the generation and acceptance of hypotheses (abduction).

The traditional philosophical problem of induction is whether inductive inference is legitimate. Because induction involves uncertainty, it may introduce error: no matter how many intelligent cognitive scientists I have encountered, one still might turn up who is not intelligent. In the eighteenth century, DAVID HUME asked how people could be justified in believing that the future will be like the past and concluded that they cannot: we use induction out of habit but with no legitimate basis. Because no deductive justification of induction is available, and any inductive justification would be circular, it seems that induction is a dubious source of knowledge. Rescher (1980) offers a pragmatic justification of induction, arguing that it is the best available means for accomplishing our cognitive ends. Induction usually works, and even when it leads us into error, there is no method of thinking that would work better.

In the 1950s, Nelson Goodman (1983) dissolved the traditional problem of induction by pointing out that the validity of deduction consists in conformity to valid deductive principles, while deductive principles are themselves evaluated according to how well they fit deductive practice. Justification is then just a matter of finding a coherent fit between inferential practice and inferential rules. Similarly, inductive inference needs no general justification; it requires only a set of inductive principles that fit well with inductive practice, arrived at by mutually adjusting principles to fit practice and practice to fit principles. Instead of the old problem of finding an absolute justification of induction, we have the new problem of compiling a set of good inductive principles.

The task that philosophers of induction face is therefore very similar to the projects of researchers in psychology and artificial intelligence who are concerned with learning in humans and machines. Psychologists have investigated a wide array of inductive behavior, including rule learning in rats, category formation, formation of models of the social and physical worlds, generalization, learning inferential rules, and analogy (Holland et al. 1986). AI researchers have developed computational models of many kinds of MACHINE LEARNING, including learning from examples and EXPLANATION-BASED LEARNING, which relies heavily on background knowledge (Langley 1996; Mitchell 1997).

Most philosophical work on induction, however, has tended to ignore psychological and computational issues. Following Carnap (1950), much research has been concerned with applying and developing probability theory. For example, Howson and Urbach (1989) use Bayesian probability theory to describe and explain inductive inference in science. Similarly, AI research has investigated inference in BAYESIAN NETWORKS (Pearl 1988). In contrast, Thagard (1992) offers a more psychologically oriented view of scientific induction, treating theory choice as a process of parallel constraint satisfaction that can be modeled using connectionist networks. There is room for both psychological and nonpsychological investigations of induction in philosophy and artificial intelligence: the former are concerned with how people do induction, and the latter with how probability theory and other mathematical methods can be used to perform differently from, and perhaps better than, people typically do. It is possible, however, that psychologically motivated connectionist approaches to learning approximate optimal reasoning.
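
As a concrete illustration of the Bayesian approach, here is a minimal sketch of probabilistic updating on the generalization from the opening paragraph. The hypotheses, priors, and likelihoods are invented for illustration and are not drawn from Howson and Urbach (1989):

```python
# Minimal sketch of Bayesian updating on a generalization. All numbers
# are invented for illustration.

def update(priors, likelihood, n):
    """Posterior P(h | n confirming instances) is proportional to
    P(h) * P(instance | h) ** n, normalized over the hypotheses."""
    joint = {h: priors[h] * likelihood[h] ** n for h in priors}
    total = sum(joint.values())
    return {h: j / total for h, j in joint.items()}

# Rival hypotheses about the proportion of intelligent cognitive
# scientists, updated on n consecutive intelligent encounters.
priors = {"all": 0.5, "80%": 0.5}
likelihood = {"all": 1.0, "80%": 0.8}

for n in (0, 5, 20):
    print(n, update(priors, likelihood, n))
# P("all") rises from 0.5 to about 0.75 (n=5) and 0.99 (n=20), yet a
# single counterexample (likelihood 0 under "all") would drop it to 0.
```

The last comment restates the Humean point above in probabilistic terms: confirming instances can raise confidence in a universal generalization indefinitely without ever making it certain.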

Here are some of the inductive tasks that need to be understood from a combination of philosophical, psychological, and computational perspectives:

  1. Concept learning. Given a set of examples and a set of prior concepts, formulate new CONCEPTS that effectively describe the examples. A student entering the university, for example, needs to form new concepts that describe kinds of courses, professors, and students.
  2. Rule learning. Given a set of examples and a set of prior rules, formulate new rules that improve problem solving. For example, a student might generalize that early morning classes are hard to get to. According to some linguists, LANGUAGE ACQUISITION is essentially a matter of learning rules.
  3. Hypothesis formation. Given a puzzling occurrence such as a friend's not showing up for a date, generate hypotheses about why this happened. Then pick the best hypothesis, which might be done probabilistically or qualitatively, by considering which hypothesis is the best explanation (see the sketch following this list). Forming and evaluating explanatory hypotheses is a kind of CAUSAL REASONING. Medical diagnosis is one kind of hypothesis formation.
  4. Analogical inference. To solve a given target problem, look for a similar problem whose solution can be adapted to the target. ANALOGY and hypothesis formation are often particularly risky kinds of induction, because both involve substantial leaps beyond the information given and so introduce much uncertainty: alternative analogies and hypotheses are always possible. Nevertheless, these risky kinds of induction are immensely valuable to everyday and scientific thought, because they can bring creative insights that the induction of rules and concepts from examples could never provide.
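
To illustrate the qualitative route in task 3, here is a toy version of inference to the best explanation. The hypotheses, observations, and the simple counting criterion are invented for illustration; serious accounts of explanatory inference also weigh simplicity, plausibility, and fit with background knowledge:

```python
# Toy qualitative "inference to the best explanation": among candidate
# hypotheses about why a friend missed a date, prefer the one that
# accounts for the most observed facts. (All names and facts invented.)

observations = {"no_show", "no_phone_call", "car_in_driveway"}

explains = {
    "forgot":             {"no_show", "no_phone_call", "car_in_driveway"},
    "stuck_in_traffic":   {"no_show"},
    "phone_battery_died": {"no_show", "no_phone_call"},
}

def best_explanation(observations, explains):
    # Score each hypothesis by how many observations it explains.
    return max(explains, key=lambda h: len(explains[h] & observations))

print(best_explanation(observations, explains))  # -> forgot
```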

How much of human thinking is deductive and how much is inductive? No data are available to answer this question, but if Harman (1986) is right that inference is always a matter of coherence, then all inference is inductive. He points out that the deductive rule of modus ponens (from P and "if P then Q," infer Q) does not tell us that we should infer Q when we believe P and "if P then Q"; sometimes we should instead give up P or "if P then Q," depending on how these beliefs and Q cohere with our other beliefs. Many kinds of inductive inference can be interpreted as maximizing coherence using parallel constraint satisfaction (Thagard and Verbeurgt 1998).
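
The constraint-satisfaction picture can be made concrete. In Thagard and Verbeurgt's (1998) formulation, elements (beliefs, hypotheses, pieces of evidence) are linked by weighted positive and negative constraints, and the task is to partition them into accepted and rejected sets so as to maximize the total weight of satisfied constraints. Here is a minimal brute-force sketch; the elements and weights are invented, and Thagard and Verbeurgt themselves use connectionist and other approximation algorithms, since exact coherence maximization is NP-hard:

```python
from itertools import product

# Brute-force sketch of coherence as constraint satisfaction, in the
# style of Thagard and Verbeurgt (1998). A positive constraint
# (a, b, w) is satisfied when a and b land on the same side of the
# accept/reject partition; a negative one when they land on opposite
# sides. Note the score is symmetric under swapping accept and reject;
# Thagard and Verbeurgt break the symmetry by favoring acceptance of
# evidence elements ("data priority"). Here the strict ">" simply
# keeps the first maximum found.

def coherence(elements, positive, negative):
    best_accept, best_score = set(), float("-inf")
    for bits in product([True, False], repeat=len(elements)):
        accept = {e for e, accepted in zip(elements, bits) if accepted}
        score = sum(w for a, b, w in positive
                    if (a in accept) == (b in accept))
        score += sum(w for a, b, w in negative
                     if (a in accept) != (b in accept))
        if score > best_score:
            best_accept, best_score = accept, score
    return best_accept, best_score

# Toy example (weights invented): a piece of evidence coheres with two
# rival hypotheses, which incohere with each other.
elements = ["evidence", "h1", "h2"]
positive = [("evidence", "h1", 2.0), ("evidence", "h2", 1.0)]
negative = [("h1", "h2", 1.5)]

accept, score = coherence(elements, positive, negative)
print(sorted(accept), score)  # ['evidence', 'h1'] 3.5
```

The toy network accepts the evidence together with the hypothesis that coheres with it most strongly and rejects the rival, which is the qualitative pattern of theory choice described above.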

-- Paul Thagard

References

Carnap, R. (1950). Logical Foundations of Probability. Chicago: University of Chicago Press.

Goodman, N. (1983). Fact, Fiction, and Forecast. 4th ed. Cambridge, MA: Harvard University Press.

Harman, G. (1986). Change in View: Principles of Reasoning. Cambridge, MA: MIT Press/Bradford Books.

Holland, J. H., K. J. Holyoak, R. E. Nisbett, and P. R. Thagard. (1986). Induction: Processes of Inference, Learning, and Discovery. Cambridge, MA: MIT Press/Bradford Books.

Howson, C., and P. Urbach. (1989). Scientific Reasoning: The Bayesian Approach. La Salle, IL: Open Court.

Langley, P. (1996). Elements of Machine Learning. San Francisco: Morgan Kaufmann.

Mitchell, T. (1997). Machine Learning. New York: McGraw-Hill.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. San Mateo, CA: Morgan Kaufmann.

Rescher, N. (1980). Induction. Oxford: Blackwell.

Thagard, P. (1992). Conceptual Revolutions. Princeton: Princeton University Press.

Thagard, P., and K. Verbeurgt. (1998). Coherence as constraint satisfaction. Cognitive Science 22: 1-24.