Deductive Reasoning

Deductive reasoning is a branch of cognitive psychology investigating people's ability to recognize a special relation between statements. Deductive LOGIC is a branch of philosophy and mathematics investigating the same relation. We can call this relation entailment, and it holds between a set of statements (the premises) and a further statement (the conclusion) if the conclusion must be true whenever all the premises are true. Consider the premises "Calvin bites his nails while working" and "Calvin is working" and the conclusion "Calvin is biting his nails." Because the latter statement must be true whenever both the former statements are, these premises entail the conclusion. By contrast, the premises "Calvin bites his nails while working" and "Calvin is biting his nails" do not entail "Calvin is working," inasmuch as it is possible that Calvin bites his nails off the job.
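The definition can be illustrated mechanically. If "Calvin bites his nails while working" is read (as a simplifying assumption) as the conditional "if Calvin is working, then Calvin is biting his nails," then an argument is an entailment just in case no assignment of truth values to the atomic statements makes every premise true and the conclusion false. The following Python sketch is only an illustration of that definition, not part of the literature under discussion; the propositional reading of the first premise and the names used in the code are assumptions.

    from itertools import product

    def entails(premises, conclusion, atoms):
        # True if the conclusion is true in every truth assignment
        # that makes all the premises true.
        for values in product([True, False], repeat=len(atoms)):
            world = dict(zip(atoms, values))
            if all(p(world) for p in premises) and not conclusion(world):
                return False  # counterexample: premises true, conclusion false
        return True

    # W = "Calvin is working"; B = "Calvin is biting his nails".
    # Premise 1 is read as the conditional W -> B (an assumption of this sketch).
    atoms = ["W", "B"]
    premise1 = lambda w: (not w["W"]) or w["B"]   # if working, then biting
    working = lambda w: w["W"]
    biting = lambda w: w["B"]

    print(entails([premise1, working], biting, atoms))   # True: the premises entail the conclusion
    print(entails([premise1, biting], working, atoms))   # False: no entailment

The second call returns False because one of the four truth assignments (Calvin not working but biting his nails) makes both premises true and the conclusion false, matching the observation that Calvin may bite his nails off the job.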

Historically, logicians have constructed systems that describe entailments among statements in some domain of discourse. To compare these systems to human intuition, psychologists present to their subjects arguments (premise-conclusion combinations), some of which embody the target entailments. The psychologists ask the subjects to identify those arguments in which the conclusion "follows logically" from the premises (or in which the conclusion "must be true whenever the premises are true"). Alternatively, psychologists can present just the premises and ask the subjects to produce a conclusion that logically follows from them. Whether a subject's answer is correct or incorrect is usually determined by comparing it to the dictates of the logic system.

One purpose of investigating people's ability to recognize entailments is to find out what light (if any) entailment sheds on thinking and to use these findings as a basis for revising theories of logic and theories of mind. Given this goal, certain differences between entailments and psychological decisions about them are uninformative. Subjects' inattention, memory limits, and time limits all restrict their success in distinguishing entailments from nonentailments, but factors such as these affect all human thinking and tell us nothing new about deductive reasoning.

Investigating the role of entailment in thought requires some degree of abstraction from everyday cognitive foibles. But it is not always easy to say how far such abstraction should go. According to GRICE (1989), Lewis (1979), and Sperber and Wilson (1986), among others, ordinary conversational settings impose restrictions on what people say, restrictions that can override or supplement entailments (see PRAGMATICS). If Martha says, "Some of my in-laws are honest," we would probably understand her to imply that some of her in-laws are dishonest. This follows from a conversational principle that enjoins her to make the most informative statement available, all else being equal. If Martha believes that all her in-laws are honest, she should have said so; because she did not say so, we infer that she believes not all are honest. We draw this conversational IMPLICATURE even if we recognize that "Some of my in-laws are honest" does not entail "Some of my in-laws are dishonest."
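The last point can be spelled out with a counterexample model. A situation in which every one of Martha's in-laws happens to be honest makes "Some of my in-laws are honest" true and "Some of my in-laws are dishonest" false, so the first statement cannot entail the second; the dishonesty reading arrives by implicature, not by entailment. The brief Python check below is a sketch of this counterexample only; the three-person domain and the names are invented for the illustration.

    # A counterexample model in which all of Martha's in-laws are honest.
    in_laws = {"Ann": "honest", "Bo": "honest", "Cy": "honest"}

    some_honest = any(status == "honest" for status in in_laws.values())
    some_dishonest = any(status == "dishonest" for status in in_laws.values())

    # Premise true, putative conclusion false: no entailment.
    print(some_honest, some_dishonest)   # prints: True False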

Experimental results suggest that people do not abandon their conversational principles when they become subjects in reasoning experiments (e.g., Fillenbaum 1977; Sperber, Cara, and Girotto 1995). Moreover, conversational implicatures are just one among many types of nondeductive inferences that people routinely employ. In many situations it is satisfactory to reach conclusions that are plausible on the basis of current evidence, even though those conclusions are not necessarily true when the evidence is. It is reasonable to conclude from "Asteroid gamma-315 contains carbon compounds" that "Asteroid gamma-359 contains carbon compounds," even though the first statement might be true and the second false. Attempts to reduce these plausible inferences to entailments have not been successful (as Osherson, Smith, and Shafir 1986 have argued).

Subjects sometimes misidentify these plausible arguments as deductively correct, and psychologists have labeled this tendency a content effect (Evans 1989 contains a review of such effects). These content effects, of course, do not mean that people have no grasp of individual entailments (see, e.g., Braine, Reiser, and Rumain 1984; Johnson-Laird and Byrne 1991; and Rips 1994, for evidence concerning people's mastery of entailments that depend on logical constants such as "and," "if," "or," "not," "for all," and "for some"). Subjects may rely on plausible inferences when it becomes difficult for them to judge whether an entailment holds; they may rely on entailments only when the context pushes them to do so; or they may falsely believe that the experiment calls for plausible inferences rather than for entailments.

However, if there is a principled distinction between entailments and other inference relations and if people routinely fail to observe this distinction, then perhaps they have difficulties with the concept of entailment itself. Some psychologists and some philosophers believe that there is no reasoning process that is distinctive to entailments (e.g., Harman 1986). Some may believe that people (at least those without special logic or math training) have no proper concept of entailment that distinguishes it from other inference relations. The evidence is clouded here, however, by methodological issues (see Cohen 1981). For example, psychologists rarely bother to give their subjects a full explanation of entailment, relying instead on phrases like "logically follows." Perhaps subjects interpret these instructions as equivalent to the vaguer sort of relation indicated in natural language by "therefore" or "thus."

The problem of whether people distinguish entailments is complicated on the logic side by the existence of multiple logic systems (see MODAL LOGIC). There is no single logic that captures all purported entailments; rather, many proposed systems formalize entailments in different domains. Some systems are supersets of others, adding new logical constants to a core logic in order to describe entailments for concepts like time, belief and knowledge, or permission and obligation. Other systems are rival formulations of the same domain. Psychologists sometimes take subjects' rejection of a specific logic principle as evidence of failure in the subjects' reasoning; however, some such rejections may instead be the result of an incorrect choice of a logic standard. According to many philosophers (e.g., Goodman 1965), justification of a logic system depends in part on how close it comes to human intuition. If so, subjects' performance may sometimes be grounds for revision in logic.

The variety of logic systems also raises the issue of whether human intuitions about entailment are similarly varied. According to one view, the intuitions belong to a unified set that incorporates the many different types of entailment. Within this set, people may recognize entailments that are specialized for broad domains, such as time, obligation, and so on; but intuitions about each domain are internally consistent. Rival analyses of a specific constant (e.g., "it ought to be the case that . . .") compete for which gives the best account of reasoning. According to a second view, however, there are many different intuitions about entailment, even within a single domain. Rival analyses for "it ought to be the case that . . ." may then correspond to different (psychologically real) concepts of obligation, each with its associated inferences (cf. Lemmon 1959).

The first view lends itself to a theory in which people automatically translate natural language arguments into a single LOGICAL FORM on which inference procedures operate. The second view suggests a more complicated process: when subjects decide whether a natural language argument is deductively correct, they may perform a kind of model-fitting, determining if any of their mental inference packages makes the argument come out right (as Miriam Bassok has suggested, personal communication, 1996). Both views have their advantages, and both deserve a closer look.


-- Lance J. Rips

References

Braine, M. D. S., B. J. Reiser, and B. Rumain. (1984). Some empirical justification for a theory of natural propositional logic. In G. H. Bower, Ed., Psychology of Learning and Motivation, vol. 18. Orlando: Academic Press.

Cohen, L. J. (1981). Can human irrationality be experimentally demonstrated? Behavioral and Brain Sciences 4:317-370.

Evans, J. St. B. T. (1989). Bias in Human Reasoning. Hillsdale, NJ: Erlbaum.

Fillenbaum, S. (1977). Mind your p's and q's: the role of content and context in some uses of and, or and if. In G. H. Bower, Ed., Psychology of Learning and Motivation, vol. 11. Orlando: Academic Press.

Goodman, N. (1965). Fact, Fiction, and Forecast. 2nd ed. Indianapolis: Bobbs-Merrill.

Grice, H. P. (1989). Studies in the Way of Words. Cambridge, MA: Harvard University Press.

Harman, G. (1986). Change in View. Cambridge, MA: MIT Press.

Johnson-Laird, P. N., and R. M. J. Byrne. (1991). Deduction. Hillsdale, NJ: Erlbaum.

Lemmon, E. J. (1959). Is there only one correct system of modal logic? Proceedings of the Aristotelian Society 23:23-40.

Lewis, D. (1979). Scorekeeping in a language game. Journal of Philosophical Logic 8:339-359.

Osherson, D. N., E. E. Smith, and E. B. Shafir. (1986). Some origins of belief. Cognition 24:197-224.

Rips, L. J. (1994). The Psychology of Proof. Cambridge, MA: MIT Press.

Sperber, D., F. Cara, and V. Girotto. (1995). Relevance theory explains the selection task. Cognition 57:31-95.

Sperber, D., and D. Wilson. (1986). Relevance. Cambridge, MA: Harvard University Press.

Further Readings

Braine, M. D. S., and D. P. O'Brien. (1998). Mental Logic. Mahwah, NJ: Erlbaum.

Cheng, P. W., K. J. Holyoak, R. E. Nisbett, and L. M. Oliver. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology 18:293-328.

Evans, J. St. B. T., S. E. Newstead, and R. M. J. Byrne. (1993). Human Reasoning. Hillsdale, NJ: Erlbaum.

Nisbett, R. E. (1993). Rules for Reasoning. Hillsdale, NJ: Erlbaum.

Oaksford, M., and N. Chater. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review 101:608-631.

Polk, T. A., and A. Newell. (1995). Deduction as verbal reasoning. Psychological Review 102:533-566.