Epistemology and cognition is the confluence of the philosophy of knowledge and the science of cognition. Epistemology is concerned with the prospects for human knowledge, and because these prospects depend on the powers and frailties of our cognitive equipment, epistemology must work hand in hand with cognitive science. Epistemology centers on normative or evaluative questions about cognition: what are the good or right ways to think, reason, and form beliefs? But normative assessments of cognitive systems and activities must rest on their descriptive properties, and characterizing those descriptive properties is a task for cognitive science. Historically, rationalism and empiricism exemplified this approach, assigning different values to reason and the senses on the basis of different descriptions of their capacities.
Epistemic evaluation can take many forms. First, it can evaluate entire cognitive systems, selected cognitive subsystems, or particular cognitive performances. Second, there are several possible criteria of epistemic assessment. (1) A system or process might be judged by its accuracy or veridicality, including its reliability -- the proportion of true judgments it generates -- and its power -- the breadth of tasks or situations in which it issues accurate judgments (Goldman 1986). (2) It can be judged by its conformity or nonconformity with normatively approved formal standards, such as deductive validity or probabilistic coherence. (3) It might be evaluated by its adaptiveness, or conduciveness to desire-satisfaction or goal-attainment (Stich 1990). Normative assessments of these kinds are not only of theoretical interest, but also admit of several types of application. You might judge another person's belief to be untrustworthy if the process that produced it is unreliable, or you might deploy a powerful intellectual strategy to improve your own cognitive attainments.
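The reliability and power criteria lend themselves to a simple quantitative sketch. The following Python fragment uses invented records (nothing here comes from Goldman 1986), treating reliability as the proportion of true judgments among the judgments a process actually delivers, and power as the proportion of tasks on which it delivers a true judgment at all:

```python
# Illustrative toy data: outcomes of a hypothetical belief-forming process
# across six tasks. On some tasks it issues no judgment at all ("none").
records = ["correct", "correct", "incorrect", "none", "correct", "none"]

answered = [r for r in records if r != "none"]
correct = [r for r in records if r == "correct"]

# Reliability: proportion of true judgments among judgments actually made.
reliability = len(correct) / len(answered)

# Power: proportion of all tasks on which the process yields a true judgment.
power = len(correct) / len(records)

print(reliability)  # 0.75
print(power)        # 0.5
```

A process can thus be highly reliable but weak (it rarely ventures a judgment, but is usually right when it does), or powerful but unreliable; the two dimensions vary independently.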
To illustrate the reliability and power criteria, as well as the two types of application just mentioned, consider MEMORY and its associated processes (Schacter 1996). Studies partly prompted by "recovered" memory claims show that memory is strongly susceptible to postevent distortions, in adults and, especially, in children. Suggestive questioning of preschool children can have devastating effects on the reliability of their memories (Ceci 1995). Knowing that someone underwent suggestive questioning should give third parties grounds for distrusting related memories. Another application is the use of encoding strategies to boost memory power. A runner hit on the technique of coding a series of digits in terms of running times. After months of practice with this coding strategy, he could recall over eighty digits in correct order after being exposed to them only once (Chase and Ericsson 1981). Analogous encoding strategies can help anyone increase their memory power. A third memory example illustrates tradeoffs between different epistemic standards. Reliable recollection often depends on "source memory," the recall of how you encountered an object or event. Witnesses have erroneously identified alleged criminals because they had seen them outside the context of the crime, for example, on television. A sense of familiarity was retained, but the source of this familiarity was forgotten (Thomson 1988). People often fail to keep track of the origins of their experience or beliefs. Is this epistemically culpable? The adaptiveness criterion suggests otherwise: forgetting sources is an economical response to the enormous demands on memory (Harman 1986).
The formal-standards criterion of epistemic normativity is applied to both DEDUCTIVE REASONING and PROBABILISTIC REASONING. It is unclear exactly which formal standards are suitable criteria for epistemic rationality. Must a cognitive system possess all sound rules of a natural deduction system to qualify as deductively rational? Or would many such rules suffice? Must these rules be natively endowed, or would it suffice that the system acquires them under appropriate experience? Whether human deductive capacities qualify as rational depends on which normative criterion is chosen, as well as on the descriptive facts concerning these capacities, about which there is ongoing controversy.
One psychological approach says that people's deductive competence does come in the form of abstract rules akin to natural deduction rules (Rips 1994). On this view, people's native endowments might well qualify as rational, at least under the weaker criterion ("many rules") mentioned above. Other approaches deny that people begin with purely abstract deductive principles, a conclusion supported by content effects discovered in connection with Wason's selection task. Cheng and Holyoak (1985) suggest that generalized rules are induced from experience because of their usefulness. They explain (modest) deductive competence by reference to inductive capacities for acquiring rules, thereby allowing for rationality under one of the foregoing proposals. Cosmides (1989) contends that evolution provided us with specific contentful rules, ones useful in the context of social exchange. Reasoning capacities might qualify as rational under the social exchange approach if the criterion of adaptiveness is applied.
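The content effects at issue arise in Wason's selection task, in which subjects must say which cards to turn over to test a conditional rule "if P then Q." The normatively correct choice is the P card and the not-Q card, since only those can falsify the rule; subjects typically choose P and Q instead. A minimal sketch of the abstract version (the card faces are the standard illustrative ones, not drawn from the sources cited here):

```python
# Wason selection task: test the rule "if a card has a vowel on one side,
# then it has an even number on the other." Each card shows one face.
cards = ["E", "K", "4", "7"]

def is_vowel(face):
    return face in "AEIOU"

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

# Only a P card (vowel) or a not-Q card (odd number) can falsify "P -> Q",
# so only those must be turned over; "K" and "4" are logically irrelevant.
must_turn = [c for c in cards if is_vowel(c) or is_odd_number(c)]
print(must_turn)  # ['E', '7']
```

The content-effect findings are that performance improves dramatically when the same logical structure is clothed in familiar or deontic content (e.g., permission rules), which is what the pragmatic-schemas and social-exchange accounts seek to explain.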
The dominant approach to probabilistic reasoning is the "heuristics and biases" approach (Tversky and Kahneman 1974; see JUDGMENT HEURISTICS). Its proponents deny that people reason by means of normatively appropriate rules. Instead, people allegedly use shortcuts such as the "representativeness heuristic," which can lead to violations of normative rules, such as neglect of base rates and commission of the conjunction fallacy (Tversky and Kahneman 1983). If someone resembles the prototype of a feminist bank teller more than the prototype of a bank teller, subjects tend to rate the probability of her being both a feminist and a bank teller higher than the probability of her being a bank teller. According to the probability calculus, however, a conjunction cannot have a higher probability than either of its conjuncts.
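The conjunction rule itself is a trivial consequence of the probability calculus: every case of "bank teller and feminist" is also a case of "bank teller," so the conjunction's probability can never exceed the conjunct's. A small sketch over an invented frequency distribution (the numbers are illustrative only, not data from the studies cited):

```python
# Invented frequencies in a population of 1000 people, cross-classified
# by two binary attributes: bank teller and feminist.
counts = {
    ("teller", "feminist"): 20,
    ("teller", "not-feminist"): 10,
    ("not-teller", "feminist"): 300,
    ("not-teller", "not-feminist"): 670,
}
total = sum(counts.values())

# P(teller) counts every teller, feminist or not.
p_teller = (counts[("teller", "feminist")]
            + counts[("teller", "not-feminist")]) / total
# P(teller and feminist) counts only the overlap.
p_conjunction = counts[("teller", "feminist")] / total

# The conjunction rule: this holds for any distribution whatsoever.
assert p_conjunction <= p_teller
print(p_teller)       # 0.03
print(p_conjunction)  # 0.02
```

Rating the conjunction as more probable than the conjunct, as subjects do in the Linda problem, is therefore a violation of the calculus no matter what the true frequencies are.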
Recent literature challenges the descriptive claims of the heuristics approach as well as its normative conclusions. Gigerenzer (1991) finds that many so-called cognitive biases or illusions "disappear" when tasks are presented in frequentist terms. People do understand probabilities, but only in connection with relative frequencies, not single cases. Koehler (1996) surveys the literature on the base-rate fallacy and disputes on empirical grounds the conventional wisdom that base rates are routinely ignored. He adds that because base rates are not generally equivalent to prior probabilities, a Bayesian normative standard does not mandate such heavy emphasis on base rates. It is not easy to determine, then, whether people have a general competence at probabilistic reasoning, or under what circumstances such a competence is manifested. A related question concerns the forms of teaching that can successfully train people in normatively proper reasoning (Nisbett 1993).
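Both points can be illustrated with a standard screening example (the particular numbers are invented for illustration, not taken from Gigerenzer or Koehler). With a low base rate and a fairly accurate test, most positive results are still false positives, a fact that Bayes' rule delivers directly and that becomes transparent when the probabilities are recast as natural frequencies:

```python
# Invented screening problem: base rate 1%, hit rate 80%, false-alarm rate 9.6%.
base_rate = 0.01     # prior probability of the condition
hit_rate = 0.80      # P(positive | condition)
false_alarm = 0.096  # P(positive | no condition)

# Bayes' rule: P(condition | positive test).
posterior = (hit_rate * base_rate) / (
    hit_rate * base_rate + false_alarm * (1 - base_rate)
)
print(round(posterior, 3))  # 0.078

# The same computation in natural-frequency format: of 1000 people,
# 10 have the condition and 8 of them test positive; of the 990 without
# it, about 95 test positive. So a positive test indicates the condition
# in only 8 of 103 cases.
print(round(8 / (8 + 95), 3))  # 0.078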
Epistemologists are traditionally interested in deciding which beliefs or classes of belief meet the standard for knowledge, where knowledge includes at least justified true belief. The crucial normative notion here is JUSTIFICATION (rather than rationality). Can cognitive science help address this question? One affirmative answer is supported by a reliable-process theory of justification (Goldman 1986). If a justified belief is (roughly) a belief produced by reliable cognitive processes, that is, processes that usually output truths, then cognitive science can assist epistemology by determining which mental processes are reliable. Reliabilism is not, however, the only theory of justification that promotes a tight link between epistemology and cognitive science. Any theory that emphasizes the cognitive sources or context of belief can do so.
Assuming that humans do have extensive knowledge, both epistemologists and cognitive scientists ask how such knowledge is possible. During much of the twentieth century, philosophers and psychologists assumed that general-purpose, domain-neutral learning mechanisms, such as deductive and inductive reasoning, were responsible. The most influential approach in current cognitive science, however, is DOMAIN SPECIFICITY (Hirschfeld and Gelman 1994). On this view, the mind is less an all-purpose problem solver than a collection of independent subsystems designed to perform circumscribed tasks. Whether in language, vision, FOLK PSYCHOLOGY, or other domains, special-purpose modules have been postulated. As the philosopher Goodman (1955) emphasized, wholly unconstrained INDUCTION leads to indeterminacy or antinomy. To resolve this indeterminacy, cognitive scientists have sought to identify constraints on learning or representation in each of multiple domains (Keil 1981; Gelman 1990).
Ceci, S. (1995). False beliefs: Some developmental and clinical considerations. In D. Schacter, J. Coyle, G. Fischbach, M. Mesulam, and L. Sullivan, Eds., Memory Distortion. Cambridge, MA: Harvard University Press, pp. 91-128.
Chase, W., and K. Ericsson. (1981). Skilled memory. In J. Anderson, Ed., Cognitive Skills and Their Acquisition. Hillsdale, NJ: Erlbaum.
Cheng, P., and K. Holyoak. (1985). Pragmatic reasoning schemas. Cognitive Psychology 17:391-416.
Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Cognition 31:187-276.
Gelman, R. (1990). First principles organize attention to and learning about relevant data: Number and the animate-inanimate distinction as examples. Cognitive Science 14:79-106.
Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond "heuristics and biases." European Review of Social Psychology 2:83-115.
Goldman, A. (1986). Epistemology and Cognition. Cambridge, MA: Harvard University Press.
Goodman, N. (1955). Fact, Fiction, and Forecast. Cambridge, MA: Harvard University Press.
Harman, G. (1986). Change in View. Cambridge, MA: MIT Press.
Hirschfeld, L., and S. Gelman, Eds. (1994). Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge: Cambridge University Press.
Keil, F. (1981). Constraints on knowledge and cognitive development. Psychological Review 88:197-227.
Koehler, J. (1996). The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behavioral and Brain Sciences 19:1-17.
Nisbett, R., Ed. (1993). Rules for Reasoning. Hillsdale, NJ: Erlbaum.
Rips, L. (1994). The Psychology of Proof. Cambridge, MA: MIT Press.
Schacter, D. (1996). Searching for Memory. New York: Basic Books.
Stich, S. (1990). The Fragmentation of Reason. Cambridge, MA: MIT Press.
Thomson, D. (1988). Context and false recognition. In G. Davies and D. Thomson, Eds., Memory in Context: Context in Memory. Chichester, England: Wiley, pp. 285-304.
Tversky, A., and D. Kahneman. (1974). Judgment under uncertainty: Heuristics and biases. Science 185:1124-1131.
Tversky, A., and D. Kahneman. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review 90:293-315.