Logical Omniscience, Problem of

Knowers or believers are logically omniscient if they know or believe all of the consequences of their knowledge or beliefs. That is, x is a logically omniscient believer (knower) if and only if the set of all of the propositions believed (known) by x is closed under logical consequence. It is obvious that if belief and knowledge are understood in their ordinary sense, then no nonsupernatural agent, real or artificial, will be logically omniscient. Despite this obvious fact, many formal representations of states of knowledge and belief, and some explanations of what it is to know or believe, have the consequence that agents are logically omniscient. This is why there is a problem of logical omniscience.
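The closure condition can be stated compactly (writing B_x for the set of propositions believed by x; the notation is introduced here for illustration and is not part of the entry's text):

```latex
% Logical omniscience as deductive closure of the belief set B_x:
% whatever follows logically from believed propositions is itself believed.
\[
  \Gamma \subseteq B_x \quad\text{and}\quad \Gamma \models \varphi
  \qquad\Longrightarrow\qquad \varphi \in B_x .
\]
```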

There are a number of different formal representations of knowledge and belief that face a problem of logical omniscience. POSSIBLE WORLDS SEMANTICS for knowledge and belief, first developed by Jaakko Hintikka, represents a state of knowledge by a set of possible worlds -- the worlds that are epistemically possible for the knower. According to this analysis, x knows that P if and only if P is true in all epistemically possible worlds. Epistemic models using this kind of analysis have been widely applied by theoretical computer scientists studying distributed systems (see MULTIAGENT SYSTEMS), and by economists studying GAME THEORY. As Hintikka noted from the beginning of the development of semantic models for epistemic logic, this analysis implies that knowers are logically omniscient: any logical consequence of what is known is true in every world in which the known propositions are true, hence in every epistemically possible world, and so is itself known.
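The argument can be made concrete in a small computational model. The following is a minimal sketch of a possible-worlds structure; the representation of worlds as truth assignments and the function names are illustrative assumptions, not Hintikka's formalism:

```python
# Minimal sketch of why possible-worlds models force logical omniscience.
# Worlds are truth assignments to atomic sentences; a proposition is the
# set of worlds where it holds; x knows P iff P is true at every
# epistemically possible world.
from itertools import product

ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=len(ATOMS))]

def proposition(formula):
    """The set of worlds (as indices) at which `formula` holds."""
    return frozenset(i for i, w in enumerate(WORLDS) if formula(w))

def knows(epistemic, prop):
    """x knows prop iff every epistemically possible world is a prop-world."""
    return epistemic <= prop

# Suppose every world that is epistemically possible for x makes p-and-q true.
epistemic = proposition(lambda w: w["p"] and w["q"])

p_and_q = proposition(lambda w: w["p"] and w["q"])
q       = proposition(lambda w: w["q"])            # a consequence of p-and-q
p_or_q  = proposition(lambda w: w["p"] or w["q"])  # another consequence

# Any consequence of a known proposition holds at every world where the
# known proposition holds, hence at every epistemically possible world --
# so it is automatically "known", however unobvious it may be.
assert knows(epistemic, p_and_q)
assert knows(epistemic, q)
assert knows(epistemic, p_or_q)
```

Nothing in the model records how difficult a consequence is to recognize; the subset test does all the work, which is exactly why logical omniscience falls out automatically.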

Because any probability function must assign probability one to every logical truth, and must assign to any logical consequence of a proposition P a probability at least as great as the probability of P (at least if one holds fixed the context in which probability assessments are made; see RATIONAL DECISION MAKING), any use of probability theory to represent the beliefs and partial beliefs of an agent will face a version of the problem of logical omniscience.
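Spelled out (a standard textbook derivation, presented here for convenience):

```latex
% From the probability axioms: tautologies receive probability one,
% and logical entailment can never lower probability.
\[
  \models P \;\Longrightarrow\; \Pr(P) = 1,
  \qquad
  P \models Q \;\Longrightarrow\; \Pr(Q) \ge \Pr(P).
\]
% So full belief (probability one) is closed under logical consequence:
\[
  \Pr(P) = 1 \quad\text{and}\quad P \models Q
  \;\Longrightarrow\; \Pr(Q) = 1 .
\]
```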

It is not only abstract formal representations, but also some philosophical explanations of the nature of belief and knowledge that seem to imply that knowers and believers are logically omniscient. First, pragmatic or INTENTIONAL STANCE accounts of belief assume that belief and desire are correlative dispositions displayed in rational action. Roughly, according to such accounts, to believe that P is to act in ways that would be apt in situations in which P (together with one's other beliefs) is true. This kind of analysis of belief will imply that believers are logically omniscient. Second, because the logical consequences of any information carried by some state of a system are also information implicit in the state (see also INFORMATIONAL SEMANTICS and PROPOSITIONAL ATTITUDES), any account of knowledge based on INFORMATION THEORY will face a prima facie problem of logical omniscience.

There are two contrasting ways to reconcile a theory implying that agents are logically omniscient with the obvious fact that they are not. First, one may take the theory to represent a special sense of knowledge or belief that diverges from the ordinary one. For example, one may take the theory to be modeling implicit knowledge, understood to include, by definition, all of the consequences of one's knowledge; ordinary knowers are logically omniscient with respect to their implicit knowledge, but have no extraordinary computational powers. Alternatively, one may take the theory to be modeling the knowledge (in the ordinary sense) of an idealized agent: a fictional agent with unlimited computational capacities that enable her to make all of her implicit knowledge explicit.

Either of these approaches may succeed in reconciling the counterintuitive consequences of theories of belief and knowledge with the phenomena, but there remains a problem, for the first approach, of explaining what explicit knowledge is, and how it is distinguished from merely implicit knowledge. And the second approach must explain the relevance of an idealized agent, all of whose implicit beliefs are available, to the behavior of real agents. If the knowledge and beliefs of nonideal agents -- agents who have only BOUNDED RATIONALITY -- are to contribute to an explanation of their behavior, we need to be able to distinguish the beliefs that are accessible or available to the agent from those that are not, and to do this, we need an account of what it means for a belief to be available. Because a belief might be available to influence the rational behavior of an agent even if the agent is unable to produce or recognize a linguistic expression of the belief, it will not suffice to distinguish articulate beliefs -- those beliefs that an agent can express or to which he is disposed to assent. And because one's beliefs about one's beliefs may themselves be merely implicit, and unavailable, one cannot explain the difference between available and merely implicit belief in terms of higher-order belief.

It may appear to be an advantage of a LANGUAGE OF THOUGHT account of belief that it avoids the problem of logical omniscience. If it is assumed that an agent's explicit beliefs are beliefs encoded in a mental language and stored in the "belief box," then one's theory will not imply that the consequences of the agent's explicit beliefs are also explicit beliefs. But explicit belief in this sense is neither necessary nor sufficient for a plausible notion of accessible or available belief. Although the immediate and obvious consequences of one's explicit beliefs may count intuitively as beliefs in the ordinary sense, thus also as available beliefs even if they are not explicitly represented, beliefs that are explicitly represented may nevertheless remain inaccessible. If the set of explicit beliefs is large, the search required to access an explicit belief could be a nontrivial computational task.
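A toy sketch makes both points. The "belief box" is the term from the LANGUAGE OF THOUGHT literature; the data structure and toy syntax below are our illustrative assumptions:

```python
# Explicit beliefs are just the stored sentences, so the set is not
# closed under consequence -- but storage alone does not make a belief
# available: retrieval is itself a search problem.

belief_box = {"p", "p -> q"}   # sentences of the mental language (toy syntax)

def explicitly_believes(sentence: str) -> bool:
    # Membership in the store, not entailment: "q" follows from the box
    # by modus ponens, yet it is not an explicit belief.
    return sentence in belief_box

assert explicitly_believes("p -> q")
assert not explicitly_believes("q")      # a consequence, but not stored

# Conversely, a stored belief can be practically inaccessible: if the
# box is large and poorly indexed, finding the relevant sentence is a
# nontrivial computation, not a free lookup.
large_box = [f"fact_{i}" for i in range(10**6)]
needle = "fact_999999"
found = any(s == needle for s in large_box)   # linear search over the box
assert found
```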

A general problem for the analysis of available belief is that the distinction between beliefs that are available or accessible and those that are not can be drawn only relative to the particular use to which a belief is put. Consider the talented but inarticulate chess player whose implicit knowledge of the strategic situation is available to guide her play, but not to explain or justify her choices.

While it is obvious that real agents are never logically omniscient, it is not at all clear how to give a plausible account of knowledge and belief that does not have the consequence that they are.

-- Robert Stalnaker

Further Readings

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.

Fagin, R., J. Y. Halpern, Y. Moses, and M. Y. Vardi. (1995). Reasoning about Knowledge. Cambridge, MA: MIT Press.

Hintikka, J. (1962). Knowledge and Belief: An Introduction to the Logic of the Two Notions. Ithaca, NY: Cornell University Press.

Levesque, H. J. (1984). A logic of implicit and explicit belief. In Proceedings of the National Conference on Artificial Intelligence (AAAI-84). Menlo Park, CA: AAAI Press, pp. 198-202.

Lipman, B. L. (1994). An axiomatic approach to the logical omniscience problem. In R. Fagin, Ed., Theoretical Aspects of Reasoning about Knowledge: Proceedings of the Fifth Conference. San Francisco: Morgan Kaufmann, pp. 182-196.

Parikh, R. (1987). Knowledge and the problem of logical omniscience. In Z. W. Ras and M. Zemankova, Eds., Methodologies for Intelligent Systems. Amsterdam: Elsevier, pp. 432-439.

Stalnaker, R. (1991). The problem of logical omniscience, I. Synthese 89:425-440.