The situation
calculus is a language of predicate logic for representing and reasoning
about *action* and *change.* This KNOWLEDGE REPRESENTATION language was originally developed by John McCarthy
(McCarthy 1963; McCarthy and Hayes 1969) and is commonly used in
artificial intelligence for purposes such as predicting the effects
of actions on a system's state, PLANNING actions to
achieve given goals (Green 1969), diagnosing what events might explain
some observations, and analyzing the operation of programs that
perform such tasks. Other formalisms used for the same purposes
include modal logics such as dynamic logic and various logics for TEMPORAL REASONING (Goldblatt 1987), as well as the event calculus
(Kowalski and Sergot 1986).

In the situation calculus, the initial situation of the world or
system under consideration is represented by the constant
*S*_{0}; this is simply understood as a situation where
no actions of interest have yet occurred. The situation that results
from an action *a* being performed in a situation *s* is
represented by the term *do*(*a*, *s*). For
example, *do*(*pickup*(*Rob*,
*Package*_{1}), *S*_{0}) might represent
the situation where the robot *Rob* has performed the action of
picking up the object *Package*_{1} in the initial
situation *S*_{0}; similarly,
*do*(*goTo*(*Rob*, *MailRoom*),
*do*(*pickup*(*Rob*,
*Package*_{1}), *S*_{0})) might
represent the situation where *Rob* subsequently moved to the
*MailRoom*. Thus, a situation is essentially a possible world
history, viewed as a sequence of actions.
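The nesting of *do* terms can be sketched concretely. Below is a minimal illustration (not from the original article; all names are the article's examples encoded ad hoc) representing situations as nested Python tuples and recovering the action history they encode:

```python
# A situation is a term: S0, or do(a, s). Here do(a, s) is the tuple
# ("do", a, s), and actions are tuples like ("pickup", "Rob", "Package1").

S0 = "S0"  # the initial situation: no actions of interest have occurred

def do(action, situation):
    """Build the term do(action, situation)."""
    return ("do", action, situation)

def pickup(agent, obj):
    return ("pickup", agent, obj)

def goTo(agent, place):
    return ("goTo", agent, place)

# do(goTo(Rob, MailRoom), do(pickup(Rob, Package1), S0)):
s1 = do(pickup("Rob", "Package1"), S0)
s2 = do(goTo("Rob", "MailRoom"), s1)

def history(s):
    """Recover the action sequence (world history) encoded by a situation."""
    actions = []
    while s != S0:
        _, a, s = s
        actions.append(a)
    return list(reversed(actions))

print(history(s2))
# [('pickup', 'Rob', 'Package1'), ('goTo', 'Rob', 'MailRoom')]
```

Reading the history off the term makes explicit that a situation *is* its action sequence, not a state.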

The situation calculus provides a way of specifying what the effects of an action are. For example, the axiom

¬ *Heavy*(*o*, *s*) → *Holding*(*x*, *o*, *do*(*pickup*(*x*, *o*), *s*))

could be used to state that an agent *x* will be holding an
object *o* in the situation that results from *x*
performing the action of picking up *o* in situation *s*
provided that *o* is not heavy. Note that relations whose truth
value depends on the situation, such as *Heavy* and
*Holding*, take a situation argument; such relations are called
*fluents*. From the above axiom and the assumption ¬
*Heavy*(*Package*_{1}, *S*_{0}),
that is, that *Package*_{1} is not heavy initially, one
could conclude that *Holding*(*Rob*,
*Package*_{1},
*do*(*pickup*(*Rob*,
*Package*_{1}), *S*_{0})), that is, that
*Rob* would be holding *Package*_{1} after
picking it up in the initial situation. One can also use the situation
calculus to represent the preconditions of actions, that is, the
conditions under which it is possible to perform the action. Note that
a situation calculus model of a system is just a theory in classical
first-order logic and thus an ordinary theorem proving system can be
used to infer consequences from it. This may be an advantage over the
alternative modal logic formalisms.
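The inference just described can be sketched as an evaluator (a hypothetical encoding, continuing the tuple representation of situations; not code from the article). The fluent *Holding* is checked against the effect axiom for *pickup*, with *Heavy* determined by initial-situation facts:

```python
S0 = "S0"

def do(a, s):
    return ("do", a, s)

# Initial facts: the set of objects that are heavy in S0 (empty here,
# so Package1 is not heavy, matching the assumption in the text).
heavy_in_S0 = set()

def heavy(obj, s):
    # Heavy is unaffected by any action in this sketch, so S0 facts suffice.
    return obj in heavy_in_S0

def holding(agent, obj, s):
    """Effect axiom: Holding(x, o, do(pickup(x, o), s)) if o is not heavy in s."""
    if s == S0:
        return False  # nothing is held initially
    _, action, prev = s
    return action == ("pickup", agent, obj) and not heavy(obj, prev)

s1 = do(("pickup", "Rob", "Package1"), S0)
print(holding("Rob", "Package1", s1))  # True
```

Note that this evaluator only inspects the last action: it would wrongly forget *Holding* after any subsequent unrelated action. That gap is precisely what the frame problem, discussed next, is about.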

Some of the most problematic features of commonsense reasoning manifest themselves in reasoning about action. For instance, although it seems reasonable that a model of a system should include an effect axiom for every action affecting a fluent (actions typically affect very few fluents), one should not have to include an axiom for every fluent that remains unchanged by an action (e.g., that going to a location does not change the color of objects). Yet this is just what is required by a straightforward application of logic to such reasoning. Dealing with the resulting mass of axioms is error-prone and computationally costly. This problem of representing succinctly and computing effectively with the "invariants" of actions has been called the FRAME PROBLEM by McCarthy and Hayes (1969) and has been the subject of much recent research (Elkan 1992; Hanks and McDermott 1986; Pylyshyn 1987; Reiter 1991; Sandewall 1994; Shanahan 1997). It has been one of the major motivations for research on nonmonotonic logics, where a conclusion may be invalidated by additional assumptions (Reiter 1987). A nonmonotonic logic might attempt to solve the frame problem by representing the "persistence" assumption that fluents remain unaffected by actions unless they are known to be affected.
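The persistence idea can be sketched in the style of a successor-state axiom (Reiter 1991); the encoding below is illustrative, with hypothetical action names. One rule per fluent says: the fluent holds after an action iff the action made it true, or it held before and the action did not make it false. This single rule replaces a mass of frame axioms, one per action/fluent pair:

```python
S0 = "S0"

def do(a, s):
    return ("do", a, s)

def holding(agent, obj, s):
    """Successor-state style: made true, or persisted and not made false."""
    if s == S0:
        return False
    _, a, prev = s
    made_true = (a == ("pickup", agent, obj))    # effect of pickup
    made_false = (a == ("putdown", agent, obj))  # effect of a hypothetical putdown
    # Persistence: any other action leaves the fluent's value unchanged.
    return made_true or (holding(agent, obj, prev) and not made_false)

s = do(("goTo", "Rob", "MailRoom"), do(("pickup", "Rob", "Package1"), S0))
print(holding("Rob", "Package1", s))  # True: goTo leaves Holding unchanged
```

No axiom was needed to say that *goTo* does not affect *Holding*; persistence handles every unmentioned action at once.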

Note that some approaches to reasoning about action or processes, such as STRIPS-based planning (Fikes and Nilsson 1971) and most work in process control and software verification, use simple state-database update techniques and avoid the frame problem because they assume that one always has complete information about the world state and the effects of actions. But this assumption is rarely justified for intelligent agents (e.g., a robot must acquire information about its environment through its sensors as it operates). This is where a logical approach, with its ability to represent incomplete information, becomes essential.
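The STRIPS-style alternative can be sketched as follows (an illustrative encoding, not Fikes and Nilsson's system): a state is a complete set of facts, and each action simply deletes and adds facts. Because the fact set is complete, every unmentioned fact persists automatically, by set membership:

```python
# Complete world state: everything true is listed; everything else is false.
state = {("at", "Rob", "Office"), ("at", "Package1", "Office")}

def apply_action(state, add, delete):
    """STRIPS-style update: successor state = (state - delete) + add."""
    return (state - delete) | add

# pickup(Rob, Package1): Rob now holds the package, which is no longer
# independently located in the office.
state = apply_action(
    state,
    add={("holding", "Rob", "Package1")},
    delete={("at", "Package1", "Office")},
)

print(("holding", "Rob", "Package1") in state)  # True
print(("at", "Rob", "Office") in state)         # True: untouched facts persist
```

The price of this simplicity is the closed-world assumption built into the state set, which is exactly what an agent with incomplete information cannot afford.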

Another problem viewed as fundamental in the area is called the
*ramification problem*; this concerns the effects of actions
and how they could be represented so as to avoid having to explicitly
mention all indirect effects (e.g., the robot going somewhere changes
the location of the packages it is carrying). Finally, there is also
the *qualification problem* (McCarthy 1977), that is, how to
represent the preconditions of actions when there is in fact a very
large number of things that could prevent an action from happening
(e.g., the robot's being unable to pick up a package because it is
glued to the floor).
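One common response to the ramification problem can be sketched with the article's own example (the encoding and names below are hypothetical): rather than axiomatizing, for every action, how *goTo* moves each carried package, a state constraint says that a held object is wherever its holder is, and the package's location is derived from it:

```python
S0 = "S0"

def do(a, s):
    return ("do", a, s)

def holding(agent, obj, s):
    # Simplified: once picked up, never put down in this sketch.
    if s == S0:
        return False
    _, a, prev = s
    return a == ("pickup", agent, obj) or holding(agent, obj, prev)

def location(thing, s):
    # State constraint: a held object is wherever its holder is.
    if holding("Rob", thing, s):
        return location("Rob", s)
    if s == S0:
        return "Office"  # assumed initial location of everything
    _, a, prev = s
    if a[0] == "goTo" and a[1] == thing:
        return a[2]      # direct effect of goTo on the mover itself
    return location(thing, prev)  # persistence otherwise

s = do(("goTo", "Rob", "MailRoom"), do(("pickup", "Rob", "Package1"), S0))
print(location("Package1", s))  # "MailRoom": indirect effect of Rob's move
```

The effect axiom for *goTo* mentions only the mover; the package's change of location follows indirectly from the constraint, which is the point of the ramification problem.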

The situation calculus continues to be at the center of much knowledge representation research. In recent years, it has been extended to model time and concurrent processes (Gelfond, Lifschitz, and Rabinov 1991; Pinto 1994; Reiter 1996), complex actions (Levesque et al. 1997; De Giacomo, Lespérance, and Levesque 1997), knowledge (Moore 1985), and other mental states. It is also at the core of a recently proposed framework for modeling and programming intelligent agents (Lespérance, Levesque, and Reiter 1999).


De Giacomo, G., Y. Lespérance, and H. J. Levesque. (1997). Reasoning about concurrent execution, prioritized interrupts, and exogenous actions in the situation calculus. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, Japan, pp. 1221-1226.

Elkan, C. (1992). Reasoning about action in first-order logic. Proceedings of the Ninth Biennial Conference of the Canadian Society for Computational Studies of Intelligence. San Mateo, CA: Kaufman, pp. 221-227.

Fikes, R. E., and N. J. Nilsson. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence 2:189-208.

Gelfond, M., V. Lifschitz, and A. Rabinov. (1991). What are the limitations of the situation calculus? In R. Boyer, Ed., Automated Reasoning: Essays in Honor of Woody Bledsoe. Dordrecht: Kluwer, pp. 167-179.

Goldblatt, R. (1987). Logics of Time and Computation. 2nd ed. CSLI Lecture Notes No. 7, Center for the Study of Language and Information, Stanford University.

Green, C. C. (1969). Theorem proving by resolution as a basis for question-answering systems. In B. Meltzer and D. Michie, Eds., Machine Intelligence 4. Edinburgh: Edinburgh University Press, pp. 183-205.

Hanks, S., and D. McDermott. (1986). Default reasoning, nonmonotonic logics, and the frame problem. Proceedings of the 5th National Conference on Artificial Intelligence. Cambridge, MA: MIT Press, pp. 328-333.

Kowalski, R. A., and M. J. Sergot. (1986). A logic-based calculus of events. New Generation Computing 4:67-95.

Lespérance, Y., H. J. Levesque, and R. Reiter. (1999). A situation calculus approach to modeling and programming agents. In A. Rao and M. Wooldridge, Eds., Foundations and Theories of Rational Agency. Dordrecht: Kluwer.

Levesque, H. J., R. Reiter, Y. Lespérance, F. Lin, and R. B. Scherl. (1997). GOLOG: A logic programming language for dynamic domains. Journal of Logic Programming 31:59-84.

McCarthy, J. (1963). Situations, Actions, and Causal Laws. Memo 2, Stanford University Artificial Intelligence Project.

McCarthy, J. (1977). Epistemological problems of artificial intelligence. Proceedings of the Fifth International Joint Conference on Artificial Intelligence: 1038-1044.

McCarthy, J., and P. Hayes. (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Michie, Eds., Machine Intelligence 4. Edinburgh: Edinburgh University Press, pp. 463-502.

Moore, R. C. (1985). A formal theory of knowledge and action. In J. R. Hobbs and R. C. Moore, Eds., Formal Theories of the Common Sense World. Norwood, NJ: Ablex, pp. 319-358.

Pinto, J. (1994). Temporal Reasoning in the Situation Calculus. Ph.D. diss., Dept. of Computer Science, University of Toronto.

Pylyshyn, Z. W., Ed. (1987). The Robot's Dilemma: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex.

Reiter, R. (1987). Nonmonotonic reasoning. Annual Reviews of Computer Science 2:147-186.

Reiter, R. (1991). The frame problem in the situation calculus: A simple solution (sometimes) and a completeness result for goal regression. In V. Lifschitz, Ed., Artificial Intelligence and Mathematical Theory of Computation: Papers in Honor of John McCarthy. San Diego: Academic Press, pp. 359-380.

Reiter, R. (1996). Natural actions, concurrency and continuous time in the situation calculus. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fifth International Conference (KR'96). Cambridge, MA, pp. 2-13.

Sandewall, E. (1994). Features and Fluents: The Representation of Knowledge about Dynamical Systems, vol. 1. New York: Oxford University Press.

Shanahan, M. (1997). Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia. Cambridge, MA: MIT Press.