Intelligent Agent Architecture

Intelligent agent architecture is a model of an intelligent information-processing system that defines its major subsystems, their functional roles, and the flow of information and control among them.

Many complex systems are made up of specialized subsystems that interact in circumscribed ways. In the biological world, for example, organisms have modular subsystems, such as the circulatory and digestive systems, presumably because nature can improve subsystems more easily when interactions among them are limited (see, for example, Simon 1969). These considerations apply as well to artificial systems: vehicles have fuel, electrical, and suspension subsystems; computers have central-processing, mass-storage, and input-output subsystems; and so on. When variants of a system share a common organization into subsystems, it is often useful to characterize abstractly the elements shared by all variants. For example, a family of integrated circuits might vary in clock speed or specialized data operations while sharing a basic instruction set and memory model. In the engineering disciplines, the term architecture has come to refer to generic models of shared structure. Architectures serve as templates, allowing designers to develop, refine, test, and maintain complex systems in a disciplined way.

The benefits of architectures apply to the design of intelligent agents as well. An intelligent agent is a device that interacts with its environment in flexible, goal-directed ways, recognizing important states of the environment and acting to achieve desired results. Clearly, when designing a particular agent, many domain-specific features of the environment must be reflected in its detailed design. Still, the general form of the subsystems underlying intelligent interaction with the environment may carry over from domain to domain. Intelligent agent architectures attempt to capture these general forms and to enforce basic system properties such as soundness of reasoning, efficiency of response, or interruptibility. Many architectures have been proposed that emphasize one or another of these properties, and they can be usefully grouped into three broad categories: the deliberative, the reactive, and the distributed.

The deliberative approach, inspired in part by FOLK PSYCHOLOGY, models agents as symbolic reasoning systems. In this approach, an agent is decomposed into data subsystems that store symbolic, propositional representations, often corresponding to commonsense beliefs, desires, and intentions, and processing subsystems responsible for perception, reasoning, planning, and execution. Some variants of this approach (Genesereth 1983; Russell and Wefald 1991) emphasize formal methods and resemble approaches from formal philosophy of mind and action, especially with regard to soundness of logical reasoning, KNOWLEDGE REPRESENTATION, and RATIONAL DECISION MAKING. Others (Newell 1990) emphasize memory mechanisms, general PROBLEM SOLVING, and search. Deliberative architectures go beyond folk psychology and formal philosophy by giving concrete computational interpretations to abstract processes of representation and reasoning. Ironically, this literal-minded interpretation of mental objects has also been a source of difficulty in building practical agents: symbolic reasoning typically involves substantial search and is of high COMPUTATIONAL COMPLEXITY, and capturing extensive commonsense knowledge in machine-usable form has proved difficult as well. These problems represent significant challenges to the deliberative approach and have stimulated researchers to investigate other paradigms that might address or sidestep them.
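To make the decomposition concrete, the following is a minimal Python sketch of a deliberative control cycle: a symbolic belief store, a goal list, and a perceive-deliberate-act loop. All names are illustrative assumptions, and the planner is a stub standing in for the (typically expensive) symbolic search described above; this sketches the architectural shape, not any particular published system.

```python
from dataclasses import dataclass, field

# Symbolic, propositional representation of a single belief or goal.
@dataclass(frozen=True)
class Belief:
    predicate: str   # e.g. "at"
    args: tuple      # e.g. ("robot", "room1")

@dataclass
class DeliberativeAgent:
    beliefs: set = field(default_factory=set)   # symbolic world model
    goals: list = field(default_factory=list)   # desired propositions

    def perceive(self, percepts):
        """Revise the belief store from new sensor reports."""
        for p in percepts:
            self.beliefs.add(p)

    def plan(self):
        """Stub planner: a real deliberative agent would run a
        (possibly costly) symbolic search over the belief store."""
        return [("achieve", self.goals[0])] if self.goals else []

    def step(self, percepts):
        self.perceive(percepts)    # update representations
        plan = self.plan()         # deliberation happens here
        return plan[0] if plan else ("noop",)

agent = DeliberativeAgent()
agent.goals.append(Belief("at", ("robot", "room2")))
print(agent.step([Belief("at", ("robot", "room1"))]))
```

The point of the sketch is the separation of concerns: perception writes to a declarative store, and action is chosen only by reasoning over that store, which is exactly where the computational-complexity worries noted above enter.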

The reactive approach to intelligent-agent design, for example, begins with the intuition that although symbolic reasoning may be a good model for certain cognitive processes, it does not characterize well the information processing involved in routine behavior such as driving, cooking, taking a walk, or manipulating everyday objects. These abilities, simple for humans, remain distant goals for robotics and seem to impose hard real-time requirements on an agent. Although these requirements are not in principle inconsistent with deliberative architectures (Georgeff and Lansky 1987), neither are they guaranteed, and in practice they have not been easily satisfied. Proponents of the reactive approach, therefore, have argued for architectures that ensure real-time behavior as part of their fundamental design. Drawing on the mathematical and engineering tradition of feedback control, advocates of reactive architectures model agent and environment as coupled dynamic systems, the inputs of each being the outputs of the other. The agent contains behavioral modules that are self-contained feedback-control systems, each responsible for detecting states of the environment based on sensory data and generating appropriate output. The key is for state-estimation and output calculations to be performed fast enough to keep up with the sampling rates of the system. There is an extensive literature on how to build such behaviors (control systems) when a mathematical description of the environment is available and is of the proper form; reactive architectures advance these traditional control methods by describing how complex behaviors might be built out of simpler ones (Brooks 1986), either by switching among a fixed set of qualitatively different behaviors based on sensed conditions (see Miller, Galanter, and Pribram 1960 for precursors), by the hierarchical arrangement of behaviors (Albus 1992), or by some more intricate principle of composition. Techniques have also been proposed (Kaelbling 1988) that use off-line symbolic reasoning to derive reactive behavior modules with guaranteed real-time on-line performance.
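The switching style of behavior composition can be illustrated with a short sketch: a fixed, priority-ordered list of self-contained condition/action behaviors, where the first behavior that fires on the current sensor sample selects the action. This is a toy in the spirit of layered reactive control, not a reproduction of Brooks's subsumption architecture; the behavior names and sensor format are assumptions for illustration.

```python
def avoid_obstacle(sensors):
    """Highest priority: veer away if something is close ahead."""
    return "turn_left" if sensors["front_distance"] < 0.3 else None

def seek_light(sensors):
    """Middle priority: steer toward the brighter side."""
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None   # not applicable; defer to lower layers

def wander(sensors):
    """Default behavior: always applicable."""
    return "forward"

# Ordered from highest to lowest priority; the first behavior that
# fires wins. Each step is a small, fixed computation, so response
# time is bounded by construction rather than by search.
BEHAVIORS = [avoid_obstacle, seek_light, wander]

def control_step(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(control_step({"front_distance": 1.2,
                    "light_left": 0.7,
                    "light_right": 0.4}))   # seek_light fires: "turn_left"
```

Because every control step is a constant amount of work, the loop can run at the environment's sampling rate, which is the real-time guarantee the reactive approach builds in by design.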

A third architectural paradigm, explored by researchers in distributed artificial intelligence, is motivated by the following observation. A local subsystem integrating sensory data or generating potential actions may have incomplete, uncertain, or erroneous information about what is happening in the environment or what should be done. But if there are many such local nodes, the information may in fact be present, in the aggregate, to assess a situation correctly or select an appropriate global action policy. The distributed approach attempts to exploit this observation by decomposing an intelligent agent into a network of cooperating, communicating subagents, each with the ability to process inputs, produce appropriate outputs, and store intermediate states. The intelligence of the system as a whole arises from the interactions of all the system's subagents. This approach gains plausibility from the success of groups of natural intelligent agents, for example, communities of humans, who decompose problems and then reassemble the solutions, and from the parallel, distributed nature of neural computation in biological organisms. Although it may be stretching the agent metaphor to view an individual neuron as an intelligent agent, the idea that a collection of units might solve one subproblem while other collections solve others has been an attractive and persistent theme in agent design.
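The aggregation intuition behind the distributed approach can be shown with a toy sketch: each subagent holds only a noisy local estimate of the environment, and a simple majority vote over many such estimates yields a reliable global assessment. The voting scheme and class names are illustrative assumptions; real distributed-AI systems use far richer communication and negotiation protocols.

```python
import random
from collections import Counter

class SubAgent:
    """A local node with incomplete, error-prone information."""
    def __init__(self, error_rate=0.2):
        self.error_rate = error_rate

    def estimate(self, true_state):
        """Report the true state, but err with some probability."""
        if random.random() < self.error_rate:
            return "clear" if true_state == "obstacle" else "obstacle"
        return true_state

def aggregate(reports):
    """Global assessment: the state most subagents report."""
    return Counter(reports).most_common(1)[0][0]

random.seed(0)
agents = [SubAgent() for _ in range(25)]
reports = [a.estimate("obstacle") for a in agents]
print(aggregate(reports))   # the majority vote recovers "obstacle"
```

No single node is reliable, but the information needed for a correct global judgment is present in the aggregate, which is precisely the observation that motivates the distributed decomposition.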

Intelligent-agent research is a dynamic activity and is much influenced by new trends in cognitive science and computing; developments can be anticipated across a broad front. Theoretical work continues on the formal semantics of MENTAL REPRESENTATION, models of behavior composition, and distributed problem solving. Practical advances can be expected in programming tools for building agents, as well as in applications (spurred largely by developments in computer and communications technology) involving intelligent agents in robotics and software.

-- Stanley J. Rosenschein

References

Albus, J. S. (1992). RCS: A reference model architecture for intelligent control. IEEE Comput. 25(5):56-59.

Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation RA-2(1):14-23.

Genesereth, M. R. (1983). An overview of metalevel architecture. Proceedings AAAI 83:119-123.

Georgeff, M., and A. Lansky. (1987). Reactive reasoning and planning. Proceedings AAAI 87.

Kaelbling, L. (1988). Goals as parallel program specifications. Proceedings AAAI 88.

Miller, G., E. Galanter, and K. H. Pribram. (1960). Plans and the Structure of Behavior. New York: Henry Holt and Company.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Russell, S., and E. Wefald. (1991). Do the Right Thing. Cambridge, MA: MIT Press.

Simon, H. A. (1969). The Sciences of the Artificial. Cambridge, MA: MIT Press.