Multiagent systems are distributed computer systems in which the designers ascribe to component modules autonomy, mental state, and other characteristics of agency. Software developers have applied multiagent systems to solve problems in power management, transportation scheduling, and a variety of other tasks. With the growth of the Internet and networked information systems generally, separately designed and constructed programs increasingly need to interact substantively; such complexes also constitute multiagent systems.
In the study of multiagent systems, including the field of "distributed AI" (Bond and Gasser 1988) and much of the current activity in "software agents" (Huhns and Singh 1997), researchers aim to relate the aggregate behavior of the composite system to the individual behaviors of the component agents and to properties of the interaction protocol and environment. Frameworks for constructing and analyzing multiagent systems often draw on metaphors -- as well as models and theories -- from the social and ecological sciences (Huberman 1988). Such social conceptions are sometimes applied within an agent to describe its behaviors in terms of interacting subagents, as in Minsky's society of mind theory (Minsky 1986).
Design of a distributed system typically focuses on the interaction mechanism -- specification of agent communication languages and interaction protocols. The interaction mechanism generally includes means to implement decisions or agreements reached as a function of the agents' interactions. Depending on the context, developers of a distributed system may also control the configuration of participating agents, the INTELLIGENT AGENT ARCHITECTURE, or even the implementation of agents themselves. In any case, principled design of the interaction mechanism requires some model of how agents behave within the mechanism, and design of agents requires a model of the mechanism's rules and (sometimes) models of the other agents.
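As a rough illustration of what an interaction mechanism specifies, the following sketch (in Python, with hypothetical message fields and performatives; it is not a standard agent communication language) shows a message format together with a mechanism that routes messages and records agreements reached through the agents' interactions:

    # Sketch of an interaction mechanism: a message format plus a mechanism
    # that routes messages and records agreements. All names here (Message,
    # Mechanism, "propose"/"accept") are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        receiver: str
        performative: str   # e.g., "propose", "accept", "reject"
        content: object

    class Mechanism:
        def __init__(self):
            self.inboxes = {}       # agent name -> pending messages
            self.agreements = []    # (proposer, accepter, content) triples

        def register(self, name):
            self.inboxes[name] = []

        def send(self, msg):
            # Routing: deliver the message to the receiver's inbox.
            self.inboxes[msg.receiver].append(msg)
            # Implementing agreements: an "accept" reply is recorded as binding.
            if msg.performative == "accept":
                self.agreements.append((msg.receiver, msg.sender, msg.content))

    mech = Mechanism()
    for name in ("a1", "a2"):
        mech.register(name)
    mech.send(Message("a1", "a2", "propose", "a2 performs task t"))
    mech.send(Message("a2", "a1", "accept", "a2 performs task t"))
    print(mech.agreements)   # [('a1', 'a2', 'a2 performs task t')]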
One fundamental characteristic that bears on design of interaction mechanisms is whether the agents are presumed to be cooperative, which in the technical sense used here means that they have the same objectives (they may have heterogeneous capabilities, and may also differ on beliefs and other agent attitudes). In a cooperative setting, the role of the mechanism is to coordinate local decisions and disseminate local information in order to promote these global objectives. At one extreme, the mechanism could attempt to centralize the system by directing each agent to transmit its local state to a central source, which then treats its problem as a single-agent decision. This approach may be infeasible or expensive, due to the difficulty of aggregating belief states, increased complexity of scale, and the costs and delays of communication. Solving the problem in a decentralized manner, in contrast, forces the designer to deal directly with issues of reconciling inconsistent beliefs and accommodating local decisions made on the basis of partial, conflicting information (Durfee, Lesser, and Corkill 1992).
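A toy contrast between the two approaches (the tasks, agents, and cost figures below are assumptions for illustration only): in the centralized version, agents transmit their local states to a central source that solves a single assignment problem; in the decentralized version, each agent decides from its own partial view, and the resulting claims may conflict and require reconciliation through further interaction.

    # Toy contrast between centralizing the decision and deciding locally.
    # Assumed scenario: two tasks, three agents, each agent knows only its
    # own cost for each task.
    costs = {                      # agent -> {task: cost}, local knowledge
        "a1": {"t1": 3, "t2": 9},
        "a2": {"t1": 4, "t2": 2},
        "a3": {"t1": 8, "t2": 5},
    }
    tasks = ["t1", "t2"]

    # Centralized: every agent transmits its local state to a central source,
    # which then solves a single-agent assignment problem.
    central = {t: min(costs, key=lambda a: costs[a][t]) for t in tasks}

    # Decentralized: each agent volunteers for any task whose cost falls below
    # its own threshold; with only partial information, claims may conflict or
    # leave tasks uncovered, and must be reconciled by further interaction.
    claims = {t: [a for a in costs if costs[a][t] < 5] for t in tasks}

    print(central)   # {'t1': 'a1', 't2': 'a2'}
    print(claims)    # {'t1': ['a1', 'a2'], 't2': ['a2']}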
Even among cooperative agents, negotiation is often necessary to reach joint decisions. Through a negotiation process, for example, agents can convey the information about their local knowledge and capabilities that is necessary to determine a principled allocation of resources or tasks among them. In the contract net protocol and its variants, agents submit "bids" describing their abilities to perform particular tasks, and a designated contract manager assigns tasks to agents based on these bids. When tasks are not easily decomposable, protocols for managing shared information in global memory are required. Systems based on a blackboard architecture use this global memory both to direct coordinated actions of the agents and to share intermediate results relevant to multiple tasks.
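A minimal sketch of the announce-bid-award cycle underlying contract-net-style protocols (the agent names and the bidding rule -- bids as estimated completion times, lowest bid wins -- are illustrative assumptions, not part of the original protocol specification):

    # Sketch of a contract-net-style announce/bid/award cycle.
    class Contractor:
        def __init__(self, name, speed):
            self.name, self.speed = name, speed

        def bid(self, task):
            # The bid describes this agent's ability: estimated time to finish.
            return task["size"] / self.speed

    def contract_net(task, contractors):
        # 1. The manager announces the task; 2. contractors respond with bids;
        # 3. the manager awards the task to the most favorable bidder.
        bids = {c.name: c.bid(task) for c in contractors}
        winner = min(bids, key=bids.get)
        return winner, bids

    contractors = [Contractor("a1", speed=2.0), Contractor("a2", speed=5.0)]
    winner, bids = contract_net({"id": "t1", "size": 10.0}, contractors)
    print(bids)     # {'a1': 5.0, 'a2': 2.0}
    print(winner)   # a2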
In a noncooperative setting, objectives as well as beliefs and capabilities vary across agents. Noncooperative systems are the norm when agents represent the interests of disparate humans or human organizations. Note that having distinct objectives does not necessarily mean that the agents are adversarial or even averse to cooperation. It merely means that agents cooperate exactly when they determine that it is in their individual interests to do so.
The standard assumption for noncooperative multiagent systems is that agents behave according to principles of RATIONAL DECISION MAKING. That is, each agent acts to further its individual objectives (typically characterized in terms of UTILITY THEORY), subject to its beliefs and capabilities. In this case, the problem of designing an interaction mechanism corresponds to the standard economic concept of mechanism design, and the mathematical tools of GAME THEORY apply. Much current work in multiagent systems is devoted to game-theoretic analyses of interaction mechanisms, and especially negotiation protocols applied within such mechanisms (Rosenschein and Zlotkin 1994). Economic concepts expressly drive the design of multiagent interaction mechanisms based on market price systems (Clearwater 1996).
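A standard textbook example from mechanism design, not specific to any system cited above, is the second-price sealed-bid auction: under this rule a rational bidder's dominant strategy is to bid its true value, and this predictability of individual behavior is what makes game-theoretic analysis of the mechanism tractable.

    # Second-price sealed-bid auction: the highest bidder wins but pays the
    # second-highest bid. Bidder names and values are made-up numbers.
    def second_price_auction(bids):
        """bids: dict mapping bidder name to sealed bid amount."""
        ranked = sorted(bids, key=bids.get, reverse=True)
        winner = ranked[0]
        price = bids[ranked[1]] if len(ranked) > 1 else 0.0
        return winner, price

    # Private values known only to each agent; truthful bids shown here,
    # since truthful bidding is a dominant strategy under this rule.
    values = {"a1": 7.0, "a2": 5.0, "a3": 9.0}
    winner, price = second_price_auction(values)
    print(winner, price)              # a3 7.0
    print(values[winner] - price)     # winner's utility: 2.0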
Both cooperative and noncooperative agents may derive some benefit by reasoning expressly about the other agents. Cooperative agents may be able to propose more effective joint plans if they know the capabilities and intentions of the other agents. Noncooperative agents can improve their bargaining positions through awareness of the options and preferences of others (agents that exploit such bargaining power are called "strategic"; those that neglect to do so are "competitive"). Because direct knowledge of other agents may be difficult to come by, agents typically induce their models of others from observations (e.g., "plan recognition"), within an interaction or across repeated interactions.
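A minimal sketch of inducing a model of another agent from observations, in the spirit of plan recognition (the candidate goals, priors, and likelihoods below are made-up numbers): beliefs about the other agent's goal are updated from each observed action.

    # Bayesian update over candidate goals for another agent, given one
    # observed action. All goals, priors, and likelihoods are illustrative.
    def update_goal_beliefs(prior, likelihood, observed_action):
        # posterior(goal) is proportional to prior(goal) * P(action | goal)
        unnorm = {g: prior[g] * likelihood[g].get(observed_action, 0.0)
                  for g in prior}
        total = sum(unnorm.values())
        return {g: p / total for g, p in unnorm.items()}

    prior = {"deliver": 0.5, "patrol": 0.5}
    likelihood = {                     # P(action | goal)
        "deliver": {"go_to_dock": 0.8, "circle_area": 0.2},
        "patrol":  {"go_to_dock": 0.1, "circle_area": 0.9},
    }
    posterior = update_goal_beliefs(prior, likelihood, "go_to_dock")
    print(posterior)   # {'deliver': 0.888..., 'patrol': 0.111...}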
Bond, A. H., and L. Gasser, Eds. (1988). Readings in Distributed Artificial Intelligence. San Francisco: Kaufmann.
Clearwater, S. H., Ed. (1996). Market-Based Control: A Paradigm for Distributed Resource Allocation. Singapore: World Scientific.
Durfee, E. H., V. R. Lesser, and D. D. Corkill. (1992). Distributed problem solving. In Encyclopedia of Artificial Intelligence. 2nd ed. New York: Wiley.
Huberman, B. A., Ed. (1988). The Ecology of Computation. Amsterdam: Elsevier.
Huhns, M., and M. Singh, Eds. (1997). Readings in Agents. San Francisco: Kaufmann.
Minsky, M. (1986). The Society of Mind. New York: Simon and Schuster.
Rosenschein, J. S., and G. Zlotkin. (1994). Rules of Encounter: Designing Conventions for Automated Negotiation among Computers. Cambridge, MA: MIT Press.