Evolutionary Computation

Evolutionary computation is a collection of computational search, learning, optimization, and modeling methods loosely inspired by biological EVOLUTION. The methods most often used are called genetic algorithms (GAs), evolution strategies (ESs), and evolutionary programming (EP). These three methods were developed independently in the 1960s: GAs by Holland (1975), ESs by Rechenberg (1973) and Schwefel (1977), and EP by Fogel, Owens, and Walsh (1966). (Genetic programming, a variant of genetic algorithms, was developed in the 1980s by Koza (1992, 1994).) Such methods are part of a general movement toward using biological ideas in computer science that started with pioneers such as VON NEUMANN, TURING, and WIENER, and continues today with evolutionary computation, NEURAL NETWORKS, and methods inspired by the immune system, insect colonies, and other biological systems.

Imitating the mechanisms of evolution has appealed to computer scientists from nearly the beginning of the computer age. Very roughly speaking, evolution can be viewed as searching in parallel among an enormous number of possibilities for "solutions" to the problem of survival in an environment, where the solutions are particular designs for organisms. Viewed from a high level, the "rules" of evolution are remarkably simple: species evolve by means of heritable variation (via mutation, recombination, and other operators), followed by natural selection in which the fittest tend to survive and reproduce, thus propagating their genetic material to future generations. Yet these simple rules are thought to be responsible, in large part, for the extraordinary variety and complexity we see in the biosphere. Seen in this light, the mechanisms of evolution can inspire computational search methods for finding solutions to hard problems in large search spaces or for automatically designing complex systems.

In most evolutionary computation applications, the user has a particular problem to be solved and a way to encode candidate solutions so that the solution space can be searched. For example, in the field of computational protein design, the problem is to design a one-dimensional sequence of amino acids that will fold up into a three-dimensional protein with desired characteristics. Assuming that the sequence is of length l, candidate solutions can be expressed as strings of l amino-acid codes. There are twenty different amino acids, so the number of possible strings is 20^l. The user also provides a "fitness function" or "objective function" that assigns a value to each candidate solution measuring its quality.
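
To make this encoding concrete, here is a minimal Python sketch. The target sequence and the match-counting objective are invented stand-ins for illustration; a real protein-design fitness function would score the folded three-dimensional structure, not string similarity.

```python
import random

# The 20 standard amino acids, as one-letter codes.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def random_candidate(length):
    """A candidate solution: a random string of `length` amino-acid codes."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def fitness(candidate, target="MKVLHA"):
    """A toy stand-in for a real objective: count positions matching a
    hypothetical target sequence. (A real protein-design fitness
    function would evaluate the folded structure.)"""
    return sum(1 for a, b in zip(candidate, target) if a == b)

seq = random_candidate(6)
print(seq, fitness(seq))
```

Note that even for this short sequence the search space has 20^6 = 64,000,000 strings, which is why blind enumeration quickly becomes infeasible as l grows.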

Evolutionary computation (EC) methods all begin with a population of randomly generated candidate solutions ("individuals"), and perform fitness-based selection and random variation to create a new population. Typically, some number of the highest-fitness individuals are chosen under selection to create offspring for the next generation. Often, an offspring will be produced via a crossover between two or more parents, in which the offspring receives "genetic material" -- different parts of candidate solutions -- from different parents. Typically the offspring is also mutated randomly -- parts of the candidate solution are changed at random. (Mutation and crossover in evolutionary computation are meant to mimic roughly biological mutation and sexual recombination, two main sources of genetic variation.) Offspring are created in this way until a new generation is complete. This process typically iterates for many generations, often ending up with one or more optimal or high-quality individuals in the population.
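
The generational loop just described can be sketched in a few lines of Python. The particular choices below (truncation selection of the fittest individuals, uniform crossover, per-bit mutation, and the "OneMax" toy problem of maximizing the number of 1s in a bit string) are one simple instantiation among many, not a canonical algorithm.

```python
import random

def evolve(fitness, random_individual, mutate, crossover,
           pop_size=50, generations=100, n_parents=10):
    """A minimal generational evolutionary loop: start from a random
    population, then repeatedly select the fittest n_parents and
    refill the population with mutated crossover offspring."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-based selection: keep the highest-fitness individuals.
        parents = sorted(population, key=fitness, reverse=True)[:n_parents]
        # Random variation: crossover of two parents, then mutation.
        population = [mutate(crossover(*random.sample(parents, 2)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

# Toy instance: maximize the number of 1s in a 20-bit string ("OneMax").
bits = 20
best = evolve(
    fitness=sum,
    random_individual=lambda: [random.randint(0, 1) for _ in range(bits)],
    mutate=lambda x: [b ^ (random.random() < 0.05) for b in x],  # flip each bit with prob. 0.05
    crossover=lambda a, b: [random.choice(p) for p in zip(a, b)],  # uniform crossover
)
print(sum(best))
```

After a hundred generations on this easy problem, the best individual is typically at or near the optimum of all 1s.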

GA, EP, and ES methods differ in the details of this process. In general, ESs and EP each define fairly specific versions of this process, whereas the term "genetic algorithm," originally referring to a specific algorithm, has come to refer to many considerably different variations of the basic scheme.

ESs were originally formulated to work on real-valued parameter optimization problems, such as airplane wing-shape optimization. They are still most commonly applied to numerical optimization problems. In the original formulation of EP, candidate solutions to given tasks were represented as finite-state machines, which were evolved by randomly mutating their state-transition diagrams and selecting the fittest. Since that time a somewhat broader formulation has emerged. In contrast with ESs and EP, GAs were originally formulated not to solve specific problems, but rather as a means to study formally the phenomenon of adaptation as it occurs in nature and to develop ways in which the mechanisms of natural adaptation might be imported into computer systems. Only after Holland's original theoretical work were GAs adapted to solving optimization problems. Since the early 1990s there has been much cross-fertilization among the three areas, and the original distinctions among GAs, ESs, and EP have blurred considerably in the current use of these labels.
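
A minimal numerical example in the original ES spirit is the classic (1+1) evolution strategy with step-size control in the style of Rechenberg's 1/5 success rule. The sketch below minimizes the sphere function, a standard numerical test problem; the adaptation constants are illustrative choices, not prescribed values.

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=2000):
    """A sketch of a (1+1) evolution strategy for minimizing f: one
    parent, one Gaussian-mutated offspring, keep whichever is better.
    The step size sigma grows after successes and shrinks after
    failures, roughly in the spirit of the 1/5 success rule."""
    fx = f(x)
    for _ in range(iters):
        child = [xi + random.gauss(0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:              # offspring replaces parent if no worse
            x, fx = child, fc
            sigma *= 1.22         # success: widen the search
        else:
            sigma *= 0.95         # failure: narrow the search
    return x, fx

# Minimize the sphere function, a standard real-valued test problem.
sphere = lambda v: sum(vi * vi for vi in v)
x, fx = one_plus_one_es(sphere, [5.0, -3.0, 2.0])
print(fx)
```

The step-size adaptation is what makes this more than random search: far from the optimum large mutations are favored, while near the optimum sigma shrinks so the search can converge.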

Setting the parameters for the evolutionary process (population size, selection strength, mutation rate, crossover rate, and so on) is often a matter of guesswork and trial and error, though some theoretical and heuristic guidelines have been discovered. An alternative is to have the parameters "self-adapt" -- to change their values automatically over the course of evolution in response to selective pressures. Self-adapting parameters are an intrinsic part of ESs and EP, and are the subject of much research in GAs.
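
Self-adaptation in the ES style can be sketched as follows: each individual carries its own mutation step size, which is itself perturbed (log-normally) before being used, so that selection on fitness indirectly tunes the step size along with the solution. The learning rate tau = 1/sqrt(n) below is a common heuristic, used here for illustration rather than as a fixed prescription.

```python
import math
import random

def self_adaptive_step(x, sigma, tau=None):
    """One ES-style self-adaptive mutation: first mutate the
    individual's own step size sigma log-normally, then use the new
    sigma to mutate the solution vector x. Individuals whose sigma
    suits the current search landscape tend to produce fitter
    offspring, so good step sizes propagate under selection."""
    n = len(x)
    tau = tau or 1.0 / math.sqrt(n)           # heuristic learning rate
    new_sigma = sigma * math.exp(tau * random.gauss(0, 1))
    child = [xi + random.gauss(0, new_sigma) for xi in x]
    return child, new_sigma

child, s = self_adaptive_step([0.0] * 5, 1.0)
print(s)
```

The key design point is that sigma is mutated before it is used, so the offspring's fitness reflects the quality of its own step size, not its parent's.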

EC methods have been applied widely. Examples of applications include numerical parameter optimization and combinatorial optimization, the automatic design of computer programs, bioengineering, financial prediction, robot learning, evolving production systems for artificial intelligence applications, and designing and training neural networks. In addition to these "problem-solving" applications, EC methods have been used in models of natural systems in which evolutionary processes take place, including economic systems, immune systems, ecologies, biological evolution, evolving systems with adaptive individuals, insect societies, and more complex social systems. (See Mitchell 1996 for an overview of applications in some of these areas.)

Much current research in the EC field is on making the basic EC framework more biologically realistic, both for modeling purposes and in the hope that more realism will improve the search performance of these methods. One approach is incorporating more complex genetic information in individuals in the population, such as sexual differentiation, diploidy, and introns. Another is incorporating additional genetic operators, such as inversion, translocation, and gene doubling and deletion. A third is embedding more complex ecological interactions into the population, such as host-parasite coevolution, symbiosis, sexual selection, and spatial migration. Finally, there has been considerable success in combining EC methods with other types of search methods, such as simple gradient ascent and simulated annealing. Such hybrid algorithms are thought by many to be the best approach to optimization in complex and ill-understood problem spaces (Davis 1991).
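
A hybrid of the kind mentioned above, often called a memetic algorithm, interleaves evolutionary variation with a local-search phase. In the sketch below, simple stochastic hill climbing stands in for gradient ascent or simulated annealing; the function names, step sizes, and population handling are all illustrative assumptions.

```python
import random

def local_refine(x, f, step=0.1, tries=20):
    """Local-search phase: stochastic hill climbing that accepts only
    improvements, so the refined individual is never worse than x."""
    fx = f(x)
    for _ in range(tries):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc > fx:
            x, fx = cand, fc
    return x

def hybrid_step(population, f):
    """One generation of a hybrid scheme: evolutionary selection and
    Gaussian mutation provide global exploration, then each offspring
    is polished by local search before entering the next generation."""
    parents = sorted(population, key=f, reverse=True)[: len(population) // 2]
    offspring = [[xi + random.gauss(0, 0.5) for xi in random.choice(parents)]
                 for _ in range(len(population))]
    return [local_refine(child, f) for child in offspring]
```

The division of labor is the point: the evolutionary operators move between basins of attraction that local search alone would never leave, while local search supplies the fine-grained convergence that evolutionary operators alone achieve slowly.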

EC is relevant for the cognitive sciences both because of its applications in the fields of artificial intelligence and MACHINE LEARNING and because of its use in models of the interaction of evolution and cognitive processes. For example, researchers in EVOLUTIONARY PSYCHOLOGY and other areas have used EC methods in their models of interactions between evolution and LEARNING (e.g., Ackley and Littman 1992; Belew and Mitchell 1996; Miller and Todd 1990; Parisi, Nolfi and Elman 1994; Todd 1996). Likewise, EC methods have been used in models of the relationship between evolution and development (e.g., Belew 1993; Dellaert and Beer 1994). Social scientists and linguists have used EC methods to study, for example, the evolution of cooperation and communication in multiagent systems (e.g., Ackley and Littman 1994; Axelrod 1987; Batali 1994; Batali and Kitcher 1995; Stanley, Ashlock, and Tesfatsion 1994). Many of these models are considered to fall in the purview of the field of ARTIFICIAL LIFE. These examples by no means exhaust the uses of EC in cognitive science, and the literature is growing as interest increases in the role of evolution in shaping cognition and behavior.

-- Melanie Mitchell

References

Ackley, D., and M. Littman. (1992). Interactions between learning and evolution. In C. G. Langton, C. Taylor, J. D. Farmer, and S. Rasmussen, Eds., Artificial Life II. Reading, MA: Addison-Wesley, pp. 487-509.

Ackley, D., and M. Littman. (1994). Altruism in the evolution of communication. In R. A. Brooks and P. Maes, Eds., Artificial Life IV. Cambridge, MA: MIT Press, pp. 40-48.

Axelrod, R. (1987). The evolution of strategies in the iterated Prisoner's Dilemma. In L. D. Davis, Ed., Genetic Algorithms and Simulated Annealing, New York: Morgan Kaufmann, pp. 32-41.

Bäck, T. (1996). Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. New York: Oxford University Press.

Batali, J. (1994). Innate biases and critical periods: combining evolution and learning in the acquisition of syntax. In R. A. Brooks and P. Maes, Eds., Artificial Life IV. Cambridge, MA: MIT Press, pp. 160-177.

Batali, J., and P. Kitcher. (1995). Evolution of altruism in optional and compulsory games. Journal of Theoretical Biology 175(2): 161.

Belew, R. K. (1993). Interposing an ontogenic model between genetic algorithms and neural networks. In S. J. Hanson, J. D. Cowan, and C. L. Giles, Eds., Advances in Neural Information Processing Systems 5. New York: Morgan Kaufmann.

Belew, R. K., and M. Mitchell, Eds. (1996). Adaptive Individuals in Evolving Populations: Models and Algorithms. Reading, MA: Addison-Wesley.

Davis, L. D., Ed. (1991). Handbook of Genetic Algorithms. New York: Van Nostrand Reinhold.

Dellaert, F., and R. D. Beer. (1994). Toward an evolvable model of development for autonomous agent synthesis. In R. A. Brooks and P. Maes, Eds., Artificial Life IV. Cambridge, MA: MIT Press, pp. 246-257.

Fogel, D. B. (1995). Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. Los Angeles: IEEE Press.

Fogel, L. J., A. J. Owens, and M. J. Walsh. (1966). Artificial Intelligence through Simulated Evolution. New York: Wiley.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley.

Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press. 2nd ed: MIT Press, 1992.

Koza, J. R. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge, MA: MIT Press.

Koza, J. R. (1994). Genetic Programming II: Automatic Discovery of Reusable Programs. Cambridge, MA: MIT Press.

Michalewicz, Z. (1992). Genetic Algorithms + Data Structures = Evolution Programs. New York: Springer.

Miller, G. F., and P. M. Todd. (1990). Exploring adaptive agency I: Theory and methods for simulating the evolution of learning. In D. S. Touretzky, J. L. Elman, T. J. Sejnowski, and G. E. Hinton, Eds., Proceedings of the 1990 Connectionist Models Summer School. New York: Morgan Kaufmann, pp. 65-80.

Mitchell, M. (1996). An Introduction to Genetic Algorithms. Cambridge, MA: MIT Press.

Mitchell, M., and S. Forrest. (1994). Genetic algorithms and artificial life. Artificial Life 1(3):267-289.

Parisi, D., S. Nolfi, and J. L. Elman. (1994). Learning and evolution in neural networks. Adaptive Behavior 3(1):5-28.

Rechenberg, I. (1973). Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution. Stuttgart: Frommann-Holzboog.

Schwefel, H.-P. (1977). Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie. Basel: Birkhäuser.

Stanley, E. A., D. Ashlock, and L. Tesfatsion. (1994). Iterated Prisoner's Dilemma with choice and refusal of partners. In C. G. Langton, Ed., Artificial Life III. Reading, MA: Addison-Wesley, pp. 131-175.

Todd, P. M. (1996). Sexual selection and the evolution of learning. In R. K. Belew and M. Mitchell, Eds., Adaptive Individuals in Evolving Populations: Models and Algorithms. Reading, MA: Addison-Wesley, pp. 365-393.

von Neumann, J. (1966). Theory of Self-Reproducing Automata. Ed. A. W. Burks. Urbana: University of Illinois Press.

Further Readings

Holland, J. H., K. J. Holyoak, R. Nisbett, and P. R. Thagard. (1986). Induction: Processes of Inference, Learning and Discovery. Cambridge, MA: MIT Press.

Langton, C. G., Ed. (1995). Artificial Life: An Overview. Cambridge, MA: MIT Press.

Schwefel, H.-P. (1995). Evolution and Optimum Seeking. New York: Wiley.