Competitive Coevolution through Evolutionary Complexification
Two major goals in machine learning are the discovery and improvement of
solutions to complex problems. In this paper, we argue that complexification,
i.e. the incremental elaboration of solutions through adding new structure,
achieves both these goals. We demonstrate the power of complexification through
the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves
increasingly complex neural network architectures. NEAT is applied to an
open-ended coevolutionary robot duel domain where robot controllers compete
head to head. Because the robot duel domain supports a wide range of
strategies, and because coevolution benefits from an escalating arms race, it
serves as a suitable testbed for studying complexification. When compared to
the evolution of networks with fixed structure, complexifying evolution
discovers significantly more sophisticated strategies. The results suggest that
in order to discover and improve complex solutions, evolution, and search in
general, should be allowed to complexify as well as optimize.
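The core of complexification, growing a network by structural mutation while preserving its behavior, can be illustrated with a minimal sketch. The genome encoding and function names below are illustrative, not the reference NEAT implementation:

```python
import random

# Illustrative genome: each connection gene is (src, dst, weight, enabled, innovation).
# Toy sketch of a NEAT-style "add node" structural mutation.

def add_node_mutation(genome, next_node_id, next_innovation):
    """Split a random enabled connection src->dst into src->new and new->dst.

    NEAT preserves behaviour at the moment of mutation: the incoming
    connection gets weight 1.0 and the outgoing one keeps the old weight,
    while the original connection is disabled rather than deleted.
    """
    enabled = [g for g in genome if g[3]]
    src, dst, weight, _, innov = random.choice(enabled)
    genome = [g if g[4] != innov else (src, dst, weight, False, innov) for g in genome]
    genome.append((src, next_node_id, 1.0, True, next_innovation))
    genome.append((next_node_id, dst, weight, True, next_innovation + 1))
    return genome, next_node_id + 1, next_innovation + 2

genome = [(0, 1, 0.5, True, 0)]  # single input->output connection
genome, node_id, innov = add_node_mutation(genome, next_node_id=2, next_innovation=1)
print(len(genome))  # → 3: the disabled original plus two new connections
```

Repeated application of such mutations is what lets the search elaborate solutions incrementally instead of optimizing within a fixed architecture.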
Evolution and complexity: the double-edged sword
We attempt to provide a comprehensive answer to the question of whether, and when, an arrow of complexity emerges in Darwinian evolution. We note that this expression can be interpreted in different ways, including passive, incidental growth or a pervasive bias towards complexification. We argue at length that an arrow of complexity does indeed occur in evolution, which can be most reasonably interpreted as the result of a passive trend rather than a driven one. What, then, is the role of evolution in the creation of this trend, and under which conditions will it emerge? In the later sections of this article we point out that when certain proper conditions (which we attempt to formulate in a concise form) are met, Darwinian evolution predictably creates a sustained trend of increase in maximum complexity (that is, an arrow of complexity) that would not be possible without it; but if they are not, evolution will not only fail to produce an arrow of complexity, but may actually prevent any increase in complexity altogether. We conclude that, with regard to the growth of complexity, evolution is very much a double-edged sword.
Open-ended Search through Minimal Criterion Coevolution
Search processes guided by objectives are ubiquitous in machine learning. They iteratively reward artifacts based on their proximity to an optimization target, and terminate upon solution space convergence. Some recent studies take a different approach, capitalizing on the disconnect between mainstream methods in artificial intelligence and the field's biological inspirations. Natural evolution has an unparalleled propensity for generating well-adapted artifacts, but these artifacts are decidedly non-convergent. This new class of non-objective algorithms induces a divergent search by rewarding solutions according to their novelty with respect to prior discoveries. While the diversity of the resulting innovations exhibits marked parallels to natural evolution, the methods by which search is driven remain unnatural. In particular, nature has no need to characterize and enforce novelty; rather, it is guided by a single, simple constraint: survive long enough to reproduce. The key insight is that such a constraint, called the minimal criterion, can be harnessed in a coevolutionary context where two populations interact, finding novel ways to satisfy their reproductive constraint with respect to each other. The primary contribution of this dissertation (1) is this approach, called minimal criterion coevolution (MCC). MCC is initially demonstrated in a maze domain (2), where it evolves increasingly complex mazes and solutions. An enhancement to the initial domain (3) is then introduced, allowing mazes to expand unboundedly and validating MCC's propensity for open-ended discovery. A more natural method of diversity preservation through resource limitation (4) is introduced and shown to maintain population diversity without comparing genetic distance. Finally, MCC is demonstrated in an evolutionary robotics domain (5), where it coevolves increasingly complex bodies with brain controllers to achieve principled locomotion.
The overall benefit of these contributions is a novel, general algorithmic framework for the continual production of open-ended dynamics without the need for a characterization of behavioral novelty.
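The minimal-criterion idea, reproduce whenever a simple viability constraint is met, with no fitness ranking and no novelty measure, can be sketched as a toy loop. The scalar encoding below is illustrative, not the dissertation's maze representation:

```python
import random

random.seed(0)

# Toy MCC sketch: mazes are difficulty numbers, solvers are skill numbers.
# A solver meets the minimal criterion if it solves (skill >= difficulty of)
# at least one maze; a maze does if at least one solver can solve it.

def step(mazes, solvers, cap=20):
    # An individual reproduces iff it meets its minimal criterion.
    viable_solvers = [s for s in solvers if any(s >= m for m in mazes)]
    viable_mazes = [m for m in mazes if any(s >= m for s in solvers)]
    # Resource limit: viable parents survive; offspring fill remaining slots.
    kids_s = [s + random.choice([-1, 0, 1, 2]) for s in viable_solvers]
    kids_m = [m + random.choice([0, 1]) for m in viable_mazes]
    solvers = viable_solvers + kids_s[:max(0, cap - len(viable_solvers))]
    mazes = viable_mazes + kids_m[:max(0, cap - len(viable_mazes))]
    return mazes, solvers

mazes, solvers = [1, 2], [1, 3]
for _ in range(10):
    mazes, solvers = step(mazes, solvers)
print(max(mazes), max(solvers))  # the maxima of both populations tend to ratchet up
```

The cap stands in for the resource-limitation mechanism: it bounds both populations without ever comparing genetic distance or ranking individuals by an objective.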
The Self-Organization of Interaction Networks for Nature-Inspired Optimization
Over the last decade, significant progress has been made in understanding complex biological systems; however, there have been few attempts at incorporating this knowledge into nature-inspired optimization algorithms. In this paper, we present a first attempt at incorporating some of the basic structural properties of complex biological systems which are believed to be necessary preconditions for system qualities such as robustness. In particular, we focus on two important conditions missing in Evolutionary Algorithm populations: a self-organized definition of locality and interaction epistasis. We demonstrate that these two features, when combined, provide algorithm behaviors not observed in the canonical Evolutionary Algorithm or in Evolutionary Algorithms with structured populations such as the Cellular Genetic Algorithm. The most noticeable change in algorithm behavior is an unprecedented capacity for sustainable coexistence of genetically distinct individuals within a single population. This capacity for sustained genetic diversity is not imposed on the population but instead emerges as a natural consequence of the dynamics of the system.
Balancing Selection Pressures, Multiple Objectives, and Neural Modularity to Coevolve Cooperative Agent Behavior
Previous research using evolutionary computation in Multi-Agent Systems
indicates that assigning fitness based on team vs. individual behavior has a
strong impact on the ability of evolved teams of artificial agents to exhibit
teamwork in challenging tasks. However, such research only made use of
single-objective evolution. In contrast, when a multiobjective evolutionary
algorithm is used, populations can be subject to individual-level objectives,
team-level objectives, or combinations of the two. This paper explores the
performance of cooperatively coevolved teams of agents controlled by artificial
neural networks subject to these types of objectives. Specifically, predator
agents are evolved to capture scripted prey agents in a torus-shaped grid
world. Because of the tension between individual and team behaviors, multiple
modes of behavior can be useful, and thus the effect of modular neural networks
is also explored. Results demonstrate that fitness rewarding individual
behavior is superior to fitness rewarding team behavior, despite being applied
to a cooperative task. However, the use of networks with multiple modules
allows predators to discover intelligent behavior regardless of which type of
objective is used.
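The distinction between individual-level and team-level objectives can be made concrete with a toy Pareto comparison; the scoring scheme below is illustrative, not the paper's experimental setup:

```python
# Toy illustration: each predator is scored on an individual objective
# (its own captures) and a team objective (its team's total captures).
# A multiobjective EA would retain the Pareto non-dominated set over these.

def dominates(a, b):
    """a Pareto-dominates b: no worse in every objective, better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    return [s for s in scores if not any(dominates(t, s) for t in scores if t != s)]

# (individual_captures, team_captures) for four hypothetical predators
scores = [(3, 5), (1, 8), (2, 4), (0, 2)]
print(pareto_front(scores))  # → [(3, 5), (1, 8)]
```

Keeping the whole front is what lets a multiobjective algorithm subject the population to individual-level objectives, team-level objectives, or both at once, rather than collapsing them into one fitness value.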
Beyond Alignment: A Coevolutionary View of the Information Systems Strategy Process
How do organizations achieve and sustain the process of continuous adaptation and change that is necessary to realize strategic information systems alignment? While research has focused on developing deterministic alignment models and on identifying the factors that contribute to alignment, there is little understanding of the process as it evolves over time. In this paper, we propose that coevolution theory offers the opportunity to explore coevolving interactions, interrelationships, and effects as both IS and business strategies evolve. An initial model of this coevolution is presented that applies the key attributes and concepts of coevolution theory to strategic IS alignment. Future directions for advancing our work are highlighted.
Evolving controllers for simulated car racing
This paper describes the evolution of controllers for racing a simulated radio-controlled car around a track, modelled on a real physical track. Five different controller architectures were compared, based on neural networks, force fields and action sequences. The controllers use either egocentric (first person), Newtonian (third person) or no information about the state of the car (open-loop controller). The only controller that is able to evolve good racing behaviour is based on a neural network acting on egocentric inputs.
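The egocentric-controller idea, a network mapping first-person sensor readings directly to steering and throttle, can be sketched minimally. The architecture, sensor layout, and weight values here are illustrative assumptions, not the paper's evolved controllers:

```python
import math

# Minimal sketch of an egocentric neural controller: a single-layer network
# maps first-person sensor readings (wall distances, say) to steering and
# throttle commands. The weights and biases would be the evolved genome.

def controller(sensors, weights, biases):
    """sensors: n readings; weights: 2 rows of n; returns (steer, throttle)."""
    outs = []
    for row, b in zip(weights, biases):
        activation = sum(w * s for w, s in zip(row, sensors)) + b
        outs.append(math.tanh(activation))  # squash each command to [-1, 1]
    return tuple(outs)

# Hypothetical genome for 3 sensors (left, front, right wall distances)
weights = [[-1.0, 0.0, 1.0],   # steer toward the side with more open space
           [0.0, 1.0, 0.0]]    # throttle scales with free distance ahead
steer, throttle = controller([0.2, 0.9, 0.8], weights, biases=[0.0, 0.0])
print(round(steer, 3), round(throttle, 3))
```

Because the inputs are relative to the car rather than to a fixed world frame, the same evolved weights generalize across positions on the track, which is one plausible reason the egocentric variant evolves good behaviour where the others do not.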
Coevolution of Generative Adversarial Networks
Generative adversarial networks (GANs) have become a hot topic, presenting impressive results in the field of computer vision. However, there are still open problems with the GAN model, such as training stability and the hand-design of architectures. Neuroevolution is a technique that can be used to provide the automatic design of network architectures, even in search spaces as large as those of deep neural networks. Therefore, this project proposes COEGAN, a model that combines neuroevolution and coevolution in the coordination of the GAN training algorithm. The proposal uses the adversarial relationship between the generator and discriminator components to design an algorithm based on coevolution techniques. Our proposal was evaluated on the MNIST dataset. The results suggest improved training stability and the automatic discovery of efficient network architectures for GANs. Our model also partially solves the mode collapse problem. Comment: Published in EvoApplications 201
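The population-level coordination behind such competitive GAN coevolution can be reduced to a toy loop. Here generators and discriminators are collapsed to scalar "quality" genomes and the adversarial evaluation to an all-vs-all payoff; real COEGAN evolves network architectures and trains each pairing with gradient descent, so this only sketches the selection dynamics:

```python
import random

random.seed(1)

# Toy coevolution sketch: each generator is scored by how well it fares
# against every discriminator, and vice versa (a zero-sum payoff).

def select(pop, fit, keep=2):
    ranked = sorted(zip(pop, fit), key=lambda p: p[1], reverse=True)
    return [p for p, _ in ranked[:keep]]

def mutate(parents, size=4):
    # Parents survive; Gaussian-perturbed offspring refill the population.
    return parents + [p + random.gauss(0, 0.1) for p in parents][:size - len(parents)]

def evolve(gens, discs, generations=5):
    for _ in range(generations):
        g_fit = [sum(g - d for d in discs) for g in gens]  # fool the discriminators
        d_fit = [sum(d - g for g in gens) for d in discs]  # catch the generators
        gens = mutate(select(gens, g_fit))
        discs = mutate(select(discs, d_fit))
    return gens, discs

gens, discs = evolve([0.0, 0.1, -0.2, 0.3], [0.0, 0.2, -0.1, 0.1])
print(len(gens), len(discs))  # → 4 4
```

The point of the pairing is the arms race: each population's selection pressure comes entirely from the other, which is the coevolutionary mechanism COEGAN layers on top of neuroevolved architectures.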