14,502 research outputs found

    Distance modulation competitive co-evolution method to find initial configuration independent cellular automata rules

    Get PDF
    IEEE International Conference on Systems, Man, and Cybernetics. Tokyo, 12-15 October 1999. One of the main problems in machine learning methods based on examples is over-adaptation: the solution adapts exactly to the training examples and loses the capability to generalize. One way to address this problem is to use large sets of examples, but in most problems an almost infinite example set would be needed to achieve generalized solutions, which makes the approach useless in practice. In this paper, a way to overcome this problem is proposed, based on ideas from biological competitive evolution. Evolution is produced by a competition between sets of solutions and sets of examples, each trying to beat the other. This mechanism allows the generation of generalized solutions using short example sets.
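
The competition this abstract describes can be summarised as two populations scoring each other: solutions earn fitness by handling examples, and examples earn fitness by defeating solutions. The sketch below is a generic illustration of that idea under assumed conventions (evaluate returns a score in [0, 1]; evaluate and mutate are user-supplied callables), not the paper's distance-modulation algorithm.

```python
import random

def coevolve(solutions, examples, evaluate, mutate, generations=100):
    """Generic solution-vs-example competitive coevolution (illustrative only)."""
    for _ in range(generations):
        # Score every solution against the current example set.
        sol_fit = {id(s): sum(evaluate(s, e) for e in examples) for s in solutions}
        # An example is fit when it is hard, i.e. few solutions handle it.
        ex_fit = {id(e): sum(1 - evaluate(s, e) for s in solutions) for e in examples}
        # Keep the better half of each population and refill by mutation.
        solutions.sort(key=lambda s: sol_fit[id(s)], reverse=True)
        examples.sort(key=lambda e: ex_fit[id(e)], reverse=True)
        solutions = solutions[: len(solutions) // 2]
        examples = examples[: len(examples) // 2]
        solutions += [mutate(random.choice(solutions)) for _ in range(len(solutions))]
        examples += [mutate(random.choice(examples)) for _ in range(len(examples))]
    return solutions
```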

    Novelty Search in Competitive Coevolution

    Get PDF
    One of the main motivations for the use of competitive coevolution systems is their ability to capitalise on arms races between competing species to evolve increasingly sophisticated solutions. Such arms races can, however, be hard to sustain, and it has been shown that the competing species often converge prematurely to certain classes of behaviours. In this paper, we investigate if and how novelty search, an evolutionary technique driven by behavioural novelty, can overcome convergence in coevolution. We propose three methods for applying novelty search to coevolutionary systems with two species: (i) score both populations according to behavioural novelty; (ii) score one population according to novelty, and the other according to fitness; and (iii) score both populations with a combination of novelty and fitness. We evaluate the methods in a predator-prey pursuit task. Our results show that novelty-based approaches can evolve a significantly more diverse set of solutions, when compared to traditional fitness-based coevolution. Comment: To appear in 13th International Conference on Parallel Problem Solving from Nature (PPSN 2014).
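
The three scoring variants all rest on a behavioural novelty score, commonly computed as the mean distance to the k nearest neighbours in behaviour space drawn from the current population and an archive. The snippet below sketches that score and the novelty-plus-fitness blend of variant (iii); the function names, k value and weighting are assumptions, not taken from the paper.

```python
import numpy as np

def novelty_score(behaviour, population_behaviours, archive, k=15):
    """Mean behavioural distance to the k nearest neighbours (illustrative)."""
    pool = np.asarray(list(population_behaviours) + list(archive), dtype=float)
    dists = np.linalg.norm(pool - np.asarray(behaviour, dtype=float), axis=1)
    return float(np.sort(dists)[:k].mean())

def combined_score(novelty, fitness, rho=0.5):
    # Variant (iii): blend behavioural novelty with task fitness (rho is assumed).
    return rho * novelty + (1.0 - rho) * fitness
```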

    Neural network controller against environment: A coevolutive approach to generalize robot navigation behavior

    Get PDF
    In this paper, a new coevolutive method, called Uniform Coevolution, is introduced to learn the weights of a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. Introducing coevolution on top of the evolutionary strategy allows the environment to evolve as well, so that a general behavior able to solve the problem in different environments is learned. Using a traditional evolutionary strategy method, without coevolution, the learning process obtains a specialized behavior. All the behaviors obtained, with and without coevolution, have been tested in a set of environments, and the capability of generalization is shown for each learned behavior. A simulator based on the mini-robot Khepera has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.
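
Read at a high level, the method evolves controller weight vectors and navigation environments side by side, so controllers keep being evaluated on environments that currently defeat them. The loop below is a generic illustration of that idea under assumed conventions (run_episode returns a navigation score in [0, 1]; all callables are user-supplied), not the published Uniform Coevolution algorithm.

```python
import numpy as np

def coevolve_controllers(controllers, environments, run_episode,
                         mutate_controller, mutate_environment, generations=200):
    """Evolve controllers and environments against each other (illustrative)."""
    for _ in range(generations):
        scores = np.array([[run_episode(c, e) for e in environments]
                           for c in controllers])
        ctrl_fit = scores.mean(axis=1)        # controllers: do well everywhere
        env_fit = 1.0 - scores.mean(axis=0)   # environments: be hard to solve
        controllers = _select_and_vary(controllers, ctrl_fit, mutate_controller)
        environments = _select_and_vary(environments, env_fit, mutate_environment)
    return controllers

def _select_and_vary(population, fitness, mutate):
    order = np.argsort(fitness)[::-1]
    parents = [population[i] for i in order[: len(population) // 2]]
    return parents + [mutate(p) for p in parents]
```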

    Resource Sharing and Coevolution in Evolving Cellular Automata

    Full text link
    Evolving one-dimensional cellular automata (CAs) with genetic algorithms has provided insight into how improved performance on a task requiring global coordination emerges when only local interactions are possible. Two approaches that can affect the search efficiency of the genetic algorithm are coevolution, in which a population of problems (in our case, initial configurations of the CA lattice) evolves along with the population of CAs; and resource sharing, in which a greater proportion of a limited fitness resource is assigned to those CAs which correctly solve problems that fewer other CAs in the population can solve. Here we present evidence that, in contrast to what has been suggested elsewhere, the improvements observed when both techniques are used together depend largely on resource sharing alone. Comment: 8 pages, 1 figure; http://www.santafe.edu/~evca/rsc.ps.g
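
The resource-sharing rule described here is often implemented as implicit fitness sharing: each test case carries a fixed amount of fitness that is split equally among the CAs that solve it, so a CA that solves a rarely solved initial configuration receives a larger share. The sketch below illustrates that scheme under the assumption of one fitness unit per case; the function names are hypothetical.

```python
def resource_shared_fitness(population, initial_configs, solves):
    """Split each test case's unit of fitness among the CAs that solve it."""
    fitness = {id(ca): 0.0 for ca in population}
    for config in initial_configs:
        solvers = [ca for ca in population if solves(ca, config)]
        if solvers:
            share = 1.0 / len(solvers)   # rarely solved cases are worth more
            for ca in solvers:
                fitness[id(ca)] += share
    return fitness
```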

    Analysing co-evolution among artificial 3D creatures

    Get PDF
    This paper is concerned with the analysis of coevolutionary dynamics among 3D artificial creatures, similar to those introduced by Sims [1]. Coevolution is subject to complex dynamics which are notoriously difficult to analyse. We introduce an improved analysis method based on Master Tournament matrices [2], which we argue is both less costly to compute and more informative than the original method. Based on visible features of the resulting graphs, we can identify particular trends and incidents in the dynamics of coevolution and look for their causes. Finally, considering that coevolutionary progress is not necessarily identical to global overall progress, we extend this analysis by cross-validating individuals from different evolutionary runs, which we argue is more appropriate than single-record analysis methods for evaluating the global performance of individuals.
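
A Master Tournament matrix records how the best individual saved at each generation of one species performs against the best opponent saved at every generation of the other. The sketch below builds such a matrix under assumed conventions (play returns the first argument's score against the second); it illustrates the general technique, not the improved method the paper proposes.

```python
import numpy as np

def master_tournament(best_per_gen_a, best_per_gen_b, play):
    """Play every generation's best of species A against every best of species B."""
    matrix = np.zeros((len(best_per_gen_a), len(best_per_gen_b)))
    for i, a in enumerate(best_per_gen_a):
        for j, b in enumerate(best_per_gen_b):
            matrix[i, j] = play(a, b)
    # Row means estimate how each generation of A fares against the whole
    # history of B; trends in these means hint at progress or stagnation.
    return matrix, matrix.mean(axis=1)
```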

    Spatial Evolutionary Generative Adversarial Networks

    Full text link
    Generative adversarial networks (GANs) suffer from training pathologies such as instability and mode collapse. These pathologies mainly arise from a lack of diversity in their adversarial interactions. Evolutionary generative adversarial networks apply the principles of evolutionary computation to mitigate these problems. We hybridize two of these approaches that promote training diversity. One, E-GAN, injects mutation diversity at each batch by training the (replicated) generator with three independent objective functions and then selecting the best-performing resulting generator for the next batch. The other, Lipizzaner, injects population diversity by training a two-dimensional grid of GANs with a distributed evolutionary algorithm that includes neighbor exchanges of additional training adversaries, performance-based selection, and population-based hyper-parameter tuning. We propose to combine the mutation and population approaches to diversity improvement. We contribute a superior evolutionary GAN training method, Mustangs, that eliminates the single loss function used across Lipizzaner's grid. Instead, in each training round a loss function is selected with equal probability from among the three that E-GAN uses. Experimental analyses on standard benchmarks, MNIST and CelebA, demonstrate that Mustangs provides a statistically faster training method resulting in more accurate networks.
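
The per-round loss selection described above can be sketched as choosing uniformly at random among three generator objectives. The versions below are the standard minimax, non-saturating (heuristic) and least-squares generator losses that E-GAN builds on; the formulations and names here are textbook forms written for illustration, not code from Mustangs or Lipizzaner.

```python
import random
import torch

def minimax_g_loss(d_fake_logits):
    return torch.log(1.0 - torch.sigmoid(d_fake_logits)).mean()

def heuristic_g_loss(d_fake_logits):
    return -torch.log(torch.sigmoid(d_fake_logits)).mean()

def least_squares_g_loss(d_fake_logits):
    return 0.5 * ((d_fake_logits - 1.0) ** 2).mean()

GENERATOR_LOSSES = [minimax_g_loss, heuristic_g_loss, least_squares_g_loss]

def pick_generator_loss():
    # Each training round, one objective is drawn with equal probability.
    return random.choice(GENERATOR_LOSSES)
```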

    A general learning co-evolution method to generalize autonomous robot navigation behavior

    Get PDF
    Congress on Evolutionary Computation. La Jolla, CA, 16-19 July 2000. A new coevolutive method, called Uniform Coevolution, is introduced to learn weights for a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. The coevolutive method allows the evolution of the environment, to learn a general behavior able to solve the problem in different environments. Using a traditional evolutionary strategy method without coevolution, the learning process obtains a specialized behavior. All the behaviors obtained, with or without coevolution, have been tested in a set of environments and the capability for generalization has been shown for each learned behavior. A simulator based on the mini-robot Khepera has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.
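
As a complement to the coevolution loop sketched after the earlier Uniform Coevolution abstract, the snippet below illustrates the kind of (mu + lambda) evolution strategy step that such work uses to adapt the controller's weight vectors; the population layout, mutation strength and selection sizes are assumptions for illustration only.

```python
import numpy as np

def es_step(parents, fitness_fn, sigma=0.1, offspring_per_parent=4, rng=None):
    """One (mu + lambda) step over flat weight vectors (illustrative)."""
    rng = rng or np.random.default_rng()
    offspring = [p + sigma * rng.standard_normal(p.shape)
                 for p in parents for _ in range(offspring_per_parent)]
    pool = list(parents) + offspring
    pool.sort(key=fitness_fn, reverse=True)
    return pool[: len(parents)]          # keep the best mu weight vectors
```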

    Autonomous virulence adaptation improves coevolutionary optimization

    Get PDF