
    Evolvability signatures of generative encodings: beyond standard performance benchmarks

    Evolutionary robotics is a promising approach to autonomously synthesize machines with abilities that resemble those of animals, but the field suffers from a lack of strong foundations. In particular, evolutionary systems are currently assessed solely by the fitness score their evolved artifacts can achieve for a specific task, whereas such fitness-based comparisons provide limited insights about how the same system would fare on different tasks, or about its adaptive capability to respond to changes in fitness (e.g., caused by damage to the machine or by new situations). To counter these limitations, we introduce the concept of "evolvability signatures", which picture the post-mutation statistical distribution of both behavior diversity (how different are the robot behaviors after a mutation?) and fitness values (how different is the fitness after a mutation?). We tested the relevance of this concept by evolving controllers for hexapod robot locomotion using five different genotype-to-phenotype mappings (direct encoding, generative encoding of open-loop and closed-loop central pattern generators, generative encoding of neural networks, and single-unit pattern generators (SUPG)). We observed a predictive relationship between the evolvability signature of each encoding and the number of generations required by hexapods to adapt to incurred damage. Our study also reveals that, across the five investigated encodings, the SUPG scheme achieved the best evolvability signature and was consistently the first to recover an effective gait following robot damage. Overall, our evolvability signatures neatly complement existing task-performance benchmarks, and pave the way for stronger foundations for research in evolutionary robotics. Comment: 24 pages with 12 figures in the main text, and 4 supplementary figures. Accepted at the Information Sciences journal (in press). Supplemental videos are available online at http://goo.gl/uyY1R
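
    The notion of an evolvability signature above is concrete enough to sketch: sample mutants of a genome and record, for each, the distance between behaviors and the change in fitness. The sketch below assumes hypothetical `mutate`, `evaluate_behavior` (returning a behavior-descriptor vector), and `evaluate_fitness` callables; it is an illustration of the idea, not the paper's implementation.

```python
# Hedged sketch: estimating an "evolvability signature" as described above.
import numpy as np

def evolvability_signature(genome, mutate, evaluate_behavior, evaluate_fitness,
                           n_mutants=200):
    """Return post-mutation samples of (behavior distance, fitness change)."""
    base_behavior = np.asarray(evaluate_behavior(genome))
    base_fitness = evaluate_fitness(genome)
    samples = []
    for _ in range(n_mutants):
        mutant = mutate(genome)
        # Behavior diversity: how different is the behavior after one mutation?
        d_behavior = np.linalg.norm(np.asarray(evaluate_behavior(mutant)) - base_behavior)
        # Fitness change: how different is the fitness after one mutation?
        d_fitness = evaluate_fitness(mutant) - base_fitness
        samples.append((d_behavior, d_fitness))
    return np.array(samples)  # shape (n_mutants, 2); visualize as a 2-D distribution
```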

    Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis

    One of the main challenges in the field of embodied artificial intelligence is the open-ended autonomous learning of complex behaviours. Our approach is to use task-independent, information-driven intrinsic motivation(s) to support task-dependent learning. The work presented here is a preliminary step in which we investigate the predictive information (the mutual information of the past and future of the sensor stream) as an intrinsic drive, ideally supporting any kind of task acquisition. Previous experiments have shown that the predictive information (PI) is a good candidate to support autonomous, open-ended learning of complex behaviours, because a maximisation of the PI corresponds to an exploration of morphology- and environment-dependent behavioural regularities. The idea is that these regularities can then be exploited in order to solve any given task. Three different experiments are presented, and their results lead to the conclusion that the linear combination of the one-step PI with an external reward function is not generally recommended in an episodic policy gradient setting: only for hard tasks can a great speed-up be achieved, and it comes at the cost of a loss in asymptotic performance.
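
    As a concrete illustration of the combination the abstract critiques, the sketch below builds an episodic return as a weighted sum of an external reward and the one-step predictive information accumulated along the sensor stream. The weighting scheme and the `estimate_one_step_pi` estimator (mutual information between consecutive sensor readings) are assumptions for illustration only.

```python
# Hedged sketch of a linear combination of one-step predictive information
# with an external reward, evaluated over one episode.
def combined_return(sensor_trace, external_rewards, estimate_one_step_pi, alpha=0.5):
    """Episodic return: alpha-weighted external reward plus (1 - alpha)-weighted PI."""
    pi_term = sum(estimate_one_step_pi(s_t, s_next)
                  for s_t, s_next in zip(sensor_trace, sensor_trace[1:]))
    reward_term = sum(external_rewards)
    return alpha * reward_term + (1.0 - alpha) * pi_term
```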

    Fast Damage Recovery in Robotics with the T-Resilience Algorithm

    Damage recovery is critical for autonomous robots that need to operate for a long time without assistance. Most current methods are complex and costly because they require anticipating each potential damage in order to have a contingency plan ready. As an alternative, we introduce the T-Resilience algorithm, a new algorithm that allows robots to quickly and autonomously discover compensatory behaviors in unanticipated situations. This algorithm equips the robot with a self-model and discovers new behaviors by learning to avoid those that perform differently in the self-model and in reality. Our algorithm thus does not identify the damaged parts but instead implicitly searches for efficient behaviors that do not use them. We evaluate the T-Resilience algorithm on a hexapod robot that needs to adapt to leg removal, broken legs and motor failures; we compare it to stochastic local search, policy gradient and the self-modeling algorithm proposed by Bongard et al. The behavior of the robot is assessed on-board using an RGB-D sensor and a SLAM algorithm. Using only 25 tests on the robot and an overall running time of 20 minutes, T-Resilience consistently leads to substantially better results than the other approaches.
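
    The core selection idea, preferring behaviors whose self-model performance is expected to carry over to the damaged robot, can be sketched as below. The `predict_transferability` regression model (trained on the few controllers actually tested on-board) and the threshold value are illustrative assumptions, not the exact mechanics of T-Resilience.

```python
# Hedged sketch: pick the best simulated controller among those expected to
# transfer from the self-model to the real, possibly damaged robot.
def select_compensatory_behavior(candidates, simulated_fitness,
                                 predict_transferability, threshold=0.8):
    transferable = [c for c in candidates
                    if predict_transferability(c) >= threshold]
    if not transferable:  # fall back to the candidate expected to transfer best
        return max(candidates, key=predict_transferability)
    return max(transferable, key=simulated_fitness)
```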

    Evolving a Behavioral Repertoire for a Walking Robot

    Numerous algorithms have been proposed to allow legged robots to learn to walk. However, the vast majority of these algorithms are devised to learn to walk in a straight line, which is not sufficient to accomplish any real-world mission. Here we introduce the Transferability-based Behavioral Repertoire Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that simultaneously discovers several hundred simple walking controllers, one for each possible direction. By taking advantage of solutions that are usually discarded by evolutionary processes, TBR-Evolution is substantially faster than independently evolving each controller. Our technique relies on two methods: (1) novelty search with local competition, which searches for both high-performing and diverse solutions, and (2) the transferability approach, which combines simulations and real tests to evolve controllers for a physical robot. We evaluate this new technique on a hexapod robot. Results show that with only a few dozen short experiments performed on the robot, the algorithm learns a repertoire of controllers that allows the robot to reach every point in its reachable space. Overall, TBR-Evolution introduces a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot. Comment: 33 pages; Evolutionary Computation Journal 201
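
    Once such a repertoire exists, using it at runtime amounts to a nearest-neighbor lookup: given a target displacement, execute the controller whose evolved endpoint is closest to it. The dictionary layout below is an assumed illustration, not the paper's data structure.

```python
# Hedged sketch: query a behavioral repertoire that maps reachable (x, y)
# endpoints to walking-controller parameters.
import math

def closest_controller(repertoire, target_xy):
    """Return the controller whose endpoint is nearest to the requested target."""
    return min(repertoire.items(),
               key=lambda item: math.dist(item[0], target_xy))[1]

# Example usage with a toy repertoire:
# repertoire = {(0.0, 0.3): params_a, (0.5, -0.2): params_b}
# controller = closest_controller(repertoire, (0.4, -0.1))
```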

    Using Centroidal Voronoi Tessellations to Scale Up the Multi-dimensional Archive of Phenotypic Elites Algorithm

    The recently introduced Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) is an evolutionary algorithm capable of producing a large archive of diverse, high-performing solutions in a single run. It works by discretizing a continuous feature space into unique regions according to the desired discretization per dimension. While simple, this algorithm has a main drawback: it cannot scale to high-dimensional feature spaces, since the number of regions increases exponentially with the number of dimensions. In this paper, we address this limitation by introducing a simple extension of MAP-Elites that has a constant, pre-defined number of regions irrespective of the dimensionality of the feature space. Our main insight is that methods from computational geometry can partition a high-dimensional space into well-spread geometric regions. In particular, our algorithm uses a centroidal Voronoi tessellation (CVT) to divide the feature space into a desired number of regions; it then places every generated individual in its closest region, replacing a less fit one if the region is already occupied. We demonstrate the effectiveness of the new "CVT-MAP-Elites" algorithm in high-dimensional feature spaces through comparisons against MAP-Elites in maze navigation and hexapod locomotion tasks.
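
    The two ingredients of CVT-MAP-Elites described above, a fixed set of well-spread centroids and an insertion rule based on the nearest centroid, can be sketched as follows. Approximating the CVT with k-means on uniformly sampled points is one common construction and an assumption here; the paper's exact procedure may differ.

```python
# Hedged sketch of a CVT-style archive: k centroids partition the feature
# space, and each cell keeps only its fittest occupant.
import numpy as np
from scipy.cluster.vq import kmeans

def build_centroids(n_cells, n_dims, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.random((n_samples, n_dims))   # uniform points in [0, 1]^n_dims
    centroids, _ = kmeans(samples, n_cells)     # k-means approximates the CVT
    return centroids

def add_to_archive(archive, centroids, descriptor, fitness, solution):
    """archive: dict mapping cell index -> (fitness, solution)."""
    cell = int(np.argmin(np.linalg.norm(centroids - descriptor, axis=1)))
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, solution)     # replace a less fit occupant
    return archive
```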

    Biologically Inspired Robots


    Intelligent approaches in locomotion - a review


    On the Evolutionary Co-Adaptation of Morphology and Distributed Neural Controllers in Adaptive Agents

    The attempt to evolve complete embodied and situated artificial creatures, in which both morphological and control characteristics are adapted during the evolutionary process, has been and still represents a key long-term goal for the artificial life and evolutionary robotics communities. Loosely inspired by ancient biological organisms that lack a central nervous system and by simple organisms such as stick insects, this thesis proposes a new genotype encoding which allows the development and evolution of morphology and neural controller in artificial agents equipped with a distributed neural network. In order to understand whether this kind of network is appropriate for the evolution of non-trivial behaviours in artificial agents, two experiments (described, with results, in chapter 3) were performed in which evolution was applied only to the controller’s parameters. The results obtained in the first experiment demonstrated how distributed neural networks can achieve a good level of organization by synchronizing the output of oscillatory elements, exploiting acceleration/deceleration mechanisms based on local interactions. In the second experiment a few variants of the topology of the neural architecture were introduced. Results showed how this new control system was able to coordinate the legs of a simulated hexapod robot in two different gaits on the basis of the external circumstances. After this preliminary and successful investigation, a new genotype encoding able to develop and evolve artificial agents with no fixed morphology and with a distributed neural controller was proposed. A second set of experiments was thus performed, and the results obtained confirmed both the effectiveness of the genotype encoding and the ability of the distributed neural network to perform the given task. The results have also shown the strength of the genotype both in generating a wide range of different morphological structures and in favouring a direct co-adaptation between neural controller and morphology during the evolutionary process. Furthermore, the simplicity of the proposed model has shown the effective role of specific elements in evolutionary experiments. In particular, it has demonstrated the importance of the environment and its complexity in evolving non-trivial behaviours, and also how adding an independent component to the fitness function can help the evolutionary process explore a larger space of solutions, avoiding premature convergence towards suboptimal solutions.

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: this map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
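
    The adaptation loop described above can be sketched as follows: the pre-computed behavior-performance map acts as a prior, the robot tries the behavior it currently expects to perform best, and each real measurement corrects the expectations of similar behaviors. The simple similarity-weighted correction and the stopping threshold are placeholders for the paper's actual model, included only to make the loop concrete.

```python
# Hedged sketch of map-guided trial-and-error adaptation after damage.
def adapt_after_damage(behavior_map, try_on_robot, similarity, stop_at, max_trials=20):
    """behavior_map: dict mapping a behavior to its expected (pre-damage) performance."""
    expected = dict(behavior_map)
    tested = set()
    best_seen, best_perf = None, float("-inf")
    for _ in range(max_trials):
        # Most promising behavior not yet tried on the damaged robot
        # (assumes the map holds more behaviors than max_trials).
        candidate = max((b for b in expected if b not in tested), key=expected.get)
        observed = try_on_robot(candidate)            # one physical trial
        tested.add(candidate)
        if observed > best_perf:
            best_seen, best_perf = candidate, observed
        if observed >= stop_at:
            break                                     # good-enough compensatory behavior
        error = observed - expected[candidate]
        for b in expected:                            # propagate the surprise to
            expected[b] += error * similarity(b, candidate)   # similar behaviors
    return best_seen
```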