Embodied Evolution in Collective Robotics: A Review
This paper provides an overview of evolutionary robotics techniques applied
to on-line distributed evolution for robot collectives -- namely, embodied
evolution. It provides a definition of embodied evolution as well as a thorough
description of the underlying concepts and mechanisms. The paper also presents
a comprehensive summary of research published in the field since its inception
(1999-2017), providing various perspectives to identify the major trends. In
particular, we identify a shift from considering embodied evolution as a
parallel search method within small robot collectives (fewer than 10 robots) to
embodied evolution as an on-line distributed learning method for designing
collective behaviours in swarm-like collectives. The paper concludes with a
discussion of applications and open questions, providing a milestone for past
research and an inspiration for future work.
Comment: 23 pages, 1 figure, 1 table
Comparison of Selection Methods in On-line Distributed Evolutionary Robotics
In this paper, we study the impact of selection methods in the context of
on-line on-board distributed evolutionary algorithms. We propose a variant of
the mEDEA algorithm in which we add a selection operator, and we apply it in a
task-driven scenario. We evaluate four selection methods that induce different
intensities of selection pressure in a multi-robot navigation task with obstacle
avoidance and a collective foraging task. Experiments show that a small
intensity of selection pressure is sufficient to rapidly obtain good
performance on the tasks at hand. We introduce different measures to compare
the selection methods, and show that the higher the selection pressure, the
better the performance obtained, especially for the more challenging food
foraging task.
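The abstract does not name the four selection methods, but a common way to vary selection intensity in evolutionary algorithms is the tournament size. A minimal sketch (illustrative names, not the paper's implementation) of how tournament size controls selection pressure:

```python
import random

def tournament_select(population, fitness, k):
    """Pick one parent by tournament selection; larger k means stronger
    selection pressure. With k = 1 this degenerates to uniform random
    selection (no pressure); with k = len(population) the fittest
    individual always wins."""
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitness[i])
    return population[best]

# Toy example: four selection intensities for a population of 10 genomes.
population = [f"genome_{i}" for i in range(10)]
fitness = {i: i for i in range(10)}  # genome_9 is the fittest

random.seed(0)
for k in (1, 2, 5, 10):
    picks = [tournament_select(population, fitness, k) for _ in range(1000)]
    share_best = picks.count("genome_9") / len(picks)
    print(f"k={k:2d}: fittest genome selected {share_best:.0%} of the time")
```

As k grows, the share of selections going to the fittest genome rises toward 100%, which is the "intensity of selection pressure" the measures in the paper are meant to compare.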
Open-Ended Evolutionary Robotics: an Information Theoretic Approach
This paper is concerned with designing self-driven fitness functions for
Embedded Evolutionary Robotics. The proposed approach considers the entropy of
the sensori-motor stream generated by the robot controller. This entropy is
computed using unsupervised learning; its maximization, achieved by an on-board
evolutionary algorithm, implements a "curiosity instinct", favouring
controllers visiting many diverse sensori-motor states (sms). Further, the set
of sms discovered by an individual can be transmitted to its offspring, making
a cultural evolution mode possible. Cumulative entropy (computed from ancestors
and current individual visits to the sms) defines another self-driven fitness;
its optimization implements a "discovery instinct", as it favours controllers
visiting new or rare sensori-motor states. Empirical results on the benchmark
problems proposed by Lehman and Stanley (2008) comparatively demonstrate the
merits of the approach.
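A minimal sketch of the entropy-based fitness the abstract describes, assuming the sensori-motor stream has already been discretised into states (the unsupervised-learning step that produces the discretisation is omitted):

```python
import math
from collections import Counter

def entropy_fitness(sms_visits):
    """Shannon entropy (in bits) of a controller's visit distribution over
    discretised sensori-motor states. Maximising this rewards controllers
    that visit many diverse states (a "curiosity instinct")."""
    counts = Counter(sms_visits)
    total = sum(counts.values())
    return sum((c / total) * math.log2(total / c) for c in counts.values())

# A controller stuck in one state scores 0; spreading visits scores higher.
print(entropy_fitness(["s0"] * 100))                    # 0.0
print(entropy_fitness(["s0", "s1", "s2", "s3"] * 25))   # 2.0 (uniform over 4 states)
```

The cumulative "discovery instinct" variant could then be obtained by passing the concatenation of the ancestors' recorded visits and the current individual's visits to the same function, so that new or rarely visited states contribute the most.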
Evolvability signatures of generative encodings: beyond standard performance benchmarks
Evolutionary robotics is a promising approach to autonomously synthesize
machines with abilities that resemble those of animals, but the field suffers
from a lack of strong foundations. In particular, evolutionary systems are
currently assessed solely by the fitness score their evolved artifacts can
achieve for a specific task, whereas such fitness-based comparisons provide
limited insight into how the same system would perform on different tasks, or
into its capacity to adapt to changes in fitness (e.g., after damage to the
machine, or in new situations). To counter these limitations, we
introduce the concept of "evolvability signatures", which picture the
post-mutation statistical distribution of both behavior diversity (how
different are the robot behaviors after a mutation?) and fitness values (how
different is the fitness after a mutation?). We tested the relevance of this
concept by evolving controllers for hexapod robot locomotion using five
different genotype-to-phenotype mappings (direct encoding, generative encoding
of open-loop and closed-loop central pattern generators, generative encoding of
neural networks, and single-unit pattern generators (SUPG)). We observed a
predictive relationship between the evolvability signature of each encoding and
the number of generations the hexapods required to recover from damage.
Our study also reveals that, across the five investigated encodings, the SUPG
scheme achieved the best evolvability signature, and was always the fastest in
recovering an effective gait following robot damage. Overall, our evolvability
signatures neatly complement existing task-performance benchmarks, and pave the
way for stronger foundations for research in evolutionary robotics.
Comment: 24 pages with 12 figures in the main text, and 4 supplementary
figures. Accepted at the Information Sciences journal (in press). Supplemental
videos are available online at http://goo.gl/uyY1R
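The core idea of an evolvability signature can be illustrated with a toy sketch: sample many mutants of a genome and record the joint distribution of behaviour distance and fitness change. The encoding, mutation operator, and one-dimensional behaviour descriptor below are purely illustrative, not the paper's hexapod setup:

```python
import random

def evolvability_signature(genome, mutate, behavior, fitness, n_mutants=100):
    """Estimate an evolvability signature: the post-mutation statistical
    distribution of (behaviour distance, fitness change) pairs.

    `mutate`, `behavior`, and `fitness` stand in for an encoding's mutation
    operator, behaviour descriptor, and task fitness."""
    b0, f0 = behavior(genome), fitness(genome)
    samples = []
    for _ in range(n_mutants):
        m = mutate(genome)
        samples.append((abs(behavior(m) - b0), fitness(m) - f0))
    return samples

# Toy encoding: the genome is a list of floats; behaviour and fitness are
# simple functions of it.
random.seed(0)
sig = evolvability_signature(
    [0.5] * 4,
    mutate=lambda g: [v + random.gauss(0, 0.1) for v in g],
    behavior=sum,                                   # toy 1-D behaviour descriptor
    fitness=lambda g: -sum((v - 1.0) ** 2 for v in g),
)
print(len(sig))  # 100 (behaviour-distance, fitness-change) pairs
```

Comparing the resulting distributions across encodings (rather than a single fitness score) is what lets the signature predict how quickly an encoding adapts after damage.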
Improving the adaptability of simulated evolutionary swarm robots in dynamically changing environments
One of the important challenges in the field of evolutionary robotics is the development of systems that can adapt to a changing environment. However, the ability to adapt to unknown and fluctuating environments is not straightforward. Here, we explore the adaptive potential of simulated swarm robots that contain a genomic encoding of a bio-inspired gene regulatory network (GRN). An artificial genome is combined with a flexible agent-based system, representing the activated part of the regulatory network that transduces environmental cues into phenotypic behaviour. Using an artificial life simulation framework that mimics a dynamically changing environment, we show that separating the static from the conditionally active part of the network contributes to better adaptive behaviour. Furthermore, in contrast with most ANN-based systems developed to date, which need to re-optimize their complete controller network from scratch each time they are subjected to novel conditions, our system uses its genome to store GRNs whose performance was optimized under a particular environmental condition for a sufficiently long time. When subjected to a new environment, the previous condition-specific GRN might become inactivated, but remains present. This ability to store 'good behaviour' and to disconnect it from the novel rewiring that is essential under a new condition allows faster re-adaptation if any of the previously observed environmental conditions is re-encountered. As we show here, applying these evolutionary-based principles leads to accelerated and improved adaptive evolution in a non-stable environment.
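The genome-as-archive mechanism described above can be sketched as follows. All names are illustrative, and a string stands in for a condition-specific GRN; the point is only that a stored controller is kept inactive rather than overwritten, so a known condition triggers reactivation instead of re-optimisation:

```python
class GenomeStore:
    """Sketch of storing condition-specific controllers in a genome so that
    re-encountering a known environment reactivates the stored behaviour
    instead of re-optimising from scratch."""

    def __init__(self):
        self.stored = {}  # environmental condition -> stored controller (GRN)

    def adapt(self, condition, optimise):
        if condition in self.stored:
            # Known condition: reactivate the stored network immediately.
            return self.stored[condition], "reactivated"
        controller = optimise(condition)      # slow, from-scratch optimisation
        self.stored[condition] = controller   # keep it, even once inactive
        return controller, "optimised"

genome = GenomeStore()
genome.adapt("hot", lambda c: f"GRN tuned for {c}")   # first exposure: optimised
genome.adapt("cold", lambda c: f"GRN tuned for {c}")  # 'hot' GRN now inactive, but kept
controller, how = genome.adapt("hot", lambda c: f"GRN tuned for {c}")
print(how)  # reactivated
```

The speed-up on re-encountered conditions comes from the dictionary hit replacing the optimisation call entirely.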
Combating catastrophic forgetting with developmental compression
Generally intelligent agents exhibit successful behavior across problems in
several settings. Endemic in approaches to realize such intelligence in
machines is catastrophic forgetting: sequential learning corrupts knowledge
obtained earlier in the sequence, or tasks antagonistically compete for system
resources. Methods for obviating catastrophic forgetting have sought to
identify and preserve features of the system necessary to solve one problem
when learning to solve another, or to enforce modularity such that minimally
overlapping sub-functions contain task specific knowledge. While successful,
both approaches scale poorly because they require larger architectures as the
number of training instances grows, causing different parts of the system to
specialize for separate subsets of the data. Here we present a method for
addressing catastrophic forgetting called developmental compression. It
exploits the mild impacts of developmental mutations to lessen adverse changes
to previously-evolved capabilities and `compresses' specialized neural networks
into a generalized one. In the absence of domain knowledge, developmental
compression produces systems that avoid overt specialization, alleviating the
need to engineer a bespoke system for every task permutation and suggesting
better scalability than existing approaches. We validate this method on a robot
control problem and hope to extend this approach to other machine learning
domains in the future
The distributed co-evolution of an on-board simulator and controller for swarm robot behaviours
We investigate the reality gap, specifically the environmental correspondence of an on-board simulator. We describe a novel distributed co-evolutionary approach to improve the transference of controllers that co-evolve with an on-board simulator. A novelty of our approach is the potential to improve transference between simulation and reality without an explicit measurement between the two domains. We hypothesise that variation in on-board simulator environment models across many robots can be exploited competitively: the real controller fitness values across the robots can be taken as indicative of how well each on-board simulator corresponds to the environment, and can be used to inform the distributed evolution of an on-board simulator environment model without explicit measurement of the real environment. Our results demonstrate that our approach creates an adaptive relationship between the on-board simulator environment model, the real-world behaviour of the robots, and the state of the real environment. The results indicate that our approach is sensitive to whether the real behavioural performance of the robot is informative about the state of the real environment. © 2014 Springer-Verlag Berlin Heidelberg
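A toy sketch of the hypothesised mechanism, with illustrative names and data: robots broadcast the real-world fitness their controllers achieved, and the simulator environment model behind the best score is propagated (with mutation) to the collective, with no direct simulator-versus-reality measurement anywhere:

```python
import random

def propagate_models(robots, mutate):
    """One distributed selection step (illustrative, not the paper's
    algorithm): each robot adopts a mutated copy of the environment model
    whose controller scored highest in reality. Real fitness stands in
    for the unmeasured simulator-to-reality correspondence."""
    best_model, _ = max(robots, key=lambda r: r[1])
    return [mutate(best_model) for _ in robots]

# Each robot holds (simulator environment model, real controller fitness).
robots = [({"friction": 0.2}, 0.4), ({"friction": 0.6}, 0.9), ({"friction": 0.9}, 0.1)]
random.seed(0)
new_models = propagate_models(
    robots,
    lambda m: {"friction": m["friction"] + random.gauss(0, 0.05)},
)
print(len(new_models))  # one updated environment model per robot
```

This also makes the paper's caveat visible: if real fitness is uninformative about the environment's state, the `max` step selects models arbitrarily and the adaptive relationship breaks down.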