14 research outputs found

    Comparison of Selection Methods in On-line Distributed Evolutionary Robotics

    In this paper, we study the impact of selection methods in the context of on-line, on-board distributed evolutionary algorithms. We propose a variant of the mEDEA algorithm in which we add a selection operator, and we apply it in a task-driven scenario. We evaluate four selection methods that induce different intensities of selection pressure on a multi-robot navigation-with-obstacle-avoidance task and a collective foraging task. Experiments show that a small intensity of selection pressure is sufficient to rapidly obtain good performance on the tasks at hand. We introduce different measures to compare the selection methods, and show that the higher the selection pressure, the better the performance obtained, especially for the more challenging food foraging task.
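    The selection operator added on top of mEDEA can be sketched as a choice among the genomes a robot has received locally, with a tunable intensity of selection pressure. The function name, the pool representation, and the pressure-interpolation scheme below are illustrative assumptions, not the paper's exact operators:

    ```python
    import random

    def select_parent(local_pool, pressure=1.0):
        """Pick a parent genome from the genomes a robot gathered locally.

        local_pool: list of (genome, fitness) pairs collected during the
        listening phase. `pressure` interpolates between uniform random
        choice (0.0, as in the original mEDEA, no selection pressure)
        and always taking the best-rated genome (1.0, elitist choice).
        """
        if random.random() > pressure:
            genome, _ = random.choice(local_pool)            # random mating
        else:
            genome, _ = max(local_pool, key=lambda p: p[1])  # best-of-pool
        return genome

    pool = [("g1", 0.2), ("g2", 0.9), ("g3", 0.5)]
    print(select_parent(pool, pressure=1.0))  # → g2
    ```

    With `pressure=0.0` the operator degenerates to mEDEA's random genome adoption, which is one way to read the paper's comparison of selection intensities.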

    Embodied Evolution in Collective Robotics: A Review

    This paper provides an overview of evolutionary robotics techniques applied to on-line distributed evolution for robot collectives -- namely, embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. The paper also presents a comprehensive summary of research published in the field since its inception (1999-2017), providing various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots) to embodied evolution as an on-line distributed learning method for designing collective behaviours in swarm-like collectives. The paper concludes with a discussion of applications and open questions, providing a milestone for past and an inspiration for future research. Comment: 23 pages, 1 figure, 1 table.

    Learning Collaborative Foraging in a Swarm of Robots using Embodied Evolution

    In this paper, we study how a swarm of robots adapts over time to solve a collaborative task using a distributed Embodied Evolutionary approach, where each robot runs an evolutionary algorithm and the robots locally exchange genomes and fitness values. In particular, we study a collaborative foraging task, where the robots are rewarded for collecting food items that are too heavy to be collected individually and require at least two robots. Further, the robots also need to display a signal matching the color of the item with an additional effector. Our experiments show that the distributed algorithm is able to evolve swarm behavior to collect items cooperatively. The experiments also reveal that effective cooperation evolves mostly due to the ability of robots to jointly reach food items, while learning to display the right color that matches the item is done suboptimally. However, a closer analysis shows that, without any mechanism to avoid neglecting some kind of item, robots collect all of them, which means that there is some degree of learning to choose the right value for the color effector depending on the situation.
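    The collection condition in this task can be sketched as a simple predicate: an item is collected only when enough robots are at it and display the matching color. This is an illustrative reading of the task rules described in the abstract, not the authors' simulator code:

    ```python
    def item_collected(item_color, signals_at_item, min_robots=2):
        """Return True if the item can be collected.

        signals_at_item: colors currently displayed by the robots within
        reach of the item. Whether *all* attending robots must match, or
        only min_robots of them, is an assumption of this sketch.
        """
        matching = [c for c in signals_at_item if c == item_color]
        return len(matching) >= min_robots

    print(item_collected("red", ["red", "red", "blue"]))  # → True
    print(item_collected("red", ["red"]))                 # → False
    ```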

    Influence of Selection Pressure in Online, Distributed Evolutionary Robotics

    The effect of selection pressure on evolution in centralized evolutionary algorithms (EAs) is relatively well understood: selection pressure pushes evolution toward better-performing individuals. However, distributed EAs in an Evolutionary Robotics (ER) context differ in that the population is distributed across the agents, and a global view of all the individuals is not available. In this paper, we analyze the influence of selection pressure in such a distributed context. We propose a version of mEDEA that adds selection pressure, and evaluate its effect on two multi-robot tasks: navigation with obstacle avoidance, and collective foraging. Experiments show that even small intensities of selection pressure lead to good performance, and that performance increases with selection pressure. This contrasts with the lower selection pressure usually preferred in centralized approaches to avoid stagnating in local optima.

    When Mating Improves On-line Collective Robotics



    Seeking Specialization Through Novelty in Distributed Online Collective Robotics

    Online Embodied Evolution is a distributed learning method for collective heterogeneous robotic swarms, in which evolution is carried out in a decentralized manner. In this work, we address the problem of promoting reproductive isolation, a feature that has been identified as crucial in situations where behavioral specialization is desired. We hypothesize that one way to allow a swarm of robots to specialize on different tasks is through the promotion of diversity. Our contribution is twofold: we describe a method that allows a swarm of heterogeneous agents evolving online to maintain a high degree of diversity in behavioral space, in which selection is based on originality, and we introduce a behavioral distance measure that compares behaviors under the same conditions to provide reliable measurements in online distributed settings. We test the hypothesis on a concurrent foraging task, and the experiments show that diversity is indeed preserved and that different behaviors emerge in the swarm, suggesting the emergence of reproductive isolation. Finally, we employ different analysis tools from computational biology that further support this claim.
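    Originality-based selection of this kind is commonly scored as the mean distance to the k nearest behaviors observed so far. The sketch below assumes fixed-length behavior descriptor vectors recorded under the same evaluation conditions; the descriptor choice and the value of k are illustrative, not the paper's settings:

    ```python
    import math

    def novelty(behavior, archive, k=3):
        """Originality score of `behavior`: mean Euclidean distance to
        its k nearest neighbors among previously observed behaviors.
        Higher scores mark behaviors unlike anything seen so far."""
        dists = sorted(math.dist(behavior, other) for other in archive)
        return sum(dists[:k]) / min(k, len(dists))

    archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    print(novelty((2.0, 2.0), archive, k=2))  # mean of the two closest distances
    ```

    In a distributed setting, each robot would score candidates against its own locally gathered archive rather than a global one.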

    odNEAT: an algorithm for decentralised online evolution of robotic controllers

    Online evolution gives robots the capacity to learn new tasks and to adapt to changing environmental conditions during task execution. Previous approaches to online evolution of neural controllers are typically limited to the optimisation of weights in networks with a prespecified, fixed topology. In this article, we propose a novel approach to online learning in groups of autonomous robots called odNEAT. odNEAT is a distributed and decentralised neuroevolution algorithm that evolves both weights and network topology. We demonstrate odNEAT in three multirobot tasks: aggregation, integrated navigation and obstacle avoidance, and phototaxis. Results show that odNEAT approximates the performance of rtNEAT, an efficient centralised method, and outperforms IM-(μ+1), a decentralised neuroevolution algorithm. Compared with rtNEAT and IM-(μ+1), odNEAT's evolutionary dynamics lead to the synthesis of less complex neural controllers with superior generalisation capabilities. We show that robots executing odNEAT can display a high degree of fault tolerance, as they are able to adapt and learn new behaviours in the presence of faults. We conclude with a series of ablation studies to analyse the impact of each algorithmic component on performance.
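    The key ingredient that distinguishes odNEAT from weight-only methods is structural mutation. A NEAT-style "add node" mutation can be sketched as splitting an existing connection in two; the genome representation below is a toy illustration, and odNEAT's own bookkeeping (innovation numbers, the internal population, tabu lists) is omitted:

    ```python
    import random

    def add_node_mutation(genome, next_node_id):
        """NEAT-style structural mutation sketch: pick a random
        connection (src, dst, weight), replace it with src→new (weight
        1.0) and new→dst (original weight), growing the topology while
        initially preserving the network's behaviour."""
        connections, nodes = genome
        src, dst, weight = random.choice(connections)
        connections.remove((src, dst, weight))
        nodes.append(next_node_id)
        connections.append((src, next_node_id, 1.0))
        connections.append((next_node_id, dst, weight))
        return genome

    connections, nodes = add_node_mutation(([(0, 1, 0.7)], [0, 1]), next_node_id=2)
    print(nodes)        # → [0, 1, 2]
    print(connections)  # → [(0, 2, 1.0), (2, 1, 0.7)]
    ```

    Setting the incoming weight to 1.0 and keeping the original weight on the outgoing edge is the standard NEAT convention for making the mutation behaviour-neutral at birth.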

    Evolutionary Mechanisms for the Online Automatic Design of Boolean Networks for Robot Swarms

    In online evolutionary robotics approaches, an evolutionary algorithm runs on the robots during task execution to continuously optimize their behavior. The main motivation behind the use of multi-robot systems has been to exploit the potential acceleration of evolution due to robots evolving controllers in parallel and exchanging candidate solutions for the task. In this thesis, we implemented and analyzed three online automatic-design evolutionary mechanisms that enable a swarm of robots to autonomously learn to perform given tasks, using controllers based on Random Boolean Networks. One mechanism guides the robots through an independent evolution, with no information exchange, while the remaining two allow the swarm, through different techniques, to perform a global synchronization on the best network found up to a given moment. The results obtained show that using global synchronization is a very promising approach for the evolution of robot swarms, and that the mechanisms employing it are able to generate optimal Boolean networks for different coverage tasks.
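    A Random Boolean Network controller of the kind used in the thesis can be sketched as a synchronous update over binary nodes. The wiring and truth tables below are toy values; in the thesis they are precisely the object of the online evolutionary search:

    ```python
    def rbn_step(state, wiring, functions):
        """One synchronous update of a Random Boolean Network.

        state: tuple of node values (0/1); wiring[i] lists the indices
        of node i's regulators; functions[i] maps the regulators' value
        tuple to node i's next value.
        """
        return tuple(
            functions[i][tuple(state[j] for j in wiring[i])]
            for i in range(len(state))
        )

    # 2-node toy network: each node reads the other;
    # node 0 copies its input, node 1 negates it
    wiring = [(1,), (0,)]
    functions = [
        {(0,): 0, (1,): 1},  # identity
        {(0,): 1, (1,): 0},  # negation
    ]
    print(rbn_step((1, 0), wiring, functions))  # → (0, 0)
    ```

    In a robot controller, some nodes would be clamped to sensor readings and others read out as actuator commands at each step.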

    Online evolution of robot behaviour

    Master's thesis in Informatics Engineering (Interaction and Knowledge), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2012. In this dissertation, we propose and evaluate two novel approaches to the online synthesis of neural controllers for autonomous robots. The first approach is odNEAT, an online, distributed, and decentralized version of NeuroEvolution of Augmenting Topologies (NEAT). odNEAT is an algorithm for online evolution in groups of embodied agents such as robots. In odNEAT, agents have to solve the same task, either individually or collectively. While previous approaches to online evolution of neural controllers have been limited to the optimization of weights, odNEAT evolves both weights and network topology. We demonstrate odNEAT through a series of simulation-based experiments in which a group of e-puck-like robots must perform an aggregation task. Our results show that robots are capable of evolving effective aggregation strategies and that sustainable behaviours evolve quickly. We show that odNEAT approximates the performance of rtNEAT, a similar but centralized method. We also analyze the contribution of each algorithmic component to performance through a series of ablation studies. In the second approach, we extend our previous method and combine online evolution of weights and network topology (odNEAT) with neuromodulated learning. We demonstrate our method through a series of experiments in which a group of simulated robots must perform a dynamic concurrent foraging task. In this task, scattered food items periodically change their nutritive value or become poisonous. Our results show that when neuromodulated learning is employed, neural controllers are synthesized faster than by odNEAT alone. We demonstrate that the online evolutionary process is capable of generating controllers that adapt to the periodic task changes. We evaluate the performance both in a single-robot setup and in a multirobot setup. An analysis of the evolved networks shows that they are characterized by specialized modulatory neurons that exclusively regulate online learning in the output neurons.
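    The modulatory-neuron mechanism described above is often realized as a gated Hebbian rule: the weight change on an output synapse is the plain Hebbian term scaled by the activity of a modulatory neuron. This is a minimal sketch of that mechanism; the exact learning rule and parameters used in the dissertation may differ:

    ```python
    def neuromodulated_hebbian(w, pre, post, modulation, eta=0.1):
        """Return the updated weight. With the modulatory neuron silent
        (modulation == 0), the weight is frozen; a nonzero modulatory
        signal switches Hebbian plasticity on for this synapse."""
        return w + eta * modulation * pre * post

    print(neuromodulated_hebbian(0.5, 1.0, 1.0, modulation=0.0))  # → 0.5
    print(neuromodulated_hebbian(0.5, 1.0, 1.0, modulation=1.0))  # → 0.6
    ```

    Restricting such updates to the output neurons matches the observation that the evolved modulatory neurons exclusively regulate online learning there.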