
    Objective versus Non-Objective Search in Evolving Morphologically Robust Robot Controllers

    This study evaluates objective versus non-objective evolutionary search methods for behavior evolution in robot teams. The goal is to evaluate the morphological robustness of evolved controllers, where controllers are evolved for specific robot sensory-motor configurations (morphologies) but must continue to function as these morphologies degrade. Robots use artificial neural network controllers whose behavior evolution is directed by developmental neuro-evolution. To guide evolutionary controller design, we use objective (fitness-function) versus non-objective (novelty) search: the former optimizes for behavioral fitness and the latter for behavioral novelty. These methods are evaluated across varying robot morphologies and increasing task complexity. Results indicate that novelty search yields no benefits over objective search in terms of evolving morphologically robust controllers; that is, both novelty and objective search evolve team controllers that are morphologically robust given varying robot morphologies and increasing task complexity. Results thus suggest that behavioral diversity methods such as novelty search may not be suitable for generating robot behaviors that continue to function under changing robot morphologies, for example due to damaged or disabled sensors and actuators.
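
    The robustness test described above (evolve a controller for an intact morphology, then measure how its behavior holds up as sensors or actuators fail) can be expressed generically. The sketch below is only an illustration of that evaluation loop; the masking scheme and the names `degrade`, `run_task`, and `failure_masks` are assumptions, not the study's actual protocol.

```python
from statistics import mean
from typing import Callable, List, Sequence

def degrade(sensor_mask: List[bool], controller: Callable) -> Callable:
    """Wrap a controller so that failed sensors read zero (assumed failure model)."""
    def degraded(sensors: Sequence[float]) -> List[float]:
        masked = [s if ok else 0.0 for s, ok in zip(sensors, sensor_mask)]
        return controller(masked)
    return degraded

def morphological_robustness(controller: Callable,
                             failure_masks: List[List[bool]],
                             run_task: Callable) -> float:
    """Mean task performance of one evolved controller over a set of
    progressively degraded sensor configurations."""
    return mean(run_task(degrade(mask, controller)) for mask in failure_masks)
```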

    Is Novelty Search Good for Evolving Morphologically Robust Robot Controllers?

    This study evaluates comparative behavioral search methods for evolutionary controller design in robot teams, where the goal is to evaluate the morphological robustness of evolved controllers: controllers are evolved for specific robot sensory-motor configurations (morphologies) but must continue to function as these morphologies degrade. Robots use neural controllers whose behavior evolution is directed by developmental Neuro-Evolution (HyperNEAT). To guide evolutionary controller design, we use objective (fitness-function) versus non-objective (novelty) search: the former optimizes for behavioral fitness and the latter for behavioral novelty. These search methods are evaluated across varying robot morphologies and increasing task complexity. Results indicate that both novelty and objective search evolve team controllers (behaviors) that are morphologically robust given degrading robot morphologies and increasing task complexity. Results thus suggest that novelty search is not necessarily suitable for generating robot team behaviors that are robust to changes in robot morphologies (for example, due to damaged or disabled sensors and actuators).
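
    The objective-versus-novelty distinction used in this and the preceding study comes down to the selection criterion: objective search ranks candidates by task fitness, while novelty search ranks them by mean behavioral distance to their nearest neighbors in the population and an archive. The following is a minimal, generic sketch of that difference, not the authors' HyperNEAT implementation; the behavior descriptor, distance metric, and archive policy are assumptions.

```python
from typing import List, Sequence

def novelty(descriptor: Sequence[float],
            others: List[Sequence[float]],
            k: int = 15) -> float:
    """Mean Euclidean distance to the k nearest behavior descriptors.
    (If `descriptor` itself is in `others`, its zero distance is included;
    real implementations usually exclude it.)"""
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(descriptor, other)) ** 0.5
        for other in others
    )
    return sum(dists[:k]) / max(1, min(k, len(dists)))

def select(population: List[dict],
           archive: List[Sequence[float]],
           mode: str = "objective",
           n_parents: int = 10) -> List[dict]:
    """Rank candidates by task fitness (objective search) or by behavioral
    novelty (non-objective search) and keep the best n_parents."""
    if mode == "objective":
        key = lambda ind: ind["fitness"]
    else:
        pool = [ind["descriptor"] for ind in population] + list(archive)
        key = lambda ind: novelty(ind["descriptor"], pool)
    return sorted(population, key=key, reverse=True)[:n_parents]
```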

    A comparison of controller architectures and learning mechanisms for arbitrary robot morphologies

    The main question this paper addresses is: What combination of a robot controller and a learning method should be used if the morphology of the learning robot is not known in advance? Our interest is rooted in the context of morphologically evolving modular robots, but the question is also relevant in general for system designers interested in widely applicable solutions. We perform an experimental comparison of three controller-and-learner combinations: one approach where controllers are based on modelling animal locomotion (Central Pattern Generators, CPG) and the learner is an evolutionary algorithm; a completely different method using Reinforcement Learning (RL) with a neural network controller architecture; and a combination 'in-between', where controllers are neural networks and the learner is an evolutionary algorithm. We apply these three combinations to a test suite of modular robots and compare their efficacy, efficiency, and robustness. Surprisingly, the usual CPG-based and RL-based options are outperformed by the in-between combination, which is more robust and efficient than the other two setups.
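
    As a rough illustration of the CPG-plus-evolution option compared here, a sinusoidal oscillator per joint exposes a flat parameter vector that an evolutionary learner can mutate and select on. This is a hedged sketch with assumed parameter names and structure, not the paper's controller.

```python
import math
import random
from typing import List

class SineCPG:
    """One oscillator per joint: amplitude, frequency, and phase offset.
    The flat parameter vector (3 values per joint) is what an evolutionary
    learner would mutate and select on."""

    def __init__(self, params: List[float]):
        assert len(params) % 3 == 0
        self.params = params

    def joint_targets(self, t: float) -> List[float]:
        targets = []
        for i in range(0, len(self.params), 3):
            amp, freq, phase = self.params[i:i + 3]
            targets.append(amp * math.sin(2 * math.pi * freq * t + phase))
        return targets

def mutate(params: List[float], sigma: float = 0.1) -> List[float]:
    """Gaussian mutation: the basic variation operator of the evolutionary learner."""
    return [p + random.gauss(0.0, sigma) for p in params]

# Example: a 4-joint robot, random initial genome, joint targets at t = 0.5 s.
genome = [random.uniform(-1, 1) for _ in range(4 * 3)]
print(SineCPG(genome).joint_targets(0.5))
```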

    Evolving generalist controllers to handle a wide range of morphological variations

    Neuro-evolutionary methods have proven effective in addressing a wide range of tasks. However, the study of the robustness and generalisability of evolved artificial neural networks (ANNs) has remained limited. This has significant implications in fields like robotics, where such controllers are used in control tasks and unexpected morphological or environmental changes during operation can lead to failure if the ANN controllers cannot handle them. This paper proposes an algorithm that aims to enhance the robustness and generalisability of controllers by introducing morphological variations during the evolutionary process. As a result, it is possible to discover generalist controllers that can handle a wide range of morphological variations sufficiently well, without information about those morphologies or adaptation of their parameters. We perform an extensive experimental analysis in simulation that demonstrates the trade-off between specialist and generalist controllers. The results show that generalists are able to control a range of morphological variations at the cost of underperforming on a specific morphology relative to a specialist. This research addresses the limited understanding of robustness and generalisability in neuro-evolutionary methods and proposes a method to improve these properties.
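
    The core mechanism (scoring each candidate controller across a sampled set of morphological variations and aggregating the results) can be written compactly. In the sketch below, `simulate`, `mutate`, and the mean aggregation are assumptions standing in for whatever simulator, variation operator, and aggregation the paper actually uses.

```python
import random
from statistics import mean
from typing import Callable, List, Sequence

def generalist_fitness(controller,
                       morphologies: Sequence,
                       simulate: Callable) -> float:
    """Aggregate task performance across morphological variations.
    simulate(controller, morphology) is assumed to return a scalar fitness;
    the mean is one plausible aggregation (min or a quantile would also work)."""
    return mean(simulate(controller, m) for m in morphologies)

def evolve_generalist(population: List,
                      morphologies: Sequence,
                      simulate: Callable,
                      mutate: Callable,
                      generations: int = 100,
                      elite: int = 10):
    """Simple elitist loop: every candidate is scored on the whole set of
    morphological variations, so selection favors generalists."""
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda c: generalist_fitness(c, morphologies, simulate),
                        reverse=True)
        parents = ranked[:elite]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(len(population) - elite)]
    return max(population, key=lambda c: generalist_fitness(c, morphologies, simulate))
```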

    Lamarck's Revenge: Inheritance of Learned Traits Can Make Robot Evolution Better

    Evolutionary robot systems offer two principal advantages: an advanced way of developing robots through evolutionary optimization and a special research platform to conduct what-if experiments regarding questions about evolution. Our study sits at the intersection of these. We investigate the question: what if the 18th-century biologist Lamarck was not completely wrong, and individual traits learned during a lifetime could be passed on to offspring through inheritance? We research this issue through simulations with an evolutionary robot framework where morphologies (bodies) and controllers (brains) of robots are evolvable and robots can also improve their controllers through learning during their lifetime. Within this framework, we compare a Lamarckian system, where learned bits of the brain are inheritable, with a Darwinian system, where they are not. Analyzing simulations based on these systems, we obtain new insights about Lamarckian evolution dynamics and the interaction between evolution and learning. Specifically, we show that Lamarckism amplifies the emergence of 'morphological intelligence', the ability of a given robot body to acquire a good brain by learning, and we identify the source of this success: 'newborn' robots have a higher fitness because their inherited brains match their bodies better than those in a Darwinian system.
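
    The Lamarckian/Darwinian distinction reduces to one question in an evolution-plus-learning loop: are the parameters found by lifetime learning written back into the inherited genotype, or used only to score it? The schematic below illustrates that single switch; the function names and selection scheme are assumptions, not the paper's framework.

```python
from typing import Callable, List

def next_generation(population: List[dict],
                    learn: Callable,    # learn(body, brain) -> (learned_brain, fitness)
                    mutate: Callable,
                    lamarckian: bool) -> List[dict]:
    """One generation of an evolution-plus-lifetime-learning loop.

    Darwinian:  learning only affects the fitness used for selection;
                offspring inherit the pre-learning brain genotype.
    Lamarckian: the brain produced by learning is written back into the
                genotype and inherited by the offspring.
    """
    evaluated = []
    for ind in population:
        learned_brain, fitness = learn(ind["body"], ind["brain"])
        inherited = learned_brain if lamarckian else ind["brain"]
        evaluated.append({"body": ind["body"], "brain": inherited,
                          "fitness": fitness})
    parents = sorted(evaluated, key=lambda i: i["fitness"], reverse=True)
    parents = parents[:max(1, len(parents) // 2)]
    # Each parent produces two mutated offspring (bodies and brains both evolve).
    return [{"body": mutate(p["body"]), "brain": mutate(p["brain"])}
            for p in parents for _ in range(2)]
```

    In the Lamarckian setting the offspring inherit brains already adapted by learning, which is the mechanism behind the "newborns start fitter" result reported above.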

    The Environment and Body-Brain Complexity

    An open question for both natural and artificial evolutionary systems is how, and under what environmental and evolutionary conditions, complexity evolves. This study investigates the impact of increasingly complex task environments on the evolution of robot complexity, specifically the impact of evolving body-brain couplings on locomotion task performance, where robot evolution is directed by either body-brain exploration (novelty search) or objective-based (fitness function) evolutionary search. Results indicated that novelty search enabled the evolution of increased robot body-brain complexity and efficacy under specific environmental conditions. The key contribution is thus the demonstration that body-brain exploration is suitable for evolving robot complexity that yields high-fitness robots in specific environments.

    The Evolution of Complexity in Autonomous Robots

    Evolutionary robotics, the use of evolutionary algorithms to automate the production of autonomous robots, has been an active area of research for two decades. However, previous work in this domain has been limited by the simplicity of the evolved robots and of the task environments within which they are able to succeed. This dissertation aims to address these challenges by developing techniques for evolving more complex robots. Particular focus is given to methods that evolve not only the control policies of manually designed robots, but both the control policy and the physical form of the robot. These techniques are presented along with their application to investigating previously unexplored relationships between the complexity of evolving robots and the task environments within which they evolve.

    Engineering evolutionary control for real-world robotic systems

    Evolutionary Robotics (ER) is the field of study concerned with the application of evolutionary computation to the design of robotic systems. Two main issues have prevented ER from being applied to real-world tasks: scaling to complex tasks and the transfer of control to real-robot systems. Finding solutions to complex tasks is challenging for evolutionary approaches due to the bootstrap problem and deception. When the task goal is too difficult, the evolutionary process drifts in regions of the search space with equally low levels of performance and therefore fails to bootstrap. Furthermore, the search space tends to become rugged (deceptive) as task complexity increases, which can lead to premature convergence. Another prominent issue in ER is the reality gap. Behavioral control is typically evolved in simulation and only transferred to the real robotic hardware once a good solution has been found. Since simulation is an abstraction of the real world, the accuracy of the robot model and its interactions with the environment is limited. As a result, control evolved in a simulator tends to display lower performance in reality than in simulation. In this thesis, we present a hierarchical control synthesis approach that enables the use of ER techniques for complex tasks on real robotic hardware by mitigating the bootstrap problem, deception, and the reality gap. We recursively decompose a task into sub-tasks and synthesize control for each sub-task. The individual behaviors are then composed hierarchically. The possibility of incrementally transferring control as the controller is composed allows transferability issues to be addressed locally in the controller hierarchy. Our approach features hybridity, allowing different control synthesis techniques to be combined. We demonstrate our approach in a series of tasks that go beyond the complexity of tasks where ER has previously been successfully applied. We further show that hierarchical control can be applied in single-robot and multi-robot systems. Given our long-term goal of enabling the application of ER techniques to real-world tasks, we systematically validate our approach on real robotic hardware. For one of the demonstrations in this thesis, we designed and built a swarm robotic platform, and we show the first successful transfer of evolved, hierarchical control to a swarm of robots outside of controlled laboratory conditions. This work was supported by the Portuguese Foundation for Science and Technology (Fundação para a Ciência e Tecnologia) under grants SFRH/BD/76438/2011 and EXPL/EEI-AUT/0329/2013, and by Instituto de Telecomunicações under grant UID/EEA/50008/2013.
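
    The hierarchical decomposition described in this thesis can be pictured as a tree of behaviors in which each internal node arbitrates among the behaviors of its children, and each node can be synthesized and transferred independently. The sketch below is a minimal illustration of such composition; the class names, the selector-based arbitration, and the light/obstacle example are assumptions rather than the thesis's actual architecture.

```python
from typing import Callable, List, Sequence

class Behavior:
    """Leaf behavior: maps sensor readings to actuator commands.
    Each leaf can be synthesized separately (evolved, hand-coded, ...)."""
    def __init__(self, policy: Callable[[Sequence[float]], List[float]]):
        self.policy = policy

    def act(self, sensors: Sequence[float]) -> List[float]:
        return list(self.policy(sensors))

class Arbitrator(Behavior):
    """Internal node: picks one child behavior based on the sensor state.
    Composing arbitrators over leaves yields the controller hierarchy."""
    def __init__(self, children: List[Behavior],
                 selector: Callable[[Sequence[float]], int]):
        self.children = children
        self.selector = selector

    def act(self, sensors: Sequence[float]) -> List[float]:
        return self.children[self.selector(sensors)].act(sensors)

# Example: steer toward a light source unless an obstacle is close.
go_to_light = Behavior(lambda s: [s[0], -s[0]])   # turn toward the light reading
avoid = Behavior(lambda s: [-1.0, 1.0])           # turn away from the obstacle
controller = Arbitrator([go_to_light, avoid],
                        selector=lambda s: 1 if s[1] > 0.8 else 0)
print(controller.act([0.3, 0.9]))   # obstacle near -> avoid behavior is chosen
```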

    Evolving Gaits for Damage Control in a Hexapod Robot

    Autonomous robots are increasingly used in remote and hazardous environments, where damage to sensory-actuator systems cannot be easily repaired. Such robots must therefore have controllers that continue to function effectively given unexpected malfunctions and damage to the robot's morphology. This study applies the Intelligent Trial and Error (IT&E) algorithm to adapt hexapod robot control to various leg failures and demonstrates that the IT&E map size is a critical parameter influencing adaptive task performance. We evaluate robot adaptation to multiple leg failures with two different map sizes in simulation and validate the evolved controllers on a physical hexapod robot. Results demonstrate a trade-off between adapted gait speed and adaptation duration, dependent on adaptation task complexity (the leg damage incurred), where map size is crucial for generating the behavioural diversity required for adaptation.
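
    For context, IT&E first builds a behavior-performance map offline (via MAP-Elites) and then, after damage, searches that map online for a gait that still works; the map-size parameter studied here sets the resolution of that map. The sketch below covers only the offline map-building step, with an assumed two-dimensional behavior descriptor and assumed `evaluate`, `random_genome`, and `mutate` callables.

```python
import random
from typing import Callable, Dict, Tuple

def build_map(evaluate: Callable,      # evaluate(genome) -> ((d0, d1) in [0,1]^2, fitness)
              random_genome: Callable,
              mutate: Callable,
              map_size: int = 10,      # cells per descriptor dimension (the studied parameter)
              iterations: int = 10_000) -> Dict[Tuple[int, int], dict]:
    """MAP-Elites-style map building: keep the best genome found for each cell
    of the discretized behavior-descriptor space."""
    archive: Dict[Tuple[int, int], dict] = {}
    for i in range(iterations):
        if archive and i > 100:
            parent = random.choice(list(archive.values()))["genome"]
            genome = mutate(parent)
        else:
            genome = random_genome()
        (d0, d1), fitness = evaluate(genome)
        cell = (min(int(d0 * map_size), map_size - 1),
                min(int(d1 * map_size), map_size - 1))
        if cell not in archive or fitness > archive[cell]["fitness"]:
            archive[cell] = {"genome": genome, "fitness": fitness}
    return archive
```

    A larger `map_size` stores more behavioral diversity to fall back on after damage, at the price of a bigger map to fill and search, which mirrors the speed-versus-duration trade-off reported above.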