1,057 research outputs found

    Evolution of hybrid robotic controllers for complex tasks

    We propose an approach to the synthesis of hierarchical control systems comprising both evolved and manually programmed control for autonomous robots. We recursively divide the goal task into sub-tasks until a solution can be evolved or until a solution can easily be programmed by hand. Hierarchical composition of behavior allows us to overcome the fundamental challenges that typically prevent evolutionary robotics from being applied to complex tasks: bootstrapping the evolutionary process, avoiding deception, and successfully transferring control evolved in simulation to real robotic hardware. We demonstrate the proposed approach by synthesizing control systems for two tasks whose complexity is beyond the state of the art in evolutionary robotics. The first task is a rescue task in which all behaviors are evolved. The second task is a cleaning task in which evolved behaviors are combined with a manually programmed behavior that enables the robot to open doors in the environment. We demonstrate incremental transfer of evolved control from simulation to real robotic hardware, and we show how our approach allows for the reuse of behaviors in different tasks.
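
    A minimal sketch (in Python, with hypothetical names) of the hierarchical decomposition described above: leaf behaviors are either evolved or hand-programmed, and composite nodes use an evolved selector to arbitrate among their children. This is an illustration of the idea, not the authors' implementation.

```python
class Behavior:
    """Leaf behavior: maps sensor readings to actuator commands."""
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy          # callable: sensors -> actuator commands

    def act(self, sensors):
        return self.policy(sensors)


class Arbitrator:
    """Composite behavior: a (possibly evolved) selector picks which child runs."""
    def __init__(self, name, children, selector):
        self.name = name
        self.children = children      # list of Behavior / Arbitrator nodes
        self.selector = selector      # callable: sensors -> child index

    def act(self, sensors):
        return self.children[self.selector(sensors)].act(sensors)


# Toy composition: one evolved leaf and one hand-programmed leaf under an
# evolved arbitrator (the lambdas stand in for evolved/programmed policies).
evolved_explore = Behavior("explore", policy=lambda s: (0.8, 0.6))
programmed_open_door = Behavior("open_door", policy=lambda s: (0.0, 0.0))
root = Arbitrator("clean_room",
                  [evolved_explore, programmed_open_door],
                  selector=lambda s: 1 if s.get("at_door") else 0)

print(root.act({"at_door": True}))    # -> (0.0, 0.0)
```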

    Hierarchical evolution of robotic controllers for complex tasks

    Evolutionary robotics is a methodology that allows robots to learn how to perform a task by automatically fine-tuning their “brain” (controller). Evolution is one of the most radical and open-ended forms of learning, but it has proven difficult to apply to tasks that require complex behavior (known as the bootstrapping problem). Controllers are usually evolved through computer simulation, and differences between real sensors and actuators and their simulated implementations are unavoidable. These differences prevent evolved controllers from crossing the reality gap, that is, from achieving performance on real robotic hardware comparable to that obtained in simulation. In this dissertation, we propose an approach to overcome both the bootstrapping problem and the reality gap. We demonstrate how a controller can be evolved for a complex task through hierarchical evolution of behaviors. We further experiment with combining evolutionary techniques and preprogrammed behaviors. We demonstrate our approach in a task in which a robot has to find and rescue a teammate. The robot starts in a room with obstacles, and the teammate is located in a double T-maze connected to the room. We divide the rescue task into different sub-tasks, evolve controllers for each sub-task, and then combine the resulting controllers in a bottom-up fashion through additional evolutionary runs. The final controller achieved a task completion rate of more than 90% both in simulation and on real robotic hardware. The main contributions of our study are the introduction of a novel methodology for evolving controllers for complex tasks, and its demonstration on real robotic hardware.
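
    The bottom-up composition step can be pictured as evolving an arbitrating controller on top of frozen sub-task controllers, with fitness measured as the completion rate over simulated trials. The sketch below is a generic evolutionary loop with a stub evaluator; the genome encoding, the parameters, and the simulator are assumptions, not the dissertation's code.

```python
import random

def completion_rate(genome, trials=20):
    """Stub fitness: fraction of simulated rescue trials completed.
    A real evaluation would run the composed controller in the simulator."""
    p_success = max(0.0, min(1.0, sum(genome) / len(genome)))
    return sum(random.random() < p_success for _ in range(trials)) / trials

def evolve_arbitrator(genome_len=10, pop_size=20, generations=50):
    """Plain generational EA with truncation selection and Gaussian mutation."""
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=completion_rate, reverse=True)
        parents = pop[: pop_size // 4]
        pop = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
               for _ in range(pop_size)]
    return max(pop, key=completion_rate)

best = evolve_arbitrator()
print("estimated completion rate:", completion_rate(best, trials=100))
```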

    Engineering evolutionary control for real-world robotic systems

    Evolutionary Robotics (ER) is the field of study concerned with the application of evolutionary computation to the design of robotic systems. Two main issues have prevented ER from being applied to real-world tasks, namely scaling to complex tasks and the transfer of control to real-robot systems. Finding solutions to complex tasks is challenging for evolutionary approaches due to the bootstrap problem and deception. When the task goal is too difficult, the evolutionary process will drift in regions of the search space with equally low levels of performance and therefore fail to bootstrap. Furthermore, the search space tends to become rugged (deceptive) as task complexity increases, which can lead to premature convergence. Another prominent issue in ER is the reality gap. Behavioral control is typically evolved in simulation and only transferred to the real robotic hardware once a good solution has been found. Since simulation is an abstraction of the real world, the accuracy of the robot model and of its interactions with the environment is limited. As a result, control evolved in a simulator tends to display lower performance in reality than in simulation. In this thesis, we present a hierarchical control synthesis approach that enables the use of ER techniques for complex tasks on real robotic hardware by mitigating the bootstrap problem, deception, and the reality gap. We recursively decompose a task into sub-tasks, and synthesize control for each sub-task. The individual behaviors are then composed hierarchically. The possibility of incrementally transferring control as the controller is composed allows transferability issues to be addressed locally in the controller hierarchy. Our approach features hybridity, allowing different control synthesis techniques to be combined. We demonstrate our approach in a series of tasks that go beyond the complexity of tasks to which ER has been successfully applied. We further show that hierarchical control can be applied in single-robot systems and in multi-robot systems. Given our long-term goal of enabling the application of ER techniques to real-world tasks, we systematically validate our approach on real robotic hardware. For one of the demonstrations in this thesis, we designed and built a swarm robotic platform, and we show the first successful transfer of evolved and hierarchical control to a swarm of robots outside of controlled laboratory conditions. This work has been supported by the Portuguese Foundation for Science and Technology (Fundação para a Ciência e Tecnologia) under the grants SFRH/BD/76438/2011 and EXPL/EEI-AUT/0329/2013, and by Instituto de Telecomunicações under the grant UID/EEA/50008/2013.
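
    The incremental transfer described above can be read as a per-node check while the hierarchy is composed bottom-up: a behavior is kept only once its real-robot performance stays close to its simulated performance, and is otherwise re-synthesized locally. The sketch below only illustrates that control flow; the threshold and the evaluation hooks are assumptions.

```python
def transfers_well(behavior, sim_eval, real_eval, max_drop=0.15):
    """True if the performance loss from simulation to reality is small."""
    return sim_eval(behavior) - real_eval(behavior) <= max_drop

def compose_incrementally(hierarchy_levels, sim_eval, real_eval, resynthesize):
    """Walk the hierarchy bottom-up, fixing transferability issues locally."""
    for level in hierarchy_levels:               # leaves first, root last
        for i, behavior in enumerate(level):
            while not transfers_well(behavior, sim_eval, real_eval):
                behavior = resynthesize(behavior)    # e.g. re-evolve or reprogram
            level[i] = behavior
    return hierarchy_levels

# Toy usage: a single behavior whose reality gap halves on each re-synthesis.
levels = [[{"gap": 0.3}]]
fixed = compose_incrementally(
    levels,
    sim_eval=lambda b: 0.95,
    real_eval=lambda b: 0.95 - b["gap"],
    resynthesize=lambda b: {"gap": b["gap"] / 2},
)
print(fixed)                                      # -> [[{'gap': 0.15}]]
```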

    Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents

    Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but are much faster (e.g. hours vs. days) because they parallelize better. However, many RL problems require directed exploration because they have reward functions that are sparse or deceptive (i.e. contain local optima), and it is unknown how to encourage such exploration with ES. Here we show that algorithms that have been invented to promote directed exploration in small-scale evolved neural networks via populations of exploring agents, specifically novelty search (NS) and quality diversity (QD) algorithms, can be hybridized with ES to improve its performance on sparse or deceptive deep RL tasks, while retaining scalability. Our experiments confirm that the resultant new algorithms, NS-ES and two QD algorithms, NSR-ES and NSRA-ES, avoid local optima encountered by ES to achieve higher performance on Atari and simulated robots learning to walk around a deceptive trap. This paper thus introduces a family of fast, scalable algorithms for reinforcement learning that are capable of directed exploration. It also adds this new family of exploration algorithms to the RL toolbox and raises the interesting possibility that analogous algorithms with multiple simultaneous paths of exploration might also combine well with existing RL algorithms outside ES
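
    As a rough illustration of the exploration signal these hybrids add to plain ES: novelty can be computed as the mean distance from an agent's behavior characterization to its k nearest neighbors in an archive, and then blended with the task reward. The sketch below is a simplified, NSR-ES-style blend; the actual algorithms combine normalized reward and novelty in the ES gradient estimate, and NSRA-ES adapts the weight online.

```python
import numpy as np

def novelty(bc, archive, k=10):
    """Mean Euclidean distance from behavior characterization bc
    to its k nearest neighbors in the archive."""
    dists = np.linalg.norm(np.asarray(archive) - np.asarray(bc), axis=1)
    return float(np.sort(dists)[:k].mean())

def mixed_fitness(reward, bc, archive, w=0.5):
    """Fixed-weight blend of task reward and novelty (NSR-ES style);
    w = 1 recovers reward-only ES, w = 0 recovers pure novelty search."""
    return w * reward + (1.0 - w) * novelty(bc, archive)

# Toy usage with 2-D behavior characterizations (e.g. an agent's final x, y).
archive = np.random.rand(100, 2)
print(mixed_fitness(reward=1.3, bc=[0.2, 0.9], archive=archive))
```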

    An experimental study on evolutionary reactive behaviors for mobile robots navigation

    Mobile robot navigation and obstacle avoidance in an unknown, static environment are analyzed in this paper. Guided by position sensors, artificial neural network (ANN) based controllers establish the desired trajectory between the current position and a target point. Evolutionary algorithms were used to choose the best controller. This approach, known as Evolutionary Robotics (ER), commonly resorts to very simple ANN architectures; although these include temporal processing, most do not take the learned experience into account in the controller's evolution. The ER research presented in this article therefore focuses on the specification and testing of ANN-based controllers when genetic mutations are performed from one generation to another. Discrete-Time Recurrent Neural Network based controllers were tested in two variants: plastic neural networks (PNN) and standard feedforward networks (FFNN). The way in which evolution was performed was also analyzed. As a result, controlled mutation does not exhibit major advantages over non-controlled mutation, showing that diversity is more powerful than controlled adaptation.
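
    A hedged sketch of the two controller variants compared here: a fixed-weight feedforward network (FFNN) and a plastic variant (PNN) whose weights are adjusted online by a simple Hebbian rule. The sizes, activation, and learning rate are illustrative choices, not the paper's exact setup.

```python
import numpy as np

class FFNN:
    """Fixed-weight feedforward controller: sensors -> motor commands."""
    def __init__(self, n_in, n_out, rng):
        self.w = rng.normal(0.0, 0.5, (n_out, n_in))

    def step(self, sensors):
        return np.tanh(self.w @ sensors)          # motor commands in [-1, 1]

class PlasticNN(FFNN):
    """Plastic variant: weights change online via a Hebbian rule."""
    def __init__(self, n_in, n_out, rng, eta=0.01):
        super().__init__(n_in, n_out, rng)
        self.eta = eta                            # Hebbian learning rate

    def step(self, sensors):
        out = super().step(sensors)
        self.w += self.eta * np.outer(out, sensors)   # strengthen co-active links
        return out

rng = np.random.default_rng(0)
controller = PlasticNN(n_in=8, n_out=2, rng=rng)
print(controller.step(rng.uniform(0.0, 1.0, 8)))  # e.g. left/right wheel speeds
```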

    Automated Maze Robot

    An autonomous maze robot is a robot that can solve a linear maze on its own. The aim of this project is to develop an autonomous robot for navigation in an unknown maze environment. The problems addressed in this project are the problems of autonomous robot navigation and of navigating in an unknown maze environment. The methodology used in this project is the prototyping methodology. The findings of this project are: the maze that is used, a linear maze; the robot that is used, the Pololu 3pi robot; the algorithm chosen for implementation, the Wall Following Algorithm; the implementation of the project, i.e. how the robot operates; the tests that were run, namely the Fault Injection Test, the Non-Functional Test and the Integration Test; and the test results, the success rate and the failure rate. At the end of this report, the author concludes the project and explains what can be done for expansion and continuation.
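
    The Wall Following Algorithm mentioned above can be illustrated with the left-hand rule on a grid maze: at every cell, prefer turning left, then going straight, then right, then turning back. The grid encoding and start/goal below are assumptions for illustration; the Pololu 3pi would act on its reflectance sensor readings rather than a stored map.

```python
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]          # north, east, south, west

def wall_follow(maze, start, goal, heading=1, max_steps=1000):
    """Left-hand wall following on a grid (0 = open cell, 1 = wall)."""
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        for turn in (-1, 0, 1, 2):                 # left, straight, right, back
            h = (heading + turn) % 4
            nxt = (pos[0] + DIRS[h][0], pos[1] + DIRS[h][1])
            if maze[nxt[0]][nxt[1]] == 0:
                heading, pos = h, nxt
                path.append(pos)
                break
    return None                                    # goal not reached in time

maze = [[1, 1, 1, 1, 1],
        [1, 0, 0, 0, 1],
        [1, 1, 1, 0, 1],
        [1, 1, 1, 0, 1],
        [1, 1, 1, 1, 1]]
print(wall_follow(maze, start=(1, 1), goal=(3, 3)))
```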

    Fusing novelty and surprise for evolving robot morphologies

    Traditional evolutionary algorithms tend to converge to a single good solution, which can limit their chance of discovering more diverse and creative outcomes. Divergent search, on the other hand, aims to counter convergence to local optima by avoiding selection pressure towards the objective. Forms of divergent search such as novelty or surprise search have proven to be beneficial for both the efficiency and the variety of the solutions obtained in deceptive tasks. Importantly for this paper, early results in maze navigation have shown that combining novelty and surprise search yields an even more effective search strategy due to their orthogonal nature. Motivated by the largely unexplored potential of coupling novelty and surprise as a search strategy, in this paper we investigate how fusing the two can affect the evolution of soft robot morphologies. We test the capacity of the combined search strategy against objective, novelty, and surprise search, by comparing their efficiency and robustness, and the variety of robots they evolve. Our key results demonstrate that novelty-surprise search is generally more efficient and robust across eight different resolutions. Further, surprise search explores the space of robot morphologies more broadly than any other algorithm examined.
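
    As a sketch of how the two divergent signals can be fused: novelty scores distance to the nearest archived behaviors, while surprise scores deviation from where behaviors were predicted to go. The linear prediction from the last two generations and the equal weighting below are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def novelty(bc, archive, k=15):
    """Mean distance to the k nearest behavior characterizations in the archive."""
    d = np.linalg.norm(archive - bc, axis=1)
    return float(np.sort(d)[:k].mean())

def surprise(bc, prev_gen, prev_prev_gen):
    """Deviation from a linear extrapolation of the last two generations."""
    predicted = 2.0 * prev_gen.mean(axis=0) - prev_prev_gen.mean(axis=0)
    return float(np.linalg.norm(bc - predicted))

def fused_score(bc, archive, prev_gen, prev_prev_gen, weight=0.5):
    """Equal-weight fusion of the novelty and surprise signals."""
    return (weight * novelty(bc, archive)
            + (1.0 - weight) * surprise(bc, prev_gen, prev_prev_gen))

# Toy usage with 3-D behavior characterizations (e.g. morphology descriptors).
rng = np.random.default_rng(1)
archive, g1, g2 = rng.random((200, 3)), rng.random((30, 3)), rng.random((30, 3))
print(fused_score(rng.random(3), archive, g1, g2))
```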