
    Evolving a Behavioral Repertoire for a Walking Robot

    Numerous algorithms have been proposed to allow legged robots to learn to walk. However, the vast majority of these algorithms are devised to learn to walk in a straight line, which is not sufficient to accomplish any real-world mission. Here we introduce the Transferability-based Behavioral Repertoire Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that simultaneously discovers several hundred simple walking controllers, one for each possible direction. By taking advantage of solutions that are usually discarded by evolutionary processes, TBR-Evolution is substantially faster than independently evolving each controller. Our technique relies on two methods: (1) novelty search with local competition, which searches for both high-performing and diverse solutions, and (2) the transferability approach, which combines simulations and real tests to evolve controllers for a physical robot. We evaluate this new technique on a hexapod robot. Results show that with only a few dozen short experiments performed on the robot, the algorithm learns a repertoire of controllers that allows the robot to reach every point in its reachable space. Overall, TBR-Evolution opens a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.
    Comment: 33 pages; Evolutionary Computation Journal 201
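    The abstract's first ingredient, novelty search with local competition, can be sketched in a few lines: each candidate carries a behavior descriptor (assumed here to be the 2-D endpoint the robot reaches) and is scored both on how far it lies from its nearest archived neighbors and on what fraction of those neighbors it outperforms. The function names and the k-nearest-neighbor scoring below are illustrative assumptions, not the paper's exact implementation.

```python
def behavior_distance(b1, b2):
    """Euclidean distance between behavior descriptors
    (assumed here to be 2-D endpoints reached by the robot)."""
    return sum((x - y) ** 2 for x, y in zip(b1, b2)) ** 0.5

def novelty(candidate, archive, k=5):
    """Novelty score: mean distance to the k nearest archived behaviors."""
    if not archive:
        return float("inf")
    dists = sorted(behavior_distance(candidate["behavior"], a["behavior"])
                   for a in archive)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

def local_competition(candidate, archive, k=5):
    """Local-competition score: fraction of the k nearest neighbors
    whose fitness the candidate beats."""
    if not archive:
        return 1.0
    neighbors = sorted(
        archive,
        key=lambda a: behavior_distance(candidate["behavior"], a["behavior"]),
    )[:k]
    return sum(candidate["fitness"] > n["fitness"] for n in neighbors) / len(neighbors)
```

    A repertoire-building loop would then retain candidates that score well on both criteria, which is how solutions walking in directions other than straight ahead are kept rather than discarded.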

    Incremental Evolution of Target-Following Neuro-controllers for Flapping-Wing Animats

    Using an incremental multi-objective evolutionary algorithm and the ModNet encoding, we generated working neuro-controllers for target-following behavior in a simulated flapping-wing animat. To this end, we evolved tail controllers that were combined with two previously generated closed-loop wing-beat controllers able to secure straight flight at constant altitude and speed. The corresponding results demonstrate that a wing-beat strategy that consists in continuously adapting the twist of the external wing panel leads to better maneuvering capabilities than another strategy that adapts the beating amplitude. Such differences suggest that further improvements in flying control should rely on some sort of automatic incremental evolution procedure rather than on any hand-designed decomposition of the problem.

    Resilient Opportunistic On-line Global Optimization

    Traditional off-line global optimization is non-resilient and non-opportunistic. That is, traditional global optimization is unresponsive to small perturbations of the objective function that require a small or large change in the optimizer. On-line optimization methods that are more resilient and opportunistic than their off-line counterparts typically consist of the computationally expensive sequential repetition of off-line techniques. A novel approach to on-line global optimization is to utilize the theory of evolutionary generation systems to develop a technique that is resilient, opportunistic, and inexpensive. The theory of evolutionary generation systems utilizes the probabilistic sequential selection of a candidate optimizer from two possible candidates, basing the selection on the ratio of the fitness values of the candidates and a parameter called the level of selectivity. Using time-homogeneous, irreducible, ergodic Markov chains to model a sequence of local, and hence inexpensive, decisions, this paper proves that such decisions result in the resilient and opportunistic determination of a candidate optimizer for a given objective function. In the limit as the level of selectivity tends to infinity, the theory guarantees that the candidate optimizer is a global optimizer. The optimization of flapping wing gaits illustrates the theory.
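    The selection kernel described above — choose between two candidates with a probability driven by the ratio of their fitness values and the level of selectivity — can be sketched as follows. The power-law form of the probability is an assumption for illustration (any rule in which the fitter candidate's probability approaches 1 as the selectivity grows would match the abstract); the paper's exact rule may differ, and fitness is assumed positive and to be maximised.

```python
import random

def select(a, b, fitness, selectivity, rng=random.random):
    """Probabilistically pick one of two candidates. With the assumed rule
    p(a) = f(a)**s / (f(a)**s + f(b)**s), the choice is a coin flip at
    selectivity s = 0 and becomes deterministic in favour of the fitter
    candidate as s -> infinity."""
    fa, fb = fitness(a), fitness(b)
    p_a = fa ** selectivity / (fa ** selectivity + fb ** selectivity)
    return a if rng() < p_a else b
```

    Repeating such pairwise decisions over time yields the Markov chain of local, inexpensive choices that the paper analyses.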

    Evolutionary robotics in high altitude wind energy applications

    Recent years have seen the development of wind energy conversion systems that can exploit the superior wind resource that exists at altitudes above current wind turbine technology. One class of these systems incorporates a flying wing tethered to the ground which drives a winch at ground level. The wings often resemble sports kites, being composed of a combination of fabric and stiffening elements. Such wings are subject to load-dependent deformation, which makes them particularly difficult to model and control. Here we apply the techniques of evolutionary robotics, i.e. the evolution of neural network controllers using genetic algorithms, to the task of controlling a steerable kite. We introduce a multibody kite simulation that is used in an evolutionary process in which the kite is subject to deformation. We demonstrate how discrete-time recurrent neural networks that are evolved to maximise line tension fly the kite in repeated looping trajectories similar to those seen using other methods. We show that these controllers are robust to limited environmental variation but show poor generalisation and occasional failure even after extended evolution. We show that continuous-time recurrent neural networks (CTRNNs) can be evolved that are capable of flying appropriate repeated trajectories even when the lengths of the flying lines are changing. We also show that CTRNNs can be evolved that stabilise kites with a wide range of physical attributes at a given position in the sky, and we systematically add noise to the simulated task in order to maximise the transferability of the behaviour to a real-world system. We demonstrate how the difficulty of the task must be increased in small increments during the evolutionary process to deal with this extreme variability. We describe the development of a real-world testing platform on which the evolved neurocontrollers can be tested.
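    The CTRNNs mentioned above follow the standard continuous-time recurrent neural network dynamics, tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i, integrated numerically. A minimal forward-Euler step is sketched below; the parameter names and the use of plain Python lists are illustrative assumptions, not the implementation used in the work.

```python
import math

def ctrnn_step(y, w, theta, tau, inputs, dt=0.01):
    """One forward-Euler step of the CTRNN dynamics:
       tau[i] * dy[i]/dt = -y[i] + sum_j w[j][i] * sigma(y[j] + theta[j]) + inputs[i]
    where sigma is the logistic function."""
    sigma = lambda x: 1.0 / (1.0 + math.exp(-x))
    act = [sigma(yj + tj) for yj, tj in zip(y, theta)]  # biased firing rates
    out = []
    for i in range(len(y)):
        net = sum(w[j][i] * act[j] for j in range(len(y)))  # recurrent input
        out.append(y[i] + dt * (-y[i] + net + inputs[i]) / tau[i])
    return out
```

    In a kite-control setting, sensor readings (e.g. line tension or kite position) would enter through `inputs`, while the weights `w`, biases `theta`, and time constants `tau` are the parameters shaped by the genetic algorithm.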