26 research outputs found

    Genetic programming based automatic gait generation for quadruped robots

    This paper introduces a new approach to developing a fast gait for a quadruped robot using genetic programming (GP). Several recent approaches have focused on genetic algorithms (GAs) to generate a gait automatically and have shown significant improvements over previous results. Most current GA-based approaches use pre-selected parameters, but it is difficult to select appropriate parameters for gait optimization. To overcome these problems of the GA-based approach, we propose an efficient approach that optimizes joint angle trajectories using genetic programming. Our GP-based method obtained much better results than GA-based approaches in experiments with the Sony AIBO ERS-7 in the Webots environment. An elite archive mechanism (EAM) was introduced to prevent premature convergence in GP and has shown improvements.
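
    As a rough illustration of the elite archive idea mentioned above (a sketch only, not the authors' implementation; class and parameter names are assumptions), an archive can retain the best individuals seen so far and periodically reinject a few of them into the GP population to counter premature convergence:

    import random

    class EliteArchive:
        """Keeps the best (fitness, individual) pairs seen so far."""

        def __init__(self, capacity=20):
            self.capacity = capacity
            self.members = []                      # list of (fitness, individual)

        def update(self, population, fitnesses):
            # Merge current elites with the new generation and keep the best.
            pool = self.members + list(zip(fitnesses, population))
            pool.sort(key=lambda pair: pair[0], reverse=True)
            self.members = pool[:self.capacity]

        def reinject(self, population, rate=0.1):
            # Replace a small fraction of the population with archived elites.
            n = max(1, int(rate * len(population)))
            for i in random.sample(range(len(population)), n):
                population[i] = random.choice(self.members)[1]
            return population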

    How robot morphology and training order affect the learning of multiple behaviors

    Automatically synthesizing behaviors for robots with articulated bodies poses a number of challenges beyond those encountered when generating behaviors for simpler agents. One such challenge is how to optimize a controller that can orchestrate dynamic motion of different parts of the body at different times. This paper presents an incremental shaping method that addresses this challenge: it trains a controller to first coordinate a robot's leg motions to achieve directed locomotion toward an object, and then to coordinate gripper motion to achieve lifting once the object is reached. It is shown that success is dependent on the order in which these behaviors are learned, and that despite the fact that one robot can master these behaviors better than another with a different morphology, this learning order is invariant across the two robot morphologies investigated here. This suggests that aspects of the task environment, learning algorithm, or controller dictate learning order more than the choice of morphology.
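
    A minimal sketch of the two-stage shaping setup, assuming a simple hill climber over controller weights and stub evaluation functions standing in for the robot simulator (all names here are illustrative, not the paper's code):

    import random

    def evaluate_locomotion(ctrl):     # stub: distance covered toward the object
        return random.random()

    def evaluate_lifting(ctrl):        # stub: height the object is lifted
        return random.random()

    def hill_climb(ctrl, evaluate, success, max_evals=1000, sigma=0.1):
        best, best_f = ctrl[:], evaluate(ctrl)
        for _ in range(max_evals):
            child = [w + random.gauss(0, sigma) for w in best]
            f = evaluate(child)
            if f >= best_f:
                best, best_f = child, f
            if best_f >= success:
                break                  # success criterion met, stop this stage
        return best

    # Training order is the experimental variable: the second stage resumes
    # from the controller produced by the first.
    ctrl = [random.uniform(-1, 1) for _ in range(32)]
    ctrl = hill_climb(ctrl, evaluate_locomotion, success=0.95)
    ctrl = hill_climb(ctrl, evaluate_lifting, success=0.95)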

    Evolving Monolithic Robot Controllers through Incremental Shaping

    Evolutionary robotics has been shown to be an effective technique for generating robot behaviors that are difficult to derive analytically from the robot's mechanics and task environment. Moreover, augmenting evolutionary algorithms with environmental scaffolding via an incremental shaping method makes it possible to evolve controllers for complex tasks that would otherwise be infeasible. In this paper we present a summary of two recent publications in the evolutionary robotics literature, demonstrating how these methods can be used to evolve robot controllers for non-trivial tasks and what the obstacles to evolving controllers in this way are, and we present a novel research question that can be investigated within this framework.
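
    One way to picture the environmental scaffolding mentioned here is as a schedule that makes the task harder each time the current controller meets a success criterion in the easier setting; the sketch below assumes a hypothetical target-distance parameter and a placeholder simulate() call rather than any interface from the cited papers:

    import random

    def simulate(ctrl, target_distance):            # stub fitness in [0, 1]
        return random.random()

    def evolve_with_scaffold(ctrl, distances=(0.5, 1.0, 2.0, 4.0),
                             success=0.9, evals_per_stage=500):
        for d in distances:                          # easy -> hard
            best_f = simulate(ctrl, d)
            for _ in range(evals_per_stage):
                child = [w + random.gauss(0, 0.05) for w in ctrl]
                f = simulate(child, d)
                if f >= best_f:
                    ctrl, best_f = child, f
                if best_f >= success:
                    break                            # scaffold up to the harder setting
        return ctrl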

    Evolutionary Robotics


    La computación evolutiva y sus paradigmas [Evolutionary computation and its paradigms]

    What is known as evolutionary computation has its origin in the principles of natural selection described by Charles Darwin in his book "The Origin of Species". Taking nature, and each of the processes embedded in it, as a reference has been a source for solving problems, above all in computing and in complex optimization or calculation processes. Evolutionary computation can be considered a collection of techniques that address different problems: Evolutionary Programming, Evolution Strategies, Genetic Algorithms (GA), and, from the 1990s on, Genetic Programming. Evolutionary computation is not limited to solving the problem itself; rather, it includes the learning process, and for that reason it is related to Artificial Intelligence. This article presents a survey of the current state of evolutionary computation, viewed from its three main lines of work.

    Novelty search creates robots with general skills for exploration

    Novelty Search, a new type of Evolutionary Algorithm, has shown much promise in the last few years. Instead of selecting for phenotypes that are closer to an objective, Novelty Search assigns rewards based on how different the phenotypes are from those already generated. A common criticism of Novelty Search is that it is effectively random or exhaustive search because it tries solutions in an unordered manner until a correct one is found. Its creators respond that over time Novelty Search accumulates information about the environment in the form of skills relevant to reaching uncharted territory, but to date no evidence for that hypothesis has been presented. In this paper we test that hypothesis by transferring robots evolved under Novelty Search to new environments (here, mazes) to see if the skills they have acquired generalize. Three lines of evidence support the claim that Novelty Search agents do indeed learn general exploration skills. First, robot controllers evolved via Novelty Search in one maze and then transferred to a new maze explore significantly more of the new environment than non-evolved (randomly generated) agents. Second, a Novelty Search process to solve the new mazes works significantly faster when seeded with the transferred controllers than with randomly generated ones. Third, no significant difference exists when comparing two types of transferred agents: those evolved in the original maze under (1) Novelty Search vs. (2) a traditional, objective-based fitness function. The evidence gathered suggests that, like traditional Evolutionary Algorithms with objective-based fitness functions, Novelty Search is not a random or exhaustive search process; instead, it accumulates information about the environment, resulting in phenotypes possessing the skills needed to explore their world.
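
    For concreteness, the novelty score behind Novelty Search is typically the mean distance from an individual's behavior descriptor (e.g., the robot's final position in the maze) to its k nearest neighbors among the current population and an archive of past behaviors; the sketch below is a generic illustration with assumed parameter values, not code from the paper:

    import math

    def novelty(behavior, others, k=15):
        # Mean distance to the k nearest neighbors in behavior space.
        if not others:
            return float("inf")                  # the first behavior is maximally novel
        dists = sorted(math.dist(behavior, b) for b in others)
        nearest = dists[:k]
        return sum(nearest) / len(nearest)

    def update_archive(behavior, archive, population_behaviors, threshold=0.3):
        # Sufficiently novel behaviors are archived so the search keeps being
        # pushed toward genuinely uncharted regions of behavior space.
        if novelty(behavior, archive + population_behaviors) > threshold:
            archive.append(behavior)
        return archive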

    Evolving a Behavioral Repertoire for a Walking Robot

    Numerous algorithms have been proposed to allow legged robots to learn to walk. However, the vast majority of these algorithms are devised to learn to walk in a straight line, which is not sufficient to accomplish any real-world mission. Here we introduce the Transferability-based Behavioral Repertoire Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that simultaneously discovers several hundred simple walking controllers, one for each possible direction. By taking advantage of solutions that are usually discarded by evolutionary processes, TBR-Evolution is substantially faster than independently evolving each controller. Our technique relies on two methods: (1) novelty search with local competition, which searches for both high-performing and diverse solutions, and (2) the transferability approach, which combines simulations and real tests to evolve controllers for a physical robot. We evaluate this new technique on a hexapod robot. Results show that with only a few dozen short experiments performed on the robot, the algorithm learns a repertoire of controllers that allows the robot to reach every point in its reachable space. Overall, TBR-Evolution opens up a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot.
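
    The behavioral repertoire TBR-Evolution builds can be pictured as a grid over the robot's reachable displacements that stores one controller per cell and keeps the better of any two controllers landing in the same cell (the local-competition part); the data structure below is an illustration only, with an assumed cell size and quality measure:

    class Repertoire:
        def __init__(self, cell_size=0.05):
            self.cell_size = cell_size
            self.cells = {}                      # (i, j) -> (quality, controller)

        def _key(self, displacement):
            x, y = displacement
            return (int(x / self.cell_size), int(y / self.cell_size))

        def add(self, controller, displacement, quality):
            # Keep only the best controller found for each cell.
            key = self._key(displacement)
            if key not in self.cells or quality > self.cells[key][0]:
                self.cells[key] = (quality, controller)

        def controller_for(self, target):
            # Return the stored controller whose cell is closest to the target point.
            tx, ty = target[0] / self.cell_size, target[1] / self.cell_size
            key = min(self.cells, key=lambda k: (k[0] - tx) ** 2 + (k[1] - ty) ** 2)
            return self.cells[key][1]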

    Pace running of a quadruped robot driven by pneumatic muscle actuators: An experimental study

    Our goal is to design a neuromorphic locomotion controller for a prospective bioinspired quadruped robot driven by artificial muscle actuators. In this paper, we focus on achieving a running gait called a pace, in which the ipsilateral pairs of legs move in phase while the two pairs move out of phase with each other, on a quadruped robot with realistic legs driven by pneumatic muscle actuators. The robot is controlled by weakly coupled, two-level central pattern generators that generate a pace gait with leg-loading feedback. Each leg moves through four sequential phases, as in an animal: touch-down, stance, lift-off, and swing. We find that leg-loading feedback to the central pattern generator can help stabilize pace running with an appropriate cycle that is determined autonomously, by synchronizing each leg's oscillation with the body's roll oscillation, without a human specifying the cycle. The experimental results demonstrate that our proposed neuromorphic controller is beneficial for achieving pace running with a muscle-driven quadruped robot.
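
    The pace pattern described above can be sketched as four coupled phase oscillators in which ipsilateral legs share a target phase and the two contralateral pairs are half a cycle apart, with a loading-feedback term slowing a leg's oscillator while it is loaded; the gains and the feedback rule below are illustrative assumptions, not the controller from the paper:

    import math

    LEGS = ["LF", "LH", "RF", "RH"]
    TARGET = {"LF": 0.0, "LH": 0.0, "RF": math.pi, "RH": math.pi}   # pace phase offsets

    def step_cpg(phase, loaded, omega=2 * math.pi * 1.5, k=2.0, dt=0.005):
        new_phase = {}
        for leg in LEGS:
            # Kuramoto-style coupling pulls the legs toward the pace pattern.
            coupling = sum(math.sin((phase[other] - TARGET[other])
                                    - (phase[leg] - TARGET[leg]))
                           for other in LEGS if other != leg)
            # Illustrative loading feedback: hold back the phase during stance.
            feedback = -0.5 * omega if loaded[leg] else 0.0
            new_phase[leg] = phase[leg] + dt * (omega + k * coupling + feedback)
        return new_phase

    # One 5 ms integration step starting on the pace pattern, no legs loaded yet.
    phase = dict(TARGET)
    phase = step_cpg(phase, {leg: False for leg in LEGS})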