2,925 research outputs found

    Neural network controller against environment: A coevolutive approach to generalize robot navigation behavior

    In this paper, a new coevolutive method, called Uniform Coevolution, is introduced to learn the weights of a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. Introducing coevolution on top of the evolutionary strategy allows the environment itself to evolve, so that a general behavior able to solve the problem in different environments can be learned. Using a traditional evolutionary strategy method, without coevolution, the learning process obtains a specialized behavior. All the behaviors obtained, with and without coevolution, have been tested in a set of environments, and the generalization capability of each learned behavior is shown. A simulator based on the Khepera mini-robot has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.
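    The abstract only outlines the method, so the following is a minimal, hypothetical sketch of the general idea it describes: an evolution strategy over controller weight vectors, with a population of environments evolved alongside it so that controllers are selected for behavior that generalizes. The fitness functions, population sizes, and variable names are illustrative placeholders, not the authors' Uniform Coevolution implementation.

    ```python
    # Sketch: coevolving controller weights and environments (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    N_WEIGHTS = 20       # size of the controller's weight vector (assumed)
    POP_SIZE = 30        # controller population
    ENV_POP_SIZE = 10    # coevolved environment population
    GENERATIONS = 50
    SIGMA = 0.1          # Gaussian mutation step size

    def evaluate(weights, env):
        """Placeholder fitness: how well one controller does in one environment.
        A real setup would run the Khepera simulator and score navigation and
        collision avoidance."""
        return -np.sum((weights - env) ** 2)   # toy stand-in

    controllers = rng.normal(0.0, 1.0, size=(POP_SIZE, N_WEIGHTS))
    environments = rng.normal(0.0, 1.0, size=(ENV_POP_SIZE, N_WEIGHTS))

    for gen in range(GENERATIONS):
        # Score every controller against the whole environment population, so
        # selection favours behavior that works across environments.
        scores = np.array([[evaluate(c, e) for e in environments] for c in controllers])
        controller_fitness = scores.mean(axis=1)
        # Environments are rewarded for being challenging (low average controller score).
        env_fitness = -scores.mean(axis=0)

        # Truncation selection plus Gaussian mutation for both populations.
        best_c = controllers[np.argsort(controller_fitness)[-POP_SIZE // 2:]]
        controllers = np.repeat(best_c, 2, axis=0) + rng.normal(0.0, SIGMA, (POP_SIZE, N_WEIGHTS))
        best_e = environments[np.argsort(env_fitness)[-ENV_POP_SIZE // 2:]]
        environments = np.repeat(best_e, 2, axis=0) + rng.normal(0.0, SIGMA, (ENV_POP_SIZE, N_WEIGHTS))

    print("best mean fitness:", controller_fitness.max())
    ```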

    A general learning co-evolution method to generalize autonomous robot navigation behavior

    Congress on Evolutionary Computation, La Jolla, CA, 16-19 July 2000. A new coevolutive method, called Uniform Coevolution, is introduced to learn the weights of a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. The coevolutive method allows the environment to evolve, so that a general behavior able to solve the problem in different environments can be learned. Using a traditional evolutionary strategy method without coevolution, the learning process obtains a specialized behavior. All the behaviors obtained, with or without coevolution, have been tested in a set of environments, and the capability for generalization has been shown for each learned behavior. A simulator based on the mini-robot Khepera has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.

    The Evolution of Reaction-diffusion Controllers for Minimally Cognitive Agents

    No description supplied.

    Evolutionary robotics and neuroscience

    No description supplied.

    Evolution of Swarm Robotics Systems with Novelty Search

    Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task (aggregation) and a more challenging task (sharing of an energy recharging station). Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping the evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitative character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms. Comment: To appear in Swarm Intelligence (2013), ANTS Special Issue. The final publication will be available at link.springer.com.
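    To make the core mechanism concrete, here is a minimal, generic sketch of the novelty-search scoring step: individuals are scored by how far their behavior descriptor lies from its k nearest neighbours in the current population plus an archive, instead of by a task objective. The descriptor, threshold, and helper names are assumptions for illustration; this is not the paper's NEAT-based setup.

    ```python
    # Sketch of novelty-search scoring and archive update (illustrative only).
    import numpy as np

    def novelty_scores(behaviors, archive, k=15):
        """behaviors: (n, d) behaviour descriptors of the current population.
        archive:   (m, d) previously stored novel behaviours (may be empty)."""
        pool = np.vstack([behaviors, archive]) if len(archive) else behaviors
        scores = []
        for b in behaviors:
            dists = np.linalg.norm(pool - b, axis=1)
            dists.sort()
            # skip the zero self-distance, average the k nearest neighbours
            scores.append(dists[1:k + 1].mean())
        return np.array(scores)

    def update_archive(behaviors, scores, archive, threshold=0.5):
        """Add sufficiently novel behaviours to the archive (one common variant)."""
        novel = behaviors[scores > threshold]
        return np.vstack([archive, novel]) if len(archive) else novel

    # Toy usage: random 2-D behaviour descriptors (e.g. final robot positions).
    rng = np.random.default_rng(0)
    pop = rng.uniform(-1, 1, size=(30, 2))
    archive = np.empty((0, 2))
    s = novelty_scores(pop, archive)
    archive = update_archive(pop, s, archive)
    print("most novel individual:", pop[s.argmax()], "archive size:", len(archive))
    ```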

    Evolving a Behavioral Repertoire for a Walking Robot

    Numerous algorithms have been proposed to allow legged robots to learn to walk. However, the vast majority of these algorithms are devised to learn to walk in a straight line, which is not sufficient to accomplish any real-world mission. Here we introduce the Transferability-based Behavioral Repertoire Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that simultaneously discovers several hundred simple walking controllers, one for each possible direction. By taking advantage of solutions that are usually discarded by evolutionary processes, TBR-Evolution is substantially faster than independently evolving each controller. Our technique relies on two methods: (1) novelty search with local competition, which searches for both high-performing and diverse solutions, and (2) the transferability approach, which combines simulations and real tests to evolve controllers for a physical robot. We evaluate this new technique on a hexapod robot. Results show that with only a few dozen short experiments performed on the robot, the algorithm learns a repertoire of controllers that allows the robot to reach every point in its reachable space. Overall, TBR-Evolution opens a new kind of learning algorithm that simultaneously optimizes all the achievable behaviors of a robot. Comment: 33 pages; Evolutionary Computation Journal 201
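    The repertoire idea can be illustrated with a simplified update loop: each evaluated controller is described by the endpoint it reaches, and it either fills an empty niche or replaces a nearby, lower-performing controller (local competition). For simplicity the sketch below uses a fixed grid of niches rather than the paper's novelty search with local competition and transferability estimate; all names, sizes, and the simulate() stand-in are assumptions.

    ```python
    # Sketch of maintaining a behavioural repertoire with local competition
    # (simplified, grid-based; illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    GRID = 20                      # discretise the reachable space into GRID x GRID cells
    N_PARAMS = 8                   # size of a gait controller's parameter vector (assumed)
    repertoire = {}                # cell index -> (controller_params, performance)

    def simulate(params):
        """Placeholder for a (simulated) walking trial: returns the endpoint reached
        and a performance score. A real setup would combine simulation with
        occasional hardware tests to estimate transferability."""
        endpoint = np.tanh(params[:2])            # toy stand-in for the final (x, y)
        performance = -np.abs(params[2:]).sum()   # toy stand-in for gait quality
        return endpoint, performance

    def cell_of(endpoint):
        """Map an endpoint in [-1, 1]^2 onto a grid cell."""
        idx = np.clip(((endpoint + 1.0) / 2.0 * GRID).astype(int), 0, GRID - 1)
        return tuple(idx)

    for _ in range(5000):
        # Mutate a random stored controller, or sample a fresh one if the repertoire is empty.
        if repertoire:
            keys = list(repertoire)
            parent, _ = repertoire[keys[rng.integers(len(keys))]]
            candidate = parent + rng.normal(0.0, 0.2, N_PARAMS)
        else:
            candidate = rng.normal(0.0, 1.0, N_PARAMS)

        endpoint, perf = simulate(candidate)
        cell = cell_of(endpoint)
        # Local competition: keep the candidate if its niche is empty or it beats
        # the controller already occupying that niche.
        if cell not in repertoire or perf > repertoire[cell][1]:
            repertoire[cell] = (candidate, perf)

    print("niches filled:", len(repertoire), "of", GRID * GRID)
    ```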

    Flexible couplings: diffusing neuromodulators and adaptive robotics

    Recent years have seen the discovery of freely diffusing gaseous neurotransmitters, such as nitric oxide (NO), in biological nervous systems. A type of artificial neural network (ANN) inspired by such gaseous signaling, the GasNet, has previously been shown to be more evolvable than traditional ANNs when used as an artificial nervous system in an evolutionary robotics setting, where evolvability means consistent speed to very good solutions (here, appropriate sensorimotor behavior-generating systems). We present two new versions of the GasNet, which take further inspiration from the properties of neuronal gaseous signaling. The plexus model is inspired by the extraordinary NO-producing cortical plexus structure of neural fibers and the properties of the diffusing NO signal it generates. The receptor model is inspired by the mediating action of neurotransmitter receptors. Both models are shown to significantly further improve evolvability. We describe a series of analyses suggesting that the reasons for the increase in evolvability are related to the flexible loose coupling of distinct signaling mechanisms, one "chemical" and one "electrical".
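    The flavor of this loose coupling can be sketched in a few lines: conventional weighted ("electrical") connections are combined with a spatially diffusing "gas" whose local concentration modulates each node's transfer-function gain. The node layout, decay law, emission rule, and gain schedule below are illustrative assumptions, not the published GasNet equations.

    ```python
    # Sketch of a gas-modulated recurrent network (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    N = 6
    positions = rng.uniform(0.0, 1.0, size=(N, 2))   # nodes placed on a 2-D plane
    weights = rng.normal(0.0, 1.0, size=(N, N))      # conventional synaptic weights
    emitters = np.array([True, False, False, True, False, False])  # nodes able to emit gas
    activations = np.zeros(N)

    def gas_concentration(positions, emitting, radius=0.5):
        """Gas concentration at every node: each active emitter contributes a
        term that falls off linearly with distance up to `radius`."""
        conc = np.zeros(len(positions))
        for i in np.flatnonzero(emitting):
            d = np.linalg.norm(positions - positions[i], axis=1)
            conc += np.clip(1.0 - d / radius, 0.0, 1.0)
        return conc

    for step in range(20):
        # A node emits gas when its activation is high (a simple stand-in for
        # the GasNet's emission conditions).
        emitting = emitters & (activations > 0.5)
        conc = gas_concentration(positions, emitting)
        # The gas modulates the slope (gain) of the tanh transfer function,
        # loosely coupling the "chemical" signal to the "electrical" dynamics.
        gain = 1.0 + 2.0 * conc
        net_input = weights @ activations + rng.normal(0.0, 0.1, N)  # noisy external drive
        activations = np.tanh(gain * net_input)

    print("final activations:", np.round(activations, 3))
    ```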