Multi-criteria Evolution of Neural Network Topologies: Balancing Experience and Performance in Autonomous Systems
The majority of Artificial Neural Network (ANN) implementations in autonomous
systems use a fixed/user-prescribed network topology, leading to sub-optimal
performance and low portability. The existing neuro-evolution of augmenting
topology or NEAT paradigm offers a powerful alternative by allowing the network
topology and the connection weights to be simultaneously optimized through an
evolutionary process. However, most NEAT implementations allow the
consideration of only a single objective. There also persists the question of
how to tractably introduce topological diversification that mitigates
overfitting to training scenarios. To address these gaps, this paper develops a
multi-objective neuro-evolution algorithm. While adopting the basic elements of
NEAT, important modifications are made to the selection, speciation, and
mutation processes. With the backdrop of small-robot path-planning
applications, an experience-gain criterion is derived to encapsulate the amount
of diverse local environment encountered by the system. This criterion
facilitates the evolution of genes that support exploration, thereby seeking to
generalize from a smaller set of mission scenarios than is possible with
performance maximization alone. The effectiveness of the single-objective
(optimizing performance) and multi-objective (optimizing performance and
experience-gain) neuro-evolution approaches is evaluated on two different
small-robot cases, with the ANNs obtained by multi-objective optimization
observed to provide superior performance in unseen scenarios.
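The two-objective selection described in this abstract can be illustrated with a minimal Pareto-dominance sketch. All names and the toy population below are hypothetical, not taken from the paper; the sketch only shows how candidates survive selection when no other candidate beats them on both performance and experience-gain.

```python
# Hypothetical sketch of two-objective (Pareto) selection over
# (performance, experience_gain) pairs. Illustrative only.

def dominates(a, b):
    """True if candidate a is at least as good as b on both objectives
    and strictly better on at least one."""
    return (a[0] >= b[0] and a[1] >= b[1]) and (a[0] > b[0] or a[1] > b[1])

def pareto_front(population):
    """Return the non-dominated subset of the population."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

pop = [(0.9, 0.2), (0.7, 0.8), (0.6, 0.6), (0.8, 0.5)]
front = pareto_front(pop)
# (0.6, 0.6) is dominated by (0.7, 0.8); the other three are non-dominated.
```

In a full multi-objective neuro-evolution algorithm, this dominance test would feed into the selection and speciation steps rather than replace them.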
A general learning co-evolution method to generalize autonomous robot navigation behavior
Congress on Evolutionary Computation. La Jolla, CA, 16-19 July 2000. A new coevolutionary method, called Uniform Coevolution, is introduced to learn weights for a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. The coevolutionary method allows the environment to evolve, so that a general behavior able to solve the problem in different environments is learned. Using a traditional evolutionary strategy method without coevolution, the learning process obtains a specialized behavior. All the behaviors obtained, with or without coevolution, have been tested in a set of environments, and the capability for generalization has been shown for each learned behavior. A simulator based on the mini-robot Khepera has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.
Neural network controller against environment: A coevolutive approach to generalize robot navigation behavior
In this paper, a new coevolutionary method, called Uniform Coevolution, is introduced to learn weights of a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. Introducing coevolution on top of evolutionary strategies allows the environment to evolve, so that a general behavior able to solve the problem in different environments is learned. Using a traditional evolutionary strategy method, without coevolution, the learning process obtains a specialized behavior. All the behaviors obtained, with or without coevolution, have been tested in a set of environments, and the capability for generalization is shown for each learned behavior. A simulator based on the mini-robot Khepera has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.
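The idea shared by the two abstracts above, evaluating controllers against a set of environments that itself evolves, can be sketched as a toy loop. The fitness function, population sizes, and update rules below are invented for illustration; the actual Uniform Coevolution method is more involved.

```python
import random

# Toy sketch: controller fitness is measured over an evolving set of
# environments rather than a single fixed one, pushing toward generality.
# All names and the stand-in fitness function are hypothetical.

random.seed(0)

def fitness(controller, env):
    # Stand-in evaluation: higher when the controller "matches" the environment.
    return -abs(controller - env)

def evolve(generations=50):
    controllers = [random.uniform(-1, 1) for _ in range(10)]
    environments = [random.uniform(-1, 1) for _ in range(5)]
    for _ in range(generations):
        # Score each controller over ALL current environments
        # (this is the generalization pressure).
        scored = sorted(controllers,
                        key=lambda c: sum(fitness(c, e) for e in environments),
                        reverse=True)
        # Keep the best half and mutate them (a simple evolution strategy).
        parents = scored[:5]
        controllers = parents + [p + random.gauss(0, 0.1) for p in parents]
        # The environments evolve too: retain the ones that are hardest
        # for the current best controller, plus one fresh random one.
        environments.sort(key=lambda e: fitness(scored[0], e))
        environments = environments[:4] + [random.uniform(-1, 1)]
    return scored[0], environments

best, envs = evolve()
```

The key design choice this illustrates is that the training set is not static: environments that the champion already solves well are gradually replaced, so a controller cannot overfit to one fixed scenario.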
Evolving a Behavioral Repertoire for a Walking Robot
Numerous algorithms have been proposed to allow legged robots to learn to
walk. However, the vast majority of these algorithms are devised to learn to
walk in a straight line, which is not sufficient to accomplish any real-world
mission. Here we introduce the Transferability-based Behavioral Repertoire
Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that
simultaneously discovers several hundreds of simple walking controllers, one
for each possible direction. By taking advantage of solutions that are usually
discarded by evolutionary processes, TBR-Evolution is substantially faster than
independently evolving each controller. Our technique relies on two methods:
(1) novelty search with local competition, which searches for both
high-performing and diverse solutions, and (2) the transferability approach,
which combines simulations and real tests to evolve controllers for a physical
robot. We evaluate this new technique on a hexapod robot. Results show that
with only a few dozen short experiments performed on the robot, the algorithm
learns a repertoire of controllers that allows the robot to reach every point
in its reachable space. Overall, TBR-Evolution opens a new kind of learning
algorithm that simultaneously optimizes all the achievable behaviors of a
robot. Comment: 33 pages; Evolutionary Computation Journal 201
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
Evolution of Swarm Robotics Systems with Novelty Search
Novelty search is a recent artificial evolution technique that challenges
traditional evolutionary approaches. In novelty search, solutions are rewarded
based on their novelty, rather than their quality with respect to a predefined
objective. The lack of a predefined objective precludes premature convergence
caused by a deceptive fitness function. In this paper, we apply novelty search
combined with NEAT to the evolution of neural controllers for homogeneous
swarms of robots. Our empirical study is conducted in simulation, and we use a
common swarm robotics task - aggregation, and a more challenging task - sharing
of an energy recharging station. Our results show that novelty search is
unaffected by deception, is notably effective in bootstrapping the evolution,
can find solutions with lower complexity than fitness-based evolution, and can
find a broad diversity of solutions for the same task. Even in non-deceptive
setups, novelty search achieves solution qualities similar to those obtained in
traditional fitness-based evolution. Our study also encompasses variants of
novelty search that work in concert with fitness-based evolution to combine the
exploratory character of novelty search with the exploitatory character of
objective-based evolution. We show that these variants can further improve the
performance of novelty search. Overall, our study shows that novelty search is
a promising alternative for the evolution of controllers for robotic swarms. Comment: To appear in Swarm Intelligence (2013), ANTS Special Issue. The final
publication will be available at link.springer.co
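The novelty score at the heart of novelty search, as commonly described, is a solution's average distance to its k nearest neighbours in behaviour space (over the archive plus the current population). The sketch below uses 2-D behaviour descriptors and invented data; names and parameters are illustrative, not from the paper.

```python
# Minimal sketch of a novelty score: mean Euclidean distance to the
# k nearest neighbours in behaviour space. Illustrative only.

def novelty(behavior, others, k=3):
    """Mean distance from `behavior` to its k nearest neighbours in `others`."""
    dists = sorted(
        ((behavior[0] - o[0]) ** 2 + (behavior[1] - o[1]) ** 2) ** 0.5
        for o in others
    )
    return sum(dists[:k]) / k

archive = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0)]
# A behaviour far from the explored cluster scores higher novelty
# than one inside it, so selection rewards exploration.
assert novelty((2.0, 2.0), archive) > novelty((0.05, 0.05), archive)
```

Because this score replaces (or, in the variants mentioned above, complements) the task fitness, a deceptive fitness landscape cannot trap the search in a local optimum that all individuals already occupy.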
Neuroethology, Computational
Over the past decade, a number of neural network researchers have used the term computational neuroethology to describe a specific approach to neuroethology. Neuroethology is the study of the neural mechanisms underlying the generation of behavior in animals, and hence it lies at the intersection of neuroscience (the study of nervous systems) and ethology (the study of animal behavior); for an introduction to neuroethology, see Simmons and Young (1999). The definition of computational neuroethology is very similar, but is not quite so dependent on studying animals: animals just happen to be biological autonomous agents. But there are also non-biological autonomous agents such as some types of robots, and some types of simulated embodied agents operating in virtual worlds. In this context, autonomous agents are self-governing entities capable of operating (i.e., coordinating perception and action) for extended periods of time in environments that are complex, uncertain, and dynamic. Thus, computational neuroethology can be characterised as the attempt to analyze the computational principles underlying the generation of behavior in animals and in artificial autonomous agents.