A general learning co-evolution method to generalize autonomous robot navigation behavior
Congress on Evolutionary Computation, La Jolla, CA, 16-19 July 2000. A new coevolutionary method, called Uniform Coevolution, is introduced to learn the weights of a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. The coevolutionary method allows the environment to evolve as well, so that a general behavior able to solve the problem in different environments is learned. Using a traditional evolutionary strategy without coevolution, the learning process obtains a specialized behavior instead. All the behaviors obtained, with or without coevolution, have been tested in a set of environments, and the capability for generalization has been shown for each learned behavior. A simulator based on the mini-robot Khepera has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.
Neural network controller against environment: A coevolutive approach to generalize robot navigation behavior
In this paper, a new coevolutionary method, called Uniform Coevolution, is introduced to learn the weights of a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. Introducing coevolution on top of the evolutionary strategy allows the environment to evolve, so that a general behavior able to solve the problem in different environments is learned. Using a traditional evolutionary strategy without coevolution, the learning process obtains a specialized behavior. All the behaviors obtained, with and without coevolution, have been tested in a set of environments, and the capability for generalization is shown for each learned behavior. A simulator based on the mini-robot Khepera has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.
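As a rough illustration of the scheme these two abstracts describe, an evolutionary strategy that evolves controller weights while scoring them across a whole set of environments to press for generality, here is a minimal Python sketch. The controller architecture, fitness function, and environment encoding are invented stand-ins for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def controller(weights, sensors):
    # Single-layer reactive controller mapping range readings to two motor commands.
    W = weights.reshape(2, sensors.size)
    return np.tanh(W @ sensors)

def fitness(weights, environment):
    # Invented stand-in fitness: favor speed while keeping clearance from obstacles.
    motors = controller(weights, environment)
    return motors.mean() * environment.min()

def evolve(envs, n_weights=16, pop=20, gens=30, sigma=0.1):
    population = rng.normal(size=(pop, n_weights))
    for _ in range(gens):
        # Score each individual across *all* environments to press for generality.
        scores = np.array([[fitness(w, e) for e in envs] for w in population])
        ranked = population[np.argsort(-scores.mean(axis=1))]
        parents = ranked[: pop // 2]
        # (mu + lambda)-style step: keep parents, add Gaussian-mutated children.
        children = parents + sigma * rng.normal(size=parents.shape)
        population = np.vstack([parents, children])
    return population[0]

# Each "environment" is just a vector of eight range readings in this toy.
envs = [rng.uniform(0.1, 1.0, size=8) for _ in range(4)]
best = evolve(envs)
```

In Uniform Coevolution the environments themselves would also evolve against the controllers; here they are fixed for brevity.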
Dynamic Motion Modelling for Legged Robots
An accurate motion model is an important component in modern-day robotic
systems, but building such a model for a complex system often requires an
appreciable amount of manual effort. In this paper we present a motion model
representation, the Dynamic Gaussian Mixture Model (DGMM), that alleviates the
need to manually design the form of a motion model, and provides a direct means
of incorporating auxiliary sensory data into the model. This representation and
its accompanying algorithms are validated experimentally using an 8-legged
kinematically complex robot, as well as a standard benchmark dataset. The
presented method not only learns the robot's motion model, but also improves
the model's accuracy by incorporating information about the terrain surrounding
the robot.
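The core idea, a joint Gaussian mixture over commands, sensory readings, and resulting motion, conditioned on the observed inputs to predict displacement, can be sketched as follows. The training data, the two-component "fit" (a crude median split standing in for the EM fit a real DGMM would use), and all variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: rows are (motor command, terrain reading, displacement).
cmd = rng.uniform(-1, 1, size=500)
terrain = rng.uniform(0, 1, size=500)
disp = 0.5 * cmd * (1.0 - 0.3 * terrain) + 0.01 * rng.normal(size=500)
data = np.column_stack([cmd, terrain, disp])

# Crude two-component "mixture" fit: split on the terrain median and fit one
# Gaussian per half (a stand-in for the EM fit a real DGMM would use).
halves = [data[terrain <= np.median(terrain)], data[terrain > np.median(terrain)]]
weights = [len(h) / len(data) for h in halves]
means = [h.mean(axis=0) for h in halves]
covs = [np.cov(h, rowvar=False) for h in halves]

def predict_displacement(command, terrain_reading):
    # Condition the joint mixture on the observed inputs: for each component,
    # take the Gaussian conditional mean of displacement given (command, terrain),
    # weighted by how likely the inputs are under that component.
    x = np.array([command, terrain_reading])
    num = den = 0.0
    for w, mu, S in zip(weights, means, covs):
        d = x - mu[:2]
        S_ii, S_oi = S[:2, :2], S[2, :2]
        lik = w * np.exp(-0.5 * d @ np.linalg.solve(S_ii, d)) / np.sqrt(np.linalg.det(S_ii))
        cond_mean = mu[2] + S_oi @ np.linalg.solve(S_ii, d)
        num += lik * cond_mean
        den += lik
    return num / den
```

Because the terrain reading enters the joint model as just another dimension, auxiliary sensory data is incorporated without redesigning the model's form, which is the property the abstract highlights.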
Incremental Adversarial Domain Adaptation for Continually Changing Environments
Continuous appearance shifts such as changes in weather and lighting
conditions can impact the performance of deployed machine learning models.
While unsupervised domain adaptation aims to address this challenge, current
approaches do not utilise the continuity of the occurring shifts. In
particular, many robotics applications exhibit these conditions and thus
facilitate the potential to incrementally adapt a learnt model over minor
shifts which integrate to massive differences over time. Our work presents an
adversarial approach for lifelong, incremental domain adaptation which benefits
from unsupervised alignment to a series of intermediate domains which
successively diverge from the labelled source domain. We empirically
demonstrate that our incremental approach improves handling of large appearance
changes, e.g. day to night, on a traversable-path segmentation task compared
with a direct, single alignment step approach. Furthermore, by approximating
the feature distribution for the source domain with a generative adversarial
network, the deployment module can be rendered fully independent of retaining
potentially large amounts of the related source training data for only a minor
reduction in performance. Comment: International Conference on Robotics and Automation 201
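The incremental idea can be illustrated with a toy: a feature distribution drifts in small steps, and each new domain is aligned to the previous, already-aligned one, so the small corrections accumulate over the sequence. Simple mean matching stands in here for the paper's adversarial alignment; in this linear toy the chained corrections telescope to the same answer as a direct alignment, whereas the paper's point is that with learned, nonlinear alignment the small steps are far easier to fit than one large jump.

```python
import numpy as np

rng = np.random.default_rng(2)

# A sequence of domains whose feature distribution drifts in small steps,
# a stand-in for gradual appearance change (e.g. noon -> dusk -> night).
shifts = np.linspace(0.0, 5.0, 6)
domains = [rng.normal(s, 1.0, size=(1000, 2)) for s in shifts]
source = domains[0]

# Incremental alignment: correct each new domain towards the previous,
# already-aligned one, accumulating small corrections over the sequence.
# Mean matching stands in for the adversarial alignment step.
correction = np.zeros(2)
for prev, curr in zip(domains, domains[1:]):
    correction += prev.mean(axis=0) - curr.mean(axis=0)

final_aligned = domains[-1] + correction
```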
A tesselated probabilistic representation for spatial robot perception and navigation
The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation is illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
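The incremental Bayesian update the abstract describes is commonly implemented in log-odds form, where fusing an independent reading into a cell reduces to an addition. A minimal sketch, with a hypothetical inverse sensor model supplying the per-cell occupancy probabilities:

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    # Each cell stores the log-odds of being occupied; Bayesian fusion of
    # independent sensor readings then becomes simple addition per cell.
    def __init__(self, shape, p_prior=0.5):
        self.L0 = logodds(p_prior)
        self.L = np.full(shape, self.L0)

    def update(self, cell, p_occ_given_reading):
        # Incremental update of one cell from one range reading:
        # L <- L + logodds(P(occupied | reading)) - L_prior
        self.L[cell] += logodds(p_occ_given_reading) - self.L0

    def probabilities(self):
        # Convert log-odds back to occupancy probabilities.
        return 1.0 - 1.0 / (1.0 + np.exp(self.L))

grid = OccupancyGrid((10, 10))
# Hypothetical inverse sensor model outputs: a hit near cell (3, 4),
# free space along the beam at cell (3, 3).
grid.update((3, 4), 0.9)
grid.update((3, 3), 0.2)
p = grid.probabilities()
```

Because readings from several sensors and viewpoints each contribute one additive term per cell, the map can be refined incrementally exactly as the abstract outlines.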