Black-Box Data-efficient Policy Search for Robotics
The most data-efficient algorithms for reinforcement learning (RL) in
robotics are based on uncertain dynamical models: after each episode, they
first learn a dynamical model of the robot, then they use an optimization
algorithm to find a policy that maximizes the expected return given the model
and its uncertainties. It is often believed that this optimization can be
tractable only if analytical, gradient-based algorithms are used; however,
these algorithms require using specific families of reward functions and
policies, which greatly limits the flexibility of the overall approach. In this
paper, we introduce a novel model-based RL algorithm, called Black-DROPS
(Black-box Data-efficient RObot Policy Search) that: (1) does not impose any
constraint on the reward function or the policy (they are treated as
black-boxes), (2) is as data-efficient as the state-of-the-art algorithm for
data-efficient RL in robotics, and (3) is as fast (or faster) than analytical
approaches when several cores are available. The key idea is to replace the
gradient-based optimization algorithm with a parallel, black-box algorithm that
takes into account the model uncertainties. We demonstrate the performance of
our new algorithm on two standard control benchmark problems (in simulation)
and a low-cost robotic manipulator (with a real robot).
Comment: Accepted at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017; Code at http://github.com/resibots/blackdrops; Video at http://youtu.be/kTEyYiIFGP
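The core idea of the abstract above can be sketched in a few lines: score each candidate policy by Monte-Carlo rollouts through the learned probabilistic model (so the noisy returns reflect model uncertainty), and improve the policy with a gradient-free, parallelizable optimizer. This is an illustrative sketch only: the toy linear-tanh policy, the simple population hill-climbing loop (the paper uses a CMA-ES variant), and all function names are assumptions, not the authors' code.

```python
import numpy as np

def rollout_return(policy_params, model, reward_fn, x0, horizon, rng):
    """One Monte-Carlo rollout through the *probabilistic* model: each
    step samples a successor state, so the return is noisy and reflects
    the model's uncertainty."""
    x, total = np.array(x0, dtype=float), 0.0
    for _ in range(horizon):
        u = float(np.tanh(policy_params @ x))   # toy linear-tanh policy
        mean, std = model(x, u)                 # model predicts a distribution
        x = rng.normal(mean, std)               # sample the next state
        total += reward_fn(x, u)
    return total

def black_box_policy_search(model, reward_fn, x0, dim, horizon=20,
                            iters=100, pop=16, sigma=0.3, seed=0):
    """Gradient-free search over noisy Monte-Carlo returns (simple
    population hill-climbing here; the paper uses a CMA-ES variant).
    The candidate evaluations are embarrassingly parallel."""
    rng = np.random.default_rng(seed)
    best = np.zeros(dim)
    best_ret = rollout_return(best, model, reward_fn, x0, horizon, rng)
    for _ in range(iters):
        cands = best + sigma * rng.standard_normal((pop, dim))
        rets = [rollout_return(c, model, reward_fn, x0, horizon, rng)
                for c in cands]
        i = int(np.argmax(rets))
        if rets[i] > best_ret:
            best, best_ret = cands[i], rets[i]
    return best, best_ret
```

Because the optimizer only needs return values, the reward function and policy really are black boxes here: no differentiability or special functional form is required.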
A Survey on Policy Search for Robotics
Policy search is a subfield of reinforcement learning that focuses on
finding good parameters for a given policy parametrization. It is well
suited for robotics as it can cope with high-dimensional state and action
spaces, one of the main challenges in robot learning. We review recent
successes of both model-free and model-based policy search in robot
learning.
Model-free policy search is a general approach to learn policies
based on sampled trajectories. We classify model-free methods based on
their policy evaluation strategy, policy update strategy, and exploration
strategy and present a unified view on existing algorithms. Learning a
policy is often easier than learning an accurate forward model, and,
hence, model-free methods are more frequently used in practice. However,
for each sampled trajectory, it is necessary to interact with the
robot, which can be time-consuming and challenging in practice. Model-based
policy search addresses this problem by first learning a simulator
of the robot's dynamics from data. Subsequently, the simulator generates
trajectories that are used for policy learning. For both model-free
and model-based policy search methods, we review their respective
properties and their applicability to robotic systems.
* Both authors contributed equally.
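The model-based alternation the survey describes can be sketched as a generic loop: each episode of real interaction feeds a learned simulator, and the policy is then improved on simulated trajectories only. This is a schematic sketch; the callable names (`robot_rollout`, `fit_model`, `improve_policy`) are placeholders, not any particular algorithm's API.

```python
def model_based_policy_search(robot_rollout, fit_model, improve_policy,
                              policy, episodes=10):
    """Generic model-based policy search loop: alternate between
    collecting real data (expensive) and improving the policy against
    a model learned from all data so far (cheap)."""
    data = []
    for _ in range(episodes):
        data.append(robot_rollout(policy))      # interact with the real robot
        model = fit_model(data)                 # (re)learn forward dynamics
        policy = improve_policy(policy, model)  # optimize on simulated rollouts
    return policy
```

Model-free methods skip the `fit_model` step entirely and update the policy directly from the sampled trajectories, which is why they trade data efficiency for simplicity.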
PILCO: A Model-Based and Data-Efficient Approach to Policy Search
In this paper, we introduce PILCO, a practical, data-efficient model-based policy search method. PILCO reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, PILCO can cope with very little data and facilitates learning from scratch in only a few trials. Policy evaluation is performed in closed form using state-of-the-art approximate inference. Furthermore, policy gradients are computed analytically for policy improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks. Copyright 2011 by the author(s)/owner(s)
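PILCO's starting point is a probabilistic dynamics model, typically a Gaussian process, whose predictive variance quantifies model uncertainty. A minimal numpy sketch of such a model for one output dimension is below; it is illustrative only (PILCO additionally propagates these predictive distributions through time by moment matching and differentiates through the rollout analytically, none of which is shown here).

```python
import numpy as np

def gp_fit(X, y, ell=1.0, sf2=1.0, sn2=1e-4):
    """Fit a squared-exponential GP to one output dimension of the
    dynamics data (rows of X are [state, action] inputs)."""
    sqd = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = sf2 * np.exp(-0.5 * sqd / ell ** 2)
    L = np.linalg.cholesky(K + sn2 * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return X, L, alpha, ell, sf2

def gp_predict(model, xs):
    """Predictive mean and variance at a query input xs: the explicit
    model uncertainty that PILCO feeds into long-term planning."""
    X, L, alpha, ell, sf2 = model
    ks = sf2 * np.exp(-0.5 * ((X - xs) ** 2).sum(-1) / ell ** 2)
    mean = ks @ alpha
    v = np.linalg.solve(L, ks)
    return mean, max(sf2 - v @ v, 0.0)   # variance grows far from the data
```

The key property is visible directly: near the training data the variance is small, and far from it the variance approaches the prior variance, which is what prevents the planner from trusting the model where it has seen no data.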
SwarMAV: A Swarm of Miniature Aerial Vehicles
As the MAV (Micro or Miniature Aerial Vehicles) field matures, we expect to see that the platform's degree of autonomy, the information exchange, and the coordination with other manned and unmanned actors, will become at least as crucial as its aerodynamic design. The project described in this paper explores some aspects of a particularly exciting possible avenue of development: an autonomous swarm of MAVs which exploits its inherent reliability (through redundancy), and its ability to exchange information among the members, in order to cope with a dynamically changing environment and achieve its mission. We describe the successful realization of a prototype experimental platform weighing only 75g, and outline a strategy for the automatic design of a suitable controller.
Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics
The most data-efficient algorithms for reinforcement learning in robotics are
model-based policy search algorithms, which alternate between learning a
dynamical model of the robot and optimizing a policy to maximize the expected
return given the model and its uncertainties. Among the few proposed
approaches, the recently introduced Black-DROPS algorithm exploits a black-box
optimization algorithm to achieve both high data-efficiency and good
computation times when several cores are used; nevertheless, like all
model-based policy search approaches, Black-DROPS does not scale to high
dimensional state/action spaces. In this paper, we introduce a new model
learning procedure in Black-DROPS that leverages parameterized black-box priors
to (1) scale up to high-dimensional systems, and (2) be robust to large
inaccuracies of the prior information. We demonstrate the effectiveness of our
approach with the "pendubot" swing-up task in simulation and with a physical
hexapod robot (48D state space, 18D action space) that has to walk forward as
fast as possible. The results show that our new algorithm is more
data-efficient than previous model-based policy search algorithms (with and
without priors) and that it can allow a physical 6-legged robot to learn new
gaits in only 16 to 30 seconds of interaction time.
Comment: Accepted at ICRA 2018; 8 pages, 4 figures, 2 algorithms, 1 table; Video at https://youtu.be/HFkZkhGGzTo ; Spotlight ICRA presentation at https://youtu.be/_MZYDhfWeL
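The "parameterized black-box prior" idea above amounts to modeling the dynamics as a tunable prior simulator plus a data-driven correction learned on the prior's residuals. The sketch below substitutes a ridge-regression residual for the Gaussian process corrections used in the paper, purely to keep the example short; all names here are illustrative assumptions.

```python
import numpy as np

def fit_residual_model(prior, X, U, Xnext, ridge=1e-3):
    """Learn dynamics as 'prior simulator + learned correction'.
    The correction is fit only to what the prior gets wrong, so a
    decent prior leaves very little left to learn from data.
    (The paper uses Gaussian process residuals; this sketch uses a
    linear ridge regression for brevity.)"""
    phi = np.hstack([X, U])                               # features: [state, action]
    residuals = Xnext - np.array([prior(x, u) for x, u in zip(X, U)])
    W = np.linalg.solve(phi.T @ phi + ridge * np.eye(phi.shape[1]),
                        phi.T @ residuals)                # fit the prior's errors
    return lambda x, u: prior(x, u) + np.hstack([x, u]) @ W
```

Because only the residual is learned from data, this is one way such a model can stay data-efficient in high dimensions while remaining robust to an inaccurate prior: the correction simply grows where the prior is wrong.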
Specialized Deep Residual Policy Safe Reinforcement Learning-Based Controller for Complex and Continuous State-Action Spaces
Traditional controllers have limitations as they rely on prior knowledge
about the physics of the problem, require modeling of dynamics, and struggle to
adapt to abnormal situations. Deep reinforcement learning has the potential to
address these problems by learning optimal control policies through exploration
in an environment. For safety-critical environments, it is impractical to
explore randomly, and replacing conventional controllers with black-box models
is undesirable. Moreover, exploration is expensive in continuous state and
action spaces unless the search space is constrained. To address these challenges we
propose a specialized deep residual policy safe reinforcement learning with a
cycle of learning approach adapted for complex and continuous state-action
spaces. Residual policy learning allows learning a hybrid control architecture
where the reinforcement learning agent acts in synchronous collaboration with
the conventional controller. The cycle of learning initiates the policy through
the expert trajectory and guides the exploration around it. Further, the
specialization through the input-output hidden Markov model restricts policy
optimization to the region of interest (such as an abnormality), where the
reinforcement learning agent is required and activated. The proposed
solution is validated on the Tennessee Eastman process control problem.
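The hybrid control architecture described above can be sketched as a small gating rule: the conventional controller always runs, and the learned residual is added only where the regime detector (an input-output HMM in the paper) flags the region of interest, with the correction clipped for safety. The function and parameter names below are illustrative assumptions, not the paper's implementation.

```python
def residual_action(state, base_controller, rl_correction, is_abnormal,
                    clip=1.0):
    """Residual policy control: conventional controller plus a
    learned correction that is only active in the flagged regime
    and is bounded for safety."""
    u = base_controller(state)             # conventional controller always acts
    if is_abnormal(state):                 # regime detector gates the agent
        delta = rl_correction(state)
        u = u + max(-clip, min(clip, delta))   # bounded residual correction
    return u
```

Gating plus clipping is what makes the scheme safer than replacing the controller outright: in nominal operation the system behaves exactly like the conventional controller.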
Model-based reinforcement learning: A survey
Reinforcement learning is an important branch of machine learning and artificial intelligence. Compared with traditional (model-free) reinforcement learning, model-based reinforcement learning predicts the next state with a learned model and then optimizes the policy against it, which greatly improves data efficiency. Based on the current state of research on model-based reinforcement learning, this paper comprehensively reviews its key techniques, summarizes the characteristics, advantages, and shortcomings of each, and analyzes applications of model-based reinforcement learning in games, robotics, and brain science.
Bayesian Optimization with Automatic Prior Selection for Data-Efficient Direct Policy Search
One of the most interesting features of Bayesian optimization for direct
policy search is that it can leverage priors (e.g., from simulation or from
previous tasks) to accelerate learning on a robot. In this paper, we are
interested in situations for which several priors exist but we do not know in
advance which one fits best the current situation. We tackle this problem by
introducing a novel acquisition function, called Most Likely Expected
Improvement (MLEI), that combines the likelihood of the priors and the expected
improvement. We evaluate this new acquisition function on a transfer learning
task for a 5-DOF planar arm and on a possibly damaged, 6-legged robot that has
to learn to walk on flat ground and on stairs, with priors corresponding to
different stairs and different kinds of damages. Our results show that MLEI
effectively identifies and exploits the priors, even when there is no obvious
match between the current situations and the priors.
Comment: Accepted at ICRA 2018; 8 pages, 4 figures, 1 algorithm; Video at https://youtu.be/xo8mUIZTvNE ; Spotlight ICRA presentation https://youtu.be/iiVaV-U6Kq
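The acquisition function described above combines two quantities: how well a prior explains the observations gathered so far, and the standard expected improvement under the model built on that prior. A simplified sketch of that combination is below; the exact weighting in the paper differs, so treat this as an illustration of the idea, not the authors' definition.

```python
import math

def expected_improvement(mu, sigma, best):
    """Standard expected improvement (maximization) from a posterior
    mean mu and standard deviation sigma at a candidate point."""
    if sigma <= 0.0:
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return (mu - best) * cdf + sigma * pdf

def mlei(mu, sigma, best, log_lik_of_prior):
    """Most Likely Expected Improvement (simplified sketch): weight EI
    by how well the prior explains the data observed so far, so the
    prior that best matches the current situation dominates the
    acquisition."""
    return math.exp(log_lik_of_prior) * expected_improvement(mu, sigma, best)
```

With several priors, each candidate point is scored under each prior's model and the prior/point pair with the highest weighted score is selected, which is how a mismatched prior is progressively ignored as evidence accumulates.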