Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning
Reinforcement learning (RL) algorithms for real-world robotic applications
need a data-efficient learning process and the ability to handle complex,
unknown dynamical systems. These requirements are handled well by model-based
and model-free RL approaches, respectively. In this work, we aim to combine the
advantages of these two types of methods in a principled manner. By focusing on
time-varying linear-Gaussian policies, we enable a model-based algorithm based
on the linear quadratic regulator (LQR) that can be integrated into the
model-free framework of path integral policy improvement (PI2). We can further
combine our method with guided policy search (GPS) to train arbitrary
parameterized policies such as deep neural networks. Our simulation and
real-world experiments demonstrate that this method can solve challenging
manipulation tasks with comparable or better performance than model-free
methods while maintaining the sample efficiency of model-based methods. A video
presenting our results is available at
https://sites.google.com/site/icml17pilqr
Comment: Paper accepted to the International Conference on Machine Learning (ICML) 2017
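For intuition, here is a loose, dependency-free sketch of the blending idea: a model-free PI2-style reweighting of sampled controls combined with a model-based LQR proposal. The names (pi2_update, pilqr_step, alpha) and the simple convex blend are illustrative assumptions, not the authors' actual update rule, which applies PI2 to the cost residual left after the LQR step.

```python
import numpy as np

def pi2_update(samples, costs, temperature=1.0):
    """Model-free PI2-style step: reweight sampled controls by the
    softmax of their negative costs (lower cost -> higher weight)."""
    costs = np.asarray(costs, dtype=float)
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    return (w[:, None] * np.asarray(samples, dtype=float)).sum(axis=0)

def pilqr_step(k_lqr, samples, costs, alpha=0.5):
    """Blend the model-based (LQR) control proposal with the model-free
    (PI2) one; alpha in [0, 1] encodes trust in the learned model."""
    return alpha * np.asarray(k_lqr) + (1.0 - alpha) * pi2_update(samples, costs)
```

With alpha near 1 the update trusts the fitted linear-Gaussian model; with alpha near 0 it falls back to purely sample-based improvement.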
Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics
The most data-efficient algorithms for reinforcement learning in robotics are
model-based policy search algorithms, which alternate between learning a
dynamical model of the robot and optimizing a policy to maximize the expected
return given the model and its uncertainties. Among the few proposed
approaches, the recently introduced Black-DROPS algorithm exploits a black-box
optimization algorithm to achieve both high data-efficiency and good
computation times when several cores are used; nevertheless, like all
model-based policy search approaches, Black-DROPS does not scale to high
dimensional state/action spaces. In this paper, we introduce a new model
learning procedure in Black-DROPS that leverages parameterized black-box priors
to (1) scale up to high-dimensional systems, and (2) be robust to large
inaccuracies of the prior information. We demonstrate the effectiveness of our
approach with the "pendubot" swing-up task in simulation and with a physical
hexapod robot (48D state space, 18D action space) that has to walk forward as
fast as possible. The results show that our new algorithm is more
data-efficient than previous model-based policy search algorithms (with and
without priors) and that it can allow a physical 6-legged robot to learn new
gaits in only 16 to 30 seconds of interaction time.
Comment: Accepted at ICRA 2018; 8 pages, 4 figures, 2 algorithms, 1 table;
Video at https://youtu.be/HFkZkhGGzTo ; Spotlight ICRA presentation at
https://youtu.be/_MZYDhfWeL
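As a rough illustration of the "parameterized black-box prior" idea, the sketch below wraps a tunable simulator in a model that learns a residual correction from data. In the paper the residual model is a Gaussian process with the prior as its mean function; the nearest-neighbour correction here is a dependency-free stand-in, and prior_sim and theta are hypothetical placeholders.

```python
import numpy as np

class PriorPlusResidual:
    """Dynamics model = parameterized black-box prior + learned residual."""

    def __init__(self, prior_sim, theta):
        self.prior_sim = prior_sim   # black-box prior: f(x, u, theta) -> x_next
        self.theta = theta           # tunable prior parameters
        self.X, self.R = [], []      # observed (state, action) -> residuals

    def fit(self, states, actions, next_states):
        for x, u, xn in zip(states, actions, next_states):
            pred = self.prior_sim(x, u, self.theta)
            self.X.append(np.concatenate([x, u]))
            self.R.append(xn - pred)  # learn what the prior gets wrong

    def predict(self, x, u):
        pred = self.prior_sim(x, u, self.theta)
        if not self.X:
            return pred
        q = np.concatenate([x, u])
        i = np.argmin([np.linalg.norm(q - xi) for xi in self.X])
        return pred + self.R[i]       # prior prediction + learned correction
```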
Fast Model Identification via Physics Engines for Data-Efficient Policy Search
This paper presents a method for identifying mechanical parameters of robots
or objects, such as their mass and friction coefficients. Key features are the
use of off-the-shelf physics engines and the adaptation of a Bayesian
optimization technique towards minimizing the number of real-world experiments
needed for model-based reinforcement learning. The proposed framework
reproduces in a physics engine experiments performed on a real robot and
optimizes the model's mechanical parameters so as to match real-world
trajectories. The optimized model is then used for learning a policy in
simulation, before real-world deployment. It is well understood, however, that
it is hard to exactly reproduce real trajectories in simulation. Moreover, a
near-optimal policy can be frequently found with an imperfect model. Therefore,
this work proposes a strategy for identifying a model that is just good enough
to approximate the value of a locally optimal policy with a certain confidence,
instead of wasting effort on identifying the most accurate model. Evaluations,
performed both in simulation and on a real robotic manipulation task, indicate
that the proposed strategy results in an overall time-efficient, integrated
model identification and learning solution, which significantly improves the
data-efficiency of existing policy search algorithms.
Comment: IJCAI 1
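A minimal sketch of the identification loop described above, with plain random search standing in for the Bayesian optimization the paper uses, and simulate standing in for an off-the-shelf physics engine (both are assumptions, not the authors' code):

```python
import numpy as np

def identify(simulate, real_traj, actions, param_bounds, n_iters=50, rng=None):
    """Find simulator parameters whose replayed trajectory best matches
    the trajectory recorded on the real robot."""
    rng = rng or np.random.default_rng(0)
    low, high = param_bounds
    best_params, best_err = None, np.inf
    for _ in range(n_iters):
        params = rng.uniform(low, high)       # candidate mass, friction, ...
        sim_traj = simulate(params, actions)  # replay real actions in sim
        err = np.linalg.norm(np.asarray(sim_traj) - np.asarray(real_traj))
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err
```

The paper's key refinement is the stopping rule: the loop would terminate once the model is accurate enough to estimate the value of a locally optimal policy with the desired confidence, rather than running until the discrepancy is minimized.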
Combining Physical Simulators and Object-Based Networks for Control
Physics engines play an important role in robot planning and control;
however, many real-world control problems involve complex contact dynamics that
cannot be characterized analytically. Most physics engines therefore employ
approximations that lead to a loss in precision. In this paper, we propose a
hybrid dynamics model, simulator-augmented interaction networks (SAIN),
combining a physics engine with an object-based neural network for dynamics
modeling. Compared with existing models that are purely analytical or purely
data-driven, our hybrid model captures the dynamics of interacting objects in a
more accurate and data-efficient manner. Experiments both in simulation and on a
real robot suggest that it also leads to better performance when used in
complex control tasks. Finally, we show that our model generalizes to novel
environments with varying object shapes and materials.
Comment: ICRA 2019; Project page: http://sain.csail.mit.ed
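Schematically, the hybrid step composes the two models as below; engine_step and learned_correction are hypothetical callables (the paper instantiates the latter as an object-based interaction network), so this is a sketch of the composition, not SAIN itself.

```python
def hybrid_step(engine_step, learned_correction, object_states, controls):
    """One step of a hybrid dynamics model: analytical prediction from a
    physics engine, refined by a learned object-centric correction."""
    coarse = engine_step(object_states, controls)  # analytical prediction
    # The learned module sees the engine's output and the raw inputs, and
    # returns per-object corrections absorbing unmodeled contact effects.
    deltas = learned_correction(object_states, controls, coarse)
    return [s + d for s, d in zip(coarse, deltas)]
```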
Is the Bellman residual a bad proxy?
This paper aims at theoretically and empirically comparing two standard
optimization criteria for Reinforcement Learning: i) maximization of the mean
value and ii) minimization of the Bellman residual. For that purpose, we place
ourselves in the framework of policy search algorithms, which are usually
designed to maximize the mean value, and derive a method that minimizes the
residual over policies. A theoretical analysis
shows how good this proxy is for policy optimization, and notably that it is
better than its value-based counterpart. We also propose experiments on
randomly generated generic Markov decision processes, specifically designed for
studying the influence of the involved concentrability coefficient. They show
that the Bellman residual is generally a bad proxy for policy optimization and
that directly maximizing the mean value is much better, despite the current
lack of deep theoretical analysis. This might seem obvious, as directly
addressing the problem of interest is usually better, but given the prevalence
of (projected) Bellman residual minimization in value-based reinforcement
learning, we believe this question is worth considering.
Comment: Final NIPS 2017 version (title, among other things, changed)
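In standard notation (assumed here; the paper's exact norms and distributions may differ), the two criteria being compared can be written as:

```latex
% (i) mean value to maximize; (ii) Bellman residual to minimize,
% with T_* the Bellman optimality operator and \nu a state distribution.
J(\pi) = \mathbb{E}_{s \sim \nu}\!\left[v_\pi(s)\right]
\qquad \text{vs.} \qquad
\mathcal{R}(\pi) = \bigl\| T_* v_\pi - v_\pi \bigr\|_{1,\nu},
\quad \text{where } (T_* v)(s) = \max_{a}\Bigl[r(s,a)
  + \gamma\, \mathbb{E}_{s' \sim P(\cdot\,|\,s,a)}\bigl[v(s')\bigr]\Bigr].
```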