End-to-End Safe Reinforcement Learning through Barrier Functions for Safety-Critical Continuous Control Tasks
Reinforcement Learning (RL) algorithms have found limited success beyond
simulated applications, and one main reason is the absence of safety guarantees
during the learning process. Real-world systems would realistically fail or
break before an optimal controller can be learned. To address this issue, we
propose a controller architecture that combines (1) a model-free RL-based
controller with (2) model-based controllers utilizing control barrier functions
(CBFs) and (3) on-line learning of the unknown system dynamics, in order to
ensure safety during learning. Our general framework leverages the success of
RL algorithms to learn high-performance controllers, while the CBF-based
controllers both guarantee safety and guide the learning process by
constraining the set of explorable policies. We utilize Gaussian Processes (GPs)
to model the system dynamics and its uncertainties.
Our novel controller synthesis algorithm, RL-CBF, guarantees safety with high
probability during the learning process, regardless of the RL algorithm used,
and demonstrates greater policy exploration efficiency. We test our algorithm
on (1) control of an inverted pendulum and (2) autonomous car-following with
wireless vehicle-to-vehicle communication, and show that our algorithm attains
much greater sample efficiency in learning than other state-of-the-art
algorithms and maintains safety during the entire learning process.
Comment: Published in AAAI 2019.
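A useful way to picture the CBF component is as a minimum-intervention safety filter: the RL action is kept whenever it already satisfies the barrier condition, and is otherwise projected onto the nearest action that does. Below is a minimal sketch of such a filter for a control-affine system with a scalar input, not the paper's GP-based implementation; the dynamics f and g, the barrier h, its gradient, and the gain alpha are all assumed given, and the general multi-input case solves a small quadratic program instead of using the closed form.

```python
import numpy as np

def cbf_filter(u_rl, x, f, g, h, grad_h, alpha=1.0):
    """Minimally modify the RL action so the barrier condition
        dh/dt + alpha * h(x) >= 0
    holds for the control-affine dynamics xdot = f(x) + g(x) * u.
    Scalar-input sketch; the general case is a small QP."""
    Lf_h = grad_h(x) @ f(x)            # Lie derivative of h along f
    Lg_h = grad_h(x) @ g(x)            # Lie derivative of h along g
    residual = Lf_h + Lg_h * u_rl + alpha * h(x)
    if residual >= 0.0 or abs(Lg_h) < 1e-9:
        return u_rl                    # already safe, or u cannot affect h here
    # Closest safe action: shift u_rl just enough to zero the residual.
    return u_rl - residual / Lg_h

# Toy usage: keep a scalar state inside |x| <= 1 with h(x) = 1 - x^2.
f = lambda x: np.array([0.5 * x[0]])   # assumed (unstable) drift
g = lambda x: np.array([1.0])
h = lambda x: 1.0 - x[0] ** 2
grad_h = lambda x: np.array([-2.0 * x[0]])
u_safe = cbf_filter(u_rl=2.0, x=np.array([0.9]), f=f, g=g, h=h, grad_h=grad_h)
```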
On the Design of LQR Kernels for Efficient Controller Learning
Finding optimal feedback controllers for nonlinear dynamic systems from data
is hard. Recently, Bayesian optimization (BO) has been proposed as a powerful
framework for direct controller tuning from experimental trials. For selecting
the next query point and finding the global optimum, BO relies on a
probabilistic description of the latent objective function, typically a
Gaussian process (GP). As is shown herein, GPs with a common kernel choice can,
however, lead to poor learning outcomes on standard quadratic control problems.
For a first-order system, we construct two kernels that specifically leverage
the structure of the well-known Linear Quadratic Regulator (LQR), yet retain
the flexibility of Bayesian nonparametric learning. Simulations of uncertain
linear and nonlinear systems demonstrate that the LQR kernels yield superior
learning performance.
Comment: 8 pages, 5 figures, to appear in 56th IEEE Conference on Decision and Control (CDC 2017).
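To make the kernel idea concrete: one plausible construction in the spirit of the paper (a sketch, not its exact kernels) uses the closed-form quadratic cost of a nominal first-order model as a kernel feature, so the GP prior already has the shape of an LQR tuning objective, and adds a small squared-exponential term to retain the nonparametric flexibility the abstract mentions. The nominal values A, B, Q, R below are assumptions.

```python
import numpy as np

# Nominal first-order model x_{k+1} = A*x_k + B*u_k with feedback u_k = theta*x_k.
A, B, Q, R = 0.9, 1.0, 1.0, 0.1        # assumed nominal model and cost weights

def lqr_cost(theta, x0=1.0):
    """Infinite-horizon cost sum(Q*x_k^2 + R*u_k^2) of gain theta on the
    nominal model; effectively infinite if the closed loop is unstable."""
    a_cl = A + B * theta
    if abs(a_cl) >= 1.0:
        return 1e6
    return (Q + R * theta ** 2) * x0 ** 2 / (1.0 - a_cl ** 2)

def lqr_kernel(theta1, theta2, sigma2=1.0, ell=1.0, se_weight=0.1):
    """Structured kernel: a rank-one term built from the nominal LQR cost
    feature, plus a small squared-exponential term for model mismatch."""
    structured = sigma2 * lqr_cost(theta1) * lqr_cost(theta2)
    slack = se_weight * np.exp(-0.5 * (theta1 - theta2) ** 2 / ell ** 2)
    return structured + slack
```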
Deep Reinforcement Learning for Tensegrity Robot Locomotion
Tensegrity robots, composed of rigid rods connected by elastic cables, have a
number of unique properties that make them appealing for use as planetary
exploration rovers. However, control of tensegrity robots remains a difficult
problem due to their unusual structures and complex dynamics. In this work, we
show how locomotion gaits can be learned automatically using a novel extension
of mirror descent guided policy search (MDGPS) applied to periodic locomotion
movements, and we demonstrate the effectiveness of our approach on tensegrity
robot locomotion. We evaluate our method with real-world and simulated
experiments on the SUPERball tensegrity robot, showing that the learned
policies generalize to changes in system parameters, unreliable sensor
measurements, and variation in environmental conditions, including varied
terrains and a range of different gravities. Our experiments demonstrate that
our method not only learns fast, power-efficient feedback policies for rolling
gaits, but that these policies can succeed with only the limited onboard
sensing provided by SUPERball's accelerometers. We compare the learned feedback
policies to learned open-loop policies and hand-engineered controllers, and
demonstrate that the learned policy enables the first continuous, reliable
locomotion gait for the real SUPERball robot. Our code and other supplementary
materials are available from http://rll.berkeley.edu/drl_tensegrity
Comment: International Conference on Robotics and Automation (ICRA), 2017. Project website: http://rll.berkeley.edu/drl_tensegrity
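For orientation, MDGPS alternates between (i) improving time-varying local controllers on sampled trajectories under a KL trust region around the current global policy, and (ii) projecting the global policy onto those controllers with supervised learning. The sketch below shows only that alternation; rollout_fn, fit_local_fn, and the policy/trajectory interfaces are hypothetical placeholders, not the authors' SUPERball code.

```python
import numpy as np

def mdgps_iteration(policy, local_controllers, rollout_fn, fit_local_fn,
                    kl_step=1.0):
    """One simplified mirror descent guided policy search iteration."""
    # 1. Collect trajectories by running each local controller; for periodic
    #    locomotion these are fixed-length gait cycles.
    trajectories = [rollout_fn(ctrl) for ctrl in local_controllers]

    # 2. Improve each local controller under a KL trust region that keeps it
    #    close to the current global policy (the mirror descent step).
    local_controllers = [fit_local_fn(traj, policy, kl_step)
                         for traj in trajectories]

    # 3. Project: train the global policy with supervised learning on the
    #    actions the improved local controllers take along the sampled states.
    states = np.concatenate([traj.states for traj in trajectories])
    actions = np.concatenate([ctrl.actions_for(traj.states)
                              for ctrl, traj in zip(local_controllers,
                                                    trajectories)])
    policy.fit(states, actions)
    return policy, local_controllers
```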
Control Regularization for Reduced Variance Reinforcement Learning
Dealing with high variance is a significant challenge in model-free
reinforcement learning (RL). Existing methods are unreliable, exhibiting high
variance in performance from run to run with different initializations/seeds.
Focusing on problems arising in continuous control, we propose a functional
regularization approach to augmenting model-free RL. In particular, we
regularize the behavior of the deep policy to be similar to a policy prior,
i.e., we regularize in function space. We show that functional regularization
yields a bias-variance trade-off, and propose an adaptive tuning strategy to
optimize this trade-off. When the policy prior has control-theoretic stability
guarantees, we further show that this regularization approximately preserves
those stability guarantees throughout learning. We validate our approach
empirically on a range of settings, and demonstrate significantly reduced
variance, guaranteed dynamic stability, and more efficient learning than deep
RL alone.
Comment: Appearing in ICML 2019.
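One standard way to realize this functional regularization, consistent with the blended form used in the paper, is to mix the learned action with the prior's action at every state; the weight lam sets the bias-variance trade-off (the paper tunes it adaptively, while this minimal sketch assumes it fixed). The PD prior in the usage example is an assumption.

```python
import numpy as np

def blended_action(u_rl, u_prior, lam=1.0):
    """Regularize in function space by pulling the learned action toward
    the control prior: larger lam trusts the prior more (more bias, less
    variance), while lam -> 0 recovers unregularized deep RL."""
    return (np.asarray(u_rl) + lam * np.asarray(u_prior)) / (1.0 + lam)

# Example: a hand-designed PD prior regularizing a raw deep-policy action.
x, x_dot = 0.3, -0.1
u_prior = -2.0 * x - 0.5 * x_dot   # assumed stabilizing PD controller
u_rl = 1.2                         # raw action proposed by the deep policy
u = blended_action(u_rl, u_prior, lam=2.0)
```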