Understanding Model-Based Reinforcement Learning and its Application in Safe Reinforcement Learning
Model-based reinforcement learning algorithms have been shown to achieve successful results on various continuous control benchmarks, but the understanding of model-based methods remains limited. We interpret how model-based methods work through novel experiments on state-of-the-art algorithms, with an emphasis on the model-learning component. We evaluate the role of model learning in policy optimization and propose methods to learn a more accurate model. With a better understanding of model-based reinforcement learning, we then apply model-based methods to solve safe reinforcement learning (RL) problems with near-zero violation of hard constraints throughout training. Drawing an analogy with how humans and animals learn to perform safe actions, we break the safe RL problem into three stages. First, we train agents in a constraint-free environment to learn a performant policy for reaching high rewards, and simultaneously learn a model of the dynamics. Second, we use model-based methods to plan safe actions and train a safeguarding policy from these actions through imitation. Finally, we propose a factored framework to train an overall policy that mixes the performant policy and the safeguarding policy. This three-step curriculum ensures near-zero violation of safety constraints at all times. As an advantage of model-based methods, the sample complexity required at the second and third steps is significantly lower than that of model-free methods, which can enable online safe learning. We demonstrate the effectiveness of our methods on various continuous control problems and analyze the advantages over state-of-the-art approaches.
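To make the third stage concrete, here is a minimal Python sketch of one way such a policy mix could work: roll out the learned dynamics model from the current state and fall back to the safeguarding policy whenever a constraint violation is predicted. All names (dynamics_model, constraint_cost, perf_policy, safeguard_policy) and the gating rule are hypothetical illustrations, not the authors' implementation.

    # Hedged sketch of a stage-3 policy mix: prefer the performant action
    # unless the learned model predicts a near-term constraint violation.
    # All names and the gating rule are illustrative assumptions.
    import numpy as np

    def mixed_action(state, perf_policy, safeguard_policy, dynamics_model,
                     constraint_cost, horizon=5, threshold=0.0):
        """Return the performant action unless the model rollout predicts a
        violation within `horizon` steps; then use the safeguarding policy."""
        a_perf = perf_policy(state)
        s, a = state, a_perf
        for _ in range(horizon):
            s = dynamics_model(s, a)             # model rollout, no real-env steps
            if constraint_cost(s) > threshold:   # predicted violation
                return safeguard_policy(state)
            a = perf_policy(s)
        return a_perf

    # Toy demo: 1-D point mass with constraint x <= 1.
    perf = lambda s: 0.5                  # performant policy pushes right
    safe = lambda s: -0.5                 # safeguarding policy backs off
    model = lambda s, a: s + 0.1 * a      # learned (here: exact) dynamics
    cost = lambda s: s - 1.0              # positive once x > 1
    print(mixed_action(np.array(0.9), perf, safe, model, cost))  # -> -0.5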
Safety-guided deep reinforcement learning via online gaussian process estimation
An important facet of reinforcement learning (RL) is how the agent explores the environment. Traditional exploration strategies typically focus on efficiency and ignore safety. However, for practical applications, ensuring the safety of the agent during exploration is crucial, since performing an unsafe action or reaching an unsafe state could result in irreversible damage to the agent. The main challenge of safe exploration is that characterizing the unsafe states and actions is difficult for large continuous state or action spaces and unknown environments. In this paper, we propose a novel approach that incorporates estimates of safety to guide exploration and policy search in deep reinforcement learning. Using a cost function to capture trajectory-based safety, our key idea is to formulate the state-action value function of this safety cost as a candidate Lyapunov function and to extend control-theoretic results to approximate its derivative using online Gaussian Process (GP) estimation. We show how to use these statistical models to guide the agent in unknown environments to obtain high-performance control policies with provable stability certificates.
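As an illustration of the key idea, the sketch below fits a GP to observed safety costs over state-action features and uses a confidence bound on the estimated one-step change of this candidate Lyapunov function to decide whether an exploratory step is admissible. The synthetic data, the kappa margin, and the decrease test are assumptions for illustration, not the paper's exact construction.

    # Hedged sketch: GP estimate of a safety-cost Q used as a candidate
    # Lyapunov function, with an approximate decrease check gating exploration.
    # Synthetic data and the kappa margin are illustrative assumptions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 3))                    # (state, action) features
    y = np.sum(X**2, axis=1) + 0.05 * rng.normal(size=200)   # observed safety cost

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2).fit(X, y)

    def safe_to_explore(sa, sa_next, kappa=2.0):
        """Approximate Lyapunov decrease check: require the upper confidence
        bound of L(s', a') - L(s, a) to be non-positive."""
        mu, std = gp.predict(np.vstack([sa, sa_next]), return_std=True)
        delta_ucb = (mu[1] - mu[0]) + kappa * (std[0] + std[1])
        return delta_ucb <= 0.0

    # A step toward the low-cost region should pass the check.
    print(safe_to_explore(np.array([0.8, 0.8, 0.5]), np.array([0.4, 0.4, 0.1])))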
End-to-End Safe Reinforcement Learning through Barrier Functions for Safety-Critical Continuous Control Tasks
Reinforcement Learning (RL) algorithms have found limited success beyond simulated applications, and one main reason is the absence of safety guarantees during the learning process. Real-world systems would realistically fail or break before an optimal controller can be learned. To address this issue, we propose a controller architecture that combines (1) a model-free RL-based controller with (2) model-based controllers utilizing control barrier functions (CBFs) and (3) online learning of the unknown system dynamics, in order to ensure safety during learning. Our general framework leverages the success of RL algorithms to learn high-performance controllers, while the CBF-based controllers both guarantee safety and guide the learning process by constraining the set of explorable policies. We utilize Gaussian Processes (GPs) to model the system dynamics and its uncertainties. Our novel controller synthesis algorithm, RL-CBF, guarantees safety with high probability during the learning process, regardless of the RL algorithm used, and demonstrates greater policy exploration efficiency. We test our algorithm on (1) control of an inverted pendulum and (2) autonomous car-following with wireless vehicle-to-vehicle communication, and show that our algorithm attains much greater sample efficiency in learning than other state-of-the-art algorithms and maintains safety during the entire learning process. (Published at AAAI 2019.)
Iterative Reachability Estimation for Safe Reinforcement Learning
Ensuring safety is important for the practical deployment of reinforcement learning (RL). Various challenges must be addressed, such as handling stochasticity in the environments, providing rigorous guarantees of persistent state-wise safety satisfaction, and avoiding overly conservative behaviors that sacrifice performance. We propose a new framework, Reachability Estimation for Safe Policy Optimization (RESPO), for safety-constrained RL in general stochastic settings. In the feasible set, where violation-free policies exist, we optimize for rewards while maintaining persistent safety. Outside this feasible set, our optimization produces the safest behavior by guaranteeing entrance into the feasible set whenever possible with the least cumulative discounted violations. We introduce a class of algorithms using our novel reachability estimation function to optimize in our proposed framework and in similar frameworks, such as those concurrently handling multiple hard and soft constraints. We theoretically establish that our algorithms almost surely converge to locally optimal policies of our safe optimization framework. We evaluate the proposed methods on a diverse suite of safe RL environments from Safety Gym, PyBullet, and MuJoCo, and show the benefits in improving both reward performance and safety compared with state-of-the-art baselines. (Accepted at NeurIPS 2023.)