Control Regularization for Reduced Variance Reinforcement Learning
Dealing with high variance is a significant challenge in model-free
reinforcement learning (RL). Existing methods are unreliable, exhibiting high
variance in performance from run to run with different initializations/seeds.
Focusing on problems arising in continuous control, we propose a functional
regularization approach to augmenting model-free RL. In particular, we
regularize the behavior of the deep policy to be similar to a policy prior,
i.e., we regularize in function space. We show that functional regularization
yields a bias-variance trade-off, and propose an adaptive tuning strategy to
optimize this trade-off. When the policy prior has control-theoretic stability
guarantees, we further show that this regularization approximately preserves
those stability guarantees throughout learning. We validate our approach
empirically on a range of settings, and demonstrate significantly reduced
variance, guaranteed dynamic stability, and more efficient learning than deep
RL alone.
Comment: Appearing in ICML 2019
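The mechanism is easier to see in code. Below is a minimal sketch of the blending form of functional regularization, assuming hypothetical `policy` and `prior` callables and a fixed weight `lam`; the paper's adaptive tuning of this weight is omitted:

```python
import numpy as np

def regularized_action(policy, prior, state, lam):
    """Blend the deep policy's action with a control prior's action.

    lam realizes the bias-variance trade-off: lam -> 0 recovers pure
    model-free RL, while large lam pins behavior to the (possibly
    stability-certified) prior at the cost of bias.
    """
    a_rl = policy(state)    # action proposed by the learned policy
    a_prior = prior(state)  # action from the control prior
    return (a_rl + lam * a_prior) / (1.0 + lam)

# Toy usage with stand-in callables (illustrative only).
policy = lambda s: 2.0 * np.tanh(s)  # pretend deep RL policy
prior = lambda s: -0.5 * s           # pretend stabilizing control prior
print(regularized_action(policy, prior, np.array([0.3, -1.2]), lam=2.0))
```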
Safety-guided deep reinforcement learning via online Gaussian process estimation
An important facet of reinforcement learning (RL) is how the agent explores the environment. Traditional exploration strategies typically focus on efficiency and ignore safety. However, for practical applications, ensuring the agent's safety during exploration is crucial, since performing an unsafe action or reaching an unsafe state could result in irreversible damage to the agent. The main challenge of safe exploration is that characterizing the unsafe states and actions is difficult for large continuous state or action spaces and unknown environments. In this paper, we propose a novel approach that incorporates estimates of safety to guide exploration and policy search in deep reinforcement learning. Using a cost function to capture trajectory-based safety, our key idea is to formulate the state-action value function of this safety cost as a candidate Lyapunov function and to extend control-theoretic results to approximate its derivative using online Gaussian Process (GP) estimation. We show how to use these statistical models to guide the agent in unknown environments to obtain high-performance control policies with provable stability certificates.
Accepted manuscript
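As a rough illustration of the estimation step, the sketch below regresses the one-step change of the safety value on state-action features with an online GP and treats an action as safe only when that change is confidently negative. The class name, features, and confidence rule are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

class SafetyCritic:
    """Online GP model of the change in a trajectory-based safety value.

    Treating the safety cost's state-action value Q_c as a candidate
    Lyapunov function, we regress its observed one-step change dQ_c on
    (state, action) pairs; an action is deemed safe to explore only if
    the predicted change is confidently negative (value decreasing).
    """

    def __init__(self, beta=2.0):
        self.gp = GaussianProcessRegressor(
            kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1))
        self.beta = beta          # confidence multiplier on the GP std
        self.X, self.y = [], []

    def update(self, state, action, dq_c):
        """Record an observed one-step change in the safety value.

        Refitting from scratch each step is a naive stand-in for a
        proper incremental/sparse online GP update.
        """
        self.X.append(np.concatenate([state, action]))
        self.y.append(dq_c)
        self.gp.fit(np.array(self.X), np.array(self.y))

    def is_safe(self, state, action):
        """Conservative check: upper confidence bound on dQ_c must be < 0."""
        if not self.X:
            return False          # no data yet: assume unsafe
        x = np.concatenate([state, action]).reshape(1, -1)
        mean, std = self.gp.predict(x, return_std=True)
        return mean[0] + self.beta * std[0] < 0.0
```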
Adding Neural Network Controllers to Behavior Trees without Destroying Performance Guarantees
In this paper, we show how Behavior Trees that have performance guarantees,
in terms of safety and goal convergence, can be extended with components that
were designed using machine learning, without destroying those performance
guarantees.
Machine learning approaches such as reinforcement learning or learning from
demonstration can be very appealing to AI designers who want efficient and
realistic behaviors in their agents. However, those algorithms seldom provide
guarantees for solving the given task in all situations while keeping the
agent safe. Instead, such guarantees are often easier to establish for manually
designed, model-based approaches. In this paper we exploit the modularity of
Behavior Trees to extend a given design with an efficient, but possibly
unreliable, machine learning component in a way that preserves the guarantees.
The approach is illustrated with an inverted pendulum example.
Comment: Submitted to IEEE Transactions on Games
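The preservation argument can be made concrete with a minimal Behavior Tree sketch: a Fallback (selector) node ticks the learned controller only inside a region where it is trusted, and a manually designed controller with known guarantees handles everything else. All names and the pendulum controllers below are illustrative stand-ins, not the paper's construction:

```python
class Action:
    """Leaf node: a controller paired with an explicit applicability check."""

    def __init__(self, name, can_run, run):
        self.name, self.can_run, self.run = name, can_run, run

    def tick(self, state):
        # Return a control value on success, or None to signal failure
        # so the parent node can try the next child.
        return self.run(state) if self.can_run(state) else None


class Fallback:
    """Selector node: tick children in order, return the first success."""

    def __init__(self, children):
        self.children = children

    def tick(self, state):
        for child in self.children:
            out = child.tick(state)
            if out is not None:
                return out
        return None


# Stand-in inverted-pendulum controllers (illustrative only).
nn_policy = lambda s: -12.0 * s["theta"] - 2.0 * s["dtheta"]   # "learned" policy
pd_control = lambda s: -4.0 * s["theta"] - 1.0 * s["dtheta"]   # guaranteed baseline

learned = Action("nn_policy",
                 can_run=lambda s: abs(s["theta"]) < 0.2,  # region where the NN is trusted
                 run=nn_policy)
baseline = Action("baseline", can_run=lambda s: True, run=pd_control)
controller = Fallback([learned, baseline])

print(controller.tick({"theta": 0.1, "dtheta": 0.0}))   # uses the learned node
print(controller.tick({"theta": 1.5, "dtheta": 0.0}))   # falls back to baseline
```

Because the baseline child is always applicable, the composed tree retains the baseline's guarantees whenever the learned node declines or fails, which is the modularity the abstract exploits; the paper's actual conditions are more careful than this sketch.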