State-Augmentation Transformations for Risk-Sensitive Reinforcement Learning
In the MDP framework, the general reward function takes three arguments (current
state, action, and successor state), but it is often simplified to a function of
two arguments (current state and action). The former is called a transition-based
reward function, whereas the latter is called a state-based reward function. When
the objective involves only the expected cumulative reward, this simplification
works perfectly. However, when the objective is risk-sensitive, it leads to an
incorrect value. We present
state-augmentation transformations (SATs), which preserve the reward sequences
as well as the reward distributions and the optimal policy in risk-sensitive
reinforcement learning. In risk-sensitive scenarios, we first prove that, for
every MDP with a stochastic transition-based reward function, there exists an
MDP with a deterministic state-based reward function, such that for any given
(randomized) policy for the first MDP, there exists a corresponding policy for
the second MDP, such that both Markov reward processes share the same reward
sequence. Second, using an inventory control problem, we illustrate two
situations that require the proposed SATs: applying Q-learning (or other
learning methods) to MDPs with transition-based reward functions, and applying
methods designed for Markov processes with deterministic state-based reward
functions to Markov processes with general reward functions. We show the
advantage of the SATs by considering Value-at-Risk as an
example, which is a risk measure on the reward distribution instead of the
measures (such as mean and variance) of the distribution. We illustrate the
error in the reward distribution estimation from the direct use of Q-learning,
and show how the SATs enable a variance formula to work on Markov processes
with general reward functions.
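As a rough illustration of one such transformation (a sketch of the idea, not the paper's exact construction), the realized reward can be folded into the observation so that the transformed process has a deterministic state-based reward. The sketch below assumes a Gymnasium-style environment with flat array observations; the wrapper name is hypothetical.

import numpy as np
import gymnasium as gym

class AugmentedRewardWrapper(gym.Wrapper):
    """Append the last realized reward to the observation, so the reward of the
    augmented process is a deterministic function of the (augmented) state.
    (The observation_space is left untouched here for brevity.)"""

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        return np.append(obs, 0.0), info   # no reward realized yet

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        aug_obs = np.append(obs, reward)    # carry the realized reward in the state
        det_reward = float(aug_obs[-1])     # deterministic state-based reward
        return aug_obs, det_reward, terminated, truncated, info

# Illustrative usage: env = AugmentedRewardWrapper(gym.make("CartPole-v1"))

By construction the reward sequence of the wrapped process equals that of the original process, which is the property the SATs are designed to preserve.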
Probabilistically Safe Policy Transfer
Although learning-based methods have great potential for robotics, one
concern is that a robot that updates its parameters might cause large amounts
of damage before it learns the optimal policy. We formalize the idea of safe
learning in a probabilistic sense by defining an optimization problem: we
desire to maximize the expected return while keeping the expected damage below
a given safety limit. We study this optimization for the case of a robot
manipulator with safety-based torque limits. We would like to ensure that the
damage constraint is maintained at every step of the optimization and not just
at convergence. To achieve this aim, we introduce a novel method which predicts
how modifying the torque limit, as well as how updating the policy parameters,
might affect the robot's safety. We show through a number of experiments that
our approach allows the robot to improve its performance while ensuring that
the expected damage constraint is not violated during the learning process.
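In symbols, the safe-learning problem described above is a constrained optimization (the notation here is ours, not taken from the paper):

\max_{\theta} \; \mathbb{E}\big[ R(\theta) \big] \quad \text{subject to} \quad \mathbb{E}\big[ D(\theta) \big] \le d_{\max},

where R(\theta) is the return, D(\theta) the damage incurred under policy parameters \theta, and d_{\max} the given safety limit.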
Budgeted Reinforcement Learning in Continuous State Space
A Budgeted Markov Decision Process (BMDP) is an extension of a Markov
Decision Process to critical applications requiring safety constraints. It
relies on a notion of risk implemented as a cost signal constrained to lie
below an adjustable threshold. So far, BMDPs could only
be solved in the case of finite state spaces with known dynamics. This work
extends the state of the art to continuous-state environments and unknown
dynamics. We show that the solution to a BMDP is a fixed point of a novel
Budgeted Bellman Optimality operator. This observation allows us to introduce
natural extensions of Deep Reinforcement Learning algorithms to address
large-scale BMDPs. We validate our approach on two simulated applications:
spoken dialogue and autonomous driving.
Comment: N. Carrara and E. Leurent contributed equally.
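Schematically (our notation, not necessarily the paper's), a budgeted policy \pi(a \mid s, \beta) must solve

\max_{\pi} \; \mathbb{E}_{\pi}\Big[\textstyle\sum_{t} \gamma^{t} r_{t}\Big] \quad \text{subject to} \quad \mathbb{E}_{\pi}\Big[\textstyle\sum_{t} \gamma^{t} c_{t}\Big] \le \beta,

where r_t is the reward, c_t the cost signal encoding risk, and \beta the adjustable budget; the Budgeted Bellman Optimality operator mentioned above characterizes the fixed point of this problem.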
Control Regularization for Reduced Variance Reinforcement Learning
Dealing with high variance is a significant challenge in model-free
reinforcement learning (RL). Existing methods are unreliable, exhibiting high
variance in performance from run to run using different initializations/seeds.
Focusing on problems arising in continuous control, we propose a functional
regularization approach to augmenting model-free RL. In particular, we
regularize the behavior of the deep policy to be similar to a policy prior,
i.e., we regularize in function space. We show that functional regularization
yields a bias-variance trade-off, and propose an adaptive tuning strategy to
optimize this trade-off. When the policy prior has control-theoretic stability
guarantees, we further show that this regularization approximately preserves
those stability guarantees throughout learning. We validate our approach
empirically on a range of settings, and demonstrate significantly reduced
variance, guaranteed dynamic stability, and more efficient learning than deep
RL alone.
Comment: Appearing in ICML 2019.
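One plausible way to realize this functional regularization (a sketch under our own assumptions, not necessarily the paper's exact scheme) is to blend the deep policy's action with the control prior's action, with a mixing weight that trades bias against variance:

import numpy as np

def regularized_action(policy, control_prior, state, lam=1.0):
    """Convex combination of the learned policy and a control-theoretic prior.

    Larger lam pulls behavior toward the prior (more bias, less variance);
    lam = 0 recovers the unregularized deep RL policy.
    """
    u_rl = policy(state)            # action proposed by the deep RL policy
    u_prior = control_prior(state)  # action proposed by the stabilizing prior
    return (u_rl + lam * u_prior) / (1.0 + lam)

# Purely illustrative usage with toy linear controllers:
policy = lambda s: np.array([0.3 * s[0]])
control_prior = lambda s: np.array([-1.0 * s[0]])
print(regularized_action(policy, control_prior, np.array([0.5]), lam=2.0))

An adaptive strategy could adjust lam during training, in the spirit of the adaptive tuning of the bias-variance trade-off mentioned above.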