Policy Gradients for CVaR-Constrained MDPs
We study a risk-constrained version of the stochastic shortest path (SSP)
problem, where the risk measure considered is Conditional Value-at-Risk (CVaR).
We propose two algorithms that obtain a locally risk-optimal policy by
employing four tools: stochastic approximation, mini-batches, policy
gradients, and importance sampling. Both algorithms incorporate a CVaR
estimation procedure along the lines of Bardou et al. [2009], which in turn
is based on the Rockafellar-Uryasev representation of CVaR, and use the
likelihood ratio principle to estimate the gradient of the expected sum of
one cost function (the objective of the SSP) and the gradient of the CVaR of
the sum of another cost function (the constraint of the SSP). The algorithms
differ in how they approximate the CVaR estimates and the necessary
gradients: the first uses stochastic approximation, while the second employs
mini-batches in the spirit of Monte Carlo methods. We establish asymptotic
convergence of both algorithms. Further, since estimating CVaR is related to
rare-event simulation, we incorporate an importance-sampling-based variance
reduction scheme into our proposed algorithms.
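
As a concrete illustration of the Rockafellar-Uryasev representation that
the estimation procedure builds on, here is a minimal stochastic-approximation
sketch in Python. It is not the paper's algorithm: `sample_cost` is a
hypothetical sampler standing in for one episode's cumulative constraint
cost, and the step sizes are illustrative.

```python
import numpy as np

def cvar_sa_estimate(sample_cost, alpha=0.95, n_iters=200_000, seed=0):
    """Estimate VaR/CVaR at level alpha by stochastic approximation, using
    the Rockafellar-Uryasev representation
        CVaR_alpha(X) = min_theta { theta + E[(X - theta)^+] / (1 - alpha) }.
    `sample_cost(rng)` is a hypothetical callable returning one i.i.d. draw
    of the cost X (e.g. the cumulative constraint cost of one SSP episode).
    """
    rng = np.random.default_rng(seed)
    theta, cvar = 0.0, 0.0   # running VaR and CVaR estimates
    for k in range(1, n_iters + 1):
        x = sample_cost(rng)
        gamma = 1.0 / k      # step sizes satisfying the usual SA conditions
        # d/dtheta of the RU objective is 1 - P(X >= theta) / (1 - alpha),
        # replaced here by its single-sample estimate.
        theta -= gamma * (1.0 - float(x >= theta) / (1.0 - alpha))
        # Averaging the RU objective along the iterates tracks CVaR_alpha(X).
        cvar += gamma * (theta + max(x - theta, 0.0) / (1.0 - alpha) - cvar)
    return theta, cvar

# Sanity check: for X ~ N(0, 1), VaR_0.95 ~= 1.645 and CVaR_0.95 ~= 2.06.
var_hat, cvar_hat = cvar_sa_estimate(lambda rng: rng.standard_normal())
```

The importance-sampling variant described in the abstract would replace the
raw draws of X with reweighted samples to reduce variance in the tail; that
refinement is omitted here.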
State Augmented Constrained Reinforcement Learning: Overcoming the Limitations of Learning with Rewards
Constrained reinforcement learning involves multiple rewards that must
individually accumulate to given thresholds. In this class of problems, we
exhibit a simple example in which the desired optimal policy cannot be
induced by any
linear combination of rewards. Hence, there exist constrained reinforcement
learning problems for which neither regularized nor classical primal-dual
methods yield optimal policies. This work addresses this shortcoming by
augmenting the state with Lagrange multipliers and reinterpreting primal-dual
methods as the portion of the dynamics that drives the evolution of the
multipliers.
This approach provides a systematic state augmentation procedure that is
guaranteed to solve reinforcement learning problems with constraints. Thus,
while primal-dual methods can fail at finding optimal policies, running the
dual dynamics while executing the augmented policy yields an algorithm that
provably samples actions from the optimal policy.
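
To make the proposed mechanism concrete, below is a minimal sketch of
running the dual dynamics while executing a state-augmented policy. All
interface names are illustrative assumptions, not the paper's API: `pi` is a
pre-trained policy over the augmented state (s, lambda), `env` follows the
Gymnasium-style reset/step convention, and per-constraint rewards are assumed
to arrive in the step's info dict.

```python
import numpy as np

def execute_with_dual_dynamics(pi, env, lambda0, thresholds, eta=0.05,
                               n_epochs=1000, horizon=200, seed=0):
    """Sketch: sample actions from the augmented policy pi(s, lam) while a
    projected dual-ascent step drives the Lagrange multipliers. Names such
    as pi and info["constraint_rewards"] are assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    lam = np.asarray(lambda0, dtype=float)
    thresholds = np.asarray(thresholds, dtype=float)
    for _ in range(n_epochs):
        s, _ = env.reset()
        cum = np.zeros_like(lam)   # constraint rewards accumulated this epoch
        for _ in range(horizon):
            a = pi(s, lam, rng)    # policy conditioned on the multipliers
            s, r, terminated, truncated, info = env.step(a)
            cum += info["constraint_rewards"]
            if terminated or truncated:
                break
        # Dual dynamics: raise lambda_i when the i-th accumulated reward
        # falls short of its threshold, then project back onto lambda >= 0.
        lam = np.maximum(0.0, lam + eta * (thresholds - cum))
    return lam
```

The point the abstract makes is that the multipliers are part of the
policy's input, so the dual iterates change the executed behavior online
rather than merely reweighting a fixed scalar reward.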