As safety is of paramount importance in robotics, reinforcement learning that explicitly accounts for safety, called safe RL, has been studied extensively. In safe RL, we aim to find a policy that maximizes the expected return while satisfying the
defined safety constraints. There are various types of constraints, among which constraints on the conditional value at risk (CVaR) effectively lower the probability of failures caused by high costs, since CVaR is the conditional expectation of the cost above a certain percentile of its distribution. In this paper, we propose a trust region-based safe RL method with CVaR constraints, called TRC. We first derive an upper bound on CVaR and then approximate this bound in a differentiable form within a trust region. Using this approximation, we formulate a subproblem for computing policy gradients and train policies by iteratively solving the subproblem. TRC is evaluated on safe navigation tasks in
simulations with various robots and a sim-to-real environment with a Jackal
robot from Clearpath. Compared to other safe RL methods, TRC improves performance by a factor of 1.93 while satisfying the constraints in all experiments.
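
To make the CVaR notion used above concrete, the following minimal Python sketch estimates CVaR empirically from sampled episode costs. It is illustrative only and not taken from the paper; the alpha convention (CVaR_alpha as the mean of the worst alpha-fraction of costs, i.e., the conditional expectation above the (1 - alpha)-quantile) and the lognormal cost samples are assumptions made for this example.

```python
# Illustrative sketch (not from the paper): empirical CVaR of sampled costs.
import numpy as np

def empirical_cvar(costs: np.ndarray, alpha: float = 0.1) -> float:
    """Mean of the worst alpha-fraction of sampled costs."""
    var = np.quantile(costs, 1.0 - alpha)   # value at risk: the (1 - alpha)-percentile
    tail = costs[costs >= var]              # costs in the upper tail of the distribution
    return float(tail.mean())               # conditional expectation above the percentile

# Example with heavy-tailed episode costs (assumed distribution for illustration).
rng = np.random.default_rng(0)
costs = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print("mean cost :", costs.mean())
print("CVaR_0.1  :", empirical_cvar(costs, alpha=0.1))  # noticeably larger than the mean
```

Because the tail mean is much larger than the overall mean for heavy-tailed costs, bounding CVaR is a stricter requirement than bounding the expected cost, which is why it lowers the probability of high-cost failures.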
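
The abstract also describes training as repeatedly solving a trust-region subproblem for the policy update. As a hedged illustration of what such a subproblem can look like in general, the sketch below solves a toy instance with cvxpy: a linearized return objective is maximized inside a quadratic trust region while a linearized surrogate of a CVaR-style constraint is kept below a budget. This is not the paper's actual derivation, and all symbols (g, H, b, c, delta) are placeholders invented for the example.

```python
# Generic trust-region subproblem sketch (assumed structure, not the paper's method).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 5                                    # toy policy-parameter dimension
g = rng.normal(size=n)                   # gradient of the return objective (placeholder)
A = rng.normal(size=(n, n))
H = A @ A.T + np.eye(n)                  # positive-definite trust-region metric (placeholder)
b = rng.normal(size=n)                   # gradient of the linearized constraint surrogate (placeholder)
c = 0.1                                  # allowed slack of the constraint surrogate
delta = 0.05                             # trust-region radius

x = cp.Variable(n)                       # policy-parameter update direction
objective = cp.Maximize(g @ x)           # first-order model of the return
constraints = [
    0.5 * cp.quad_form(x, H) <= delta,   # KL-style quadratic trust region
    b @ x <= c,                          # linearized CVaR-style constraint
]
cp.Problem(objective, constraints).solve()
print("update step:", x.value)
```

In an iterative scheme of this kind, the models (g, H, b, c) would be re-estimated from fresh rollouts after each update and the subproblem solved again, which mirrors the "train by iteratively solving the subproblem" description in the abstract.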