An important problem in reinforcement learning is designing agents that learn
to solve tasks safely in an environment. A common solution is for a human
expert to define either a penalty in the reward function or a cost to be
minimised when reaching unsafe states. However, this is non-trivial, since too small a penalty may fail to deter agents from reaching unsafe states, while too large a penalty can slow convergence. Additionally, the difficulty of
designing reward or cost functions can increase with the complexity of the
problem. Hence, for a given environment with a given set of unsafe states, we are interested in the largest reward that can be assigned to unsafe states such that the resulting optimal policies minimise the probability of reaching those unsafe states, irrespective of the task rewards. We refer to this exact upper bound as the "Minmax
penalty", and show that it can be obtained by taking into account both the
controllability and diameter of an environment. We provide a simple, practical, model-free algorithm with which an agent can learn this Minmax penalty while learning the task policy, and demonstrate that using it leads to agents that learn safe policies in high-dimensional continuous control environments.
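
To make the idea concrete, here is a minimal tabular sketch of what such a model-free scheme could look like: the agent runs ordinary Q-learning while tracking a running estimate of the range of task values, and keeps the reward assigned to unsafe transitions below the lowest estimated value by at least that range, so that minimising the probability of reaching unsafe states dominates any task reward. The environment interface (a `step` that returns an `unsafe` flag, `sample_action`, `n_states`, `n_actions`) and the specific penalty update are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch: tabular Q-learning that learns a penalty for unsafe
# states alongside the task policy. The environment interface and the penalty
# update rule below are assumptions, not the paper's exact method.

def q_learning_with_learned_penalty(env, episodes=500, alpha=0.1,
                                    gamma=0.99, eps=0.1):
    Q = np.zeros((env.n_states, env.n_actions))
    v_min = v_max = 0.0  # running estimates of the range of task values
    penalty = 0.0        # learned reward assigned to unsafe transitions

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            a = env.sample_action() if np.random.rand() < eps else int(np.argmax(Q[s]))
            s2, r, done, unsafe = env.step(a)
            if unsafe:
                r = penalty  # replace the designer's reward at unsafe states
            target = r if done else r + gamma * np.max(Q[s2])
            Q[s, a] += alpha * (target - Q[s, a])

            if not unsafe:
                # Track the value range on safe experience only, and keep the
                # penalty below the lowest task value by at least that range,
                # so that avoiding unsafe states dominates any task reward.
                v = np.max(Q[s])
                v_min, v_max = min(v_min, v), max(v_max, v)
                penalty = v_min - (v_max - v_min)
            s = s2
    return Q, penalty
```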