ROSARL: Reward-Only Safe Reinforcement Learning
An important problem in reinforcement learning is designing agents that learn
to solve tasks safely in an environment. A common solution is for a human
expert to define either a penalty in the reward function or a cost to be
minimised when reaching unsafe states. However, this is non-trivial, since too
small a penalty may lead to agents that reach unsafe states, while too large a
penalty increases the time to convergence. Additionally, the difficulty in
designing reward or cost functions can increase with the complexity of the
problem. Hence, for a given environment with a given set of unsafe states, we
are interested in finding the largest reward at unsafe states for which
optimal policies still minimise the probability of reaching those unsafe
states, irrespective of task rewards. We refer to this exact upper bound as the "Minmax
penalty", and show that it can be obtained by taking into account both the
controllability and diameter of an environment. We provide a simple practical
model-free algorithm for an agent to learn this Minmax penalty while learning
the task policy, and demonstrate that using it leads to agents that learn safe
policies in high-dimensional continuous control environments.
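
As a rough illustration of the idea, here is a minimal tabular Q-learning
sketch in which the reward at unsafe transitions is a learned penalty rather
than a hand-tuned constant. The environment interface (n_states, n_actions,
reset, a step that returns an unsafe flag), the fixed diameter argument, and
the specific formula v_min - diameter * (v_max - v_min) are all illustrative
assumptions; the paper derives the exact Minmax penalty from the
environment's controllability and diameter and learns it online.

```python
import numpy as np

def q_learning_with_learned_penalty(env, episodes=500, alpha=0.1,
                                    gamma=0.99, eps=0.1, diameter=10.0):
    """Tabular Q-learning where unsafe transitions receive a learned
    penalty instead of a hand-tuned constant (illustrative sketch)."""
    Q = np.zeros((env.n_states, env.n_actions))
    v_min, v_max = 0.0, 0.0  # running bounds on learned state values

    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection.
            if np.random.rand() < eps:
                a = np.random.randint(env.n_actions)
            else:
                a = int(np.argmax(Q[s]))
            # Hypothetical interface: step also reports an unsafe flag.
            s2, r, done, unsafe = env.step(a)

            # Track value bounds so the penalty estimate stays current.
            v = float(Q[s].max())
            v_min, v_max = min(v_min, v), max(v_max, v)

            # Assumed penalty form: strictly below the lowest value seen,
            # scaled by a diameter estimate. The paper's Minmax penalty
            # additionally accounts for controllability.
            if unsafe:
                r = v_min - diameter * (v_max - v_min)

            target = r if done else r + gamma * float(Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```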
Skill Machines: Temporal Logic Composition in Reinforcement Learning
A major challenge in reinforcement learning is specifying tasks in a manner
that is both interpretable and verifiable. One common approach is to specify
tasks through reward machines -- finite state machines that encode the task to
be solved. We introduce skill machines, a representation that can be learned
directly from these reward machines and that encodes the solution to such tasks. We
propose a framework where an agent first learns a set of base skills in a
reward-free setting, and then combines these skills with the learned skill
machine to produce composite behaviours specified by any regular language, such
as linear temporal logic. This provides the agent with the ability to map from
complex logical task specifications to near-optimal behaviours zero-shot. We
demonstrate our approach in both a tabular and high-dimensional video game
environment, where an agent is faced with several of these complex,
long-horizon tasks. Our results indicate that the agent is capable of
satisfying extremely complex task specifications, producing near-optimal
performance with no further learning. Finally, we demonstrate that the
performance of skill machines can be improved with regular offline
reinforcement learning algorithms when optimal behaviours are desired.
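
As a rough illustration of the zero-shot composition the abstract describes,
the sketch below combines base-skill value functions with logical operators
(disjunction as max, conjunction as min, negation taken relative to the
maximum and minimum task values), in the spirit of the Boolean task-algebra
line of work this paper builds on. The Q_skills dictionary, the
skill_machine lookup, and the proposition names are hypothetical, not the
paper's API.

```python
import numpy as np

# Composition operators over skill value functions of shape
# (n_states, n_actions). These follow the Boolean-algebra style of
# composition; the exact operators used by skill machines may differ.
def q_or(q1, q2):
    return np.maximum(q1, q2)   # disjunction: best of either skill

def q_and(q1, q2):
    return np.minimum(q1, q2)   # conjunction: value under both goals

def q_not(q, q_max, q_min):
    return (q_max + q_min) - q  # negation w.r.t. value bounds

def act(state, rm_state, Q_skills, skill_machine):
    """Act greedily on the composed value function that the skill
    machine associates with the current reward-machine state."""
    q_composed = skill_machine[rm_state](Q_skills)
    return int(np.argmax(q_composed[state]))

# Hypothetical usage: base skills are learned reward-free for each
# proposition; a reward-machine state u requiring "coffee and not mail"
# is then satisfied zero-shot by composing them:
# skill_machine[u] = lambda Q: q_and(
#     Q["coffee"], q_not(Q["mail"], Q["max"], Q["min"]))
```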