Verifiable Reinforcement Learning via Policy Extraction
While deep reinforcement learning has successfully solved many challenging
control tasks, its real-world applicability has been limited by the inability
to ensure the safety of learned policies. We propose an approach to verifiable
reinforcement learning by training decision tree policies, which can represent
complex policies (since they are nonparametric), yet can be efficiently
verified using existing techniques (since they are highly structured). The
challenge is that decision tree policies are difficult to train. We propose
VIPER, an algorithm that combines ideas from model compression and imitation
learning to learn decision tree policies guided by a DNN policy (called the
oracle) and its Q-function, and show that it substantially outperforms two
baselines. We use VIPER to (i) learn a provably robust decision tree policy for
a variant of Atari Pong with a symbolic state space, (ii) learn a decision tree
policy for a toy game based on Pong that provably never loses, and (iii) learn
a provably stable decision tree policy for cart-pole. In each case, the
decision tree policy achieves performance equal to that of the original DNN
policy.
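
To make the training loop concrete, here is a minimal Python sketch of a VIPER-style procedure: DAgger-style imitation of the oracle, with states resampled in proportion to a Q-derived criticality weight before fitting the tree. The `env`, `oracle_policy`, and `oracle_q` interfaces are illustrative assumptions, not the paper's code.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def viper(env, oracle_policy, oracle_q, n_iters=10, rollouts=20,
          max_depth=8, seed=0):
    # Assumed interfaces: env.reset() -> state, env.step(a) -> (state,
    # reward, done); oracle_policy(s) -> greedy DNN action;
    # oracle_q(s) -> Q-value vector over actions.
    # States are assumed to be fixed-length feature vectors.
    rng = np.random.default_rng(seed)
    dataset, student = [], None
    for _ in range(n_iters):
        # DAgger-style data collection: run the current student (the
        # oracle on the first iteration) and label every visited state
        # with the oracle's action.
        for _ in range(rollouts):
            s, done = env.reset(), False
            while not done:
                a = (oracle_policy(s) if student is None else
                     int(student.predict(np.asarray(s).reshape(1, -1))[0]))
                q = oracle_q(s)
                # Weight a state by how costly a wrong action would be there.
                dataset.append((np.asarray(s), oracle_policy(s),
                                float(np.max(q) - np.min(q))))
                s, _, done = env.step(a)
        X = np.stack([x for x, _, _ in dataset])
        y = np.array([a for _, a, _ in dataset])
        w = np.array([wt for _, _, wt in dataset])
        # Resample in proportion to the Q-based weights, reducing weighted
        # classification to ordinary tree fitting.
        idx = rng.choice(len(X), size=len(X), p=w / w.sum())
        student = DecisionTreeClassifier(max_depth=max_depth).fit(X[idx],
                                                                  y[idx])
    # The full algorithm keeps the best student by evaluated reward;
    # this sketch simply returns the last one.
    return student
```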
The Assistive Multi-Armed Bandit
Learning preferences implicit in the choices humans make is a well-studied
problem in both economics and computer science. However, most work makes the
assumption that humans are acting (noisily) optimally with respect to their
preferences. Such approaches can fail when people are themselves learning about
what they want. In this work, we introduce the assistive multi-armed bandit,
where a robot assists a human playing a bandit task to maximize cumulative
reward. In this problem, the human does not know the reward function but can
learn it through the rewards received from arm pulls; the robot only observes
which arms the human pulls but not the reward associated with each pull. We
offer necessary and sufficient conditions for successfully assisting the human
in this framework. Surprisingly, better human performance in isolation does not
necessarily lead to better performance when assisted by the robot: a human
policy can do better by effectively communicating its observed rewards to the
robot. We conduct proof-of-concept experiments that support these results. We
see this work as contributing towards a theory behind algorithms for
human-robot interaction.
Comment: Accepted to HRI 201
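
The setting can be illustrated with a small simulation. The sketch below assumes a Gaussian bandit, an epsilon-greedy human, and a robot that simply imitates the human's most-pulled arm; these modeling choices are illustrative, not the paper's protocol.

```python
import numpy as np

def assistive_bandit(true_means, horizon=500, eps=0.1, seed=0):
    # Toy protocol: each round the human pulls an arm and privately
    # observes a Gaussian reward; the robot sees only the pull, never
    # the reward, and then pulls the arm the human has chosen most often.
    rng = np.random.default_rng(seed)
    k = len(true_means)
    sums, counts = np.zeros(k), np.zeros(k)   # human's private estimates
    pull_counts = np.zeros(k)                 # all the robot ever observes
    total = 0.0
    for _ in range(horizon):
        # Human: epsilon-greedy on its own reward estimates.
        if rng.random() < eps or counts.min() == 0:
            h_arm = int(rng.integers(k))
        else:
            h_arm = int(np.argmax(sums / counts))
        r = rng.normal(true_means[h_arm])
        sums[h_arm] += r
        counts[h_arm] += 1
        pull_counts[h_arm] += 1
        total += r
        # Robot: imitates the human's empirical favorite arm.
        total += rng.normal(true_means[int(np.argmax(pull_counts))])
    return total

print(assistive_bandit([0.2, 0.5, 0.9]))
```

Lowering `eps` makes this simulated human better in isolation, yet its pulls can become less informative to the robot, which is the kind of trade-off the paper formalizes.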
Enabling Robots to Communicate their Objectives
The overarching goal of this work is to efficiently enable end-users to
correctly anticipate a robot's behavior in novel situations. Since a robot's
behavior is often a direct result of its underlying objective function, our
insight is that end-users need to have an accurate mental model of this
objective function in order to understand and predict what the robot will do.
While people naturally develop such a mental model over time through observing
the robot act, this familiarization process may be lengthy. Our approach
reduces this time by having the robot model how people infer objectives from
observed behavior and then select those behaviors that are maximally
informative. The problem of computing a posterior over objectives from observed
behavior is known as Inverse Reinforcement Learning (IRL), and has been applied
to robots learning human objectives. We consider the problem where the roles of
human and robot are swapped. Our main contribution is to recognize that unlike
robots, humans will not be exact in their IRL inference. We thus introduce two
factors to define candidate approximate-inference models for human learning in
this setting, and analyze them in a user study in the autonomous driving
domain. We show that certain approximate-inference models lead to the robot
generating example behaviors that better enable users to anticipate what it
will do in novel situations. Our results also suggest, however, that additional
research is needed in modeling how humans extrapolate from examples of robot
behavior.
Comment: RSS 201
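
A minimal sketch of the demonstration-selection idea, assuming a Boltzmann-rational observer as the approximate-inference model and hand-picked trajectory feature counts; all names and numbers here are illustrative, not the user-study models from the paper.

```python
import numpy as np

def select_informative_demo(trajs, thetas, true_idx, beta=5.0):
    # trajs: per-trajectory feature counts, shape (n_trajs, n_features).
    # thetas: candidate objective weight vectors, shape (n_thetas, n_features).
    # true_idx: index of the robot's actual objective in thetas.
    # beta: the observer's assumed rationality (Boltzmann temperature).
    trajs, thetas = np.asarray(trajs), np.asarray(thetas)
    # Reward of each trajectory under each candidate objective.
    returns = trajs @ thetas.T                      # (n_trajs, n_thetas)
    # Observer model: P(traj | theta) proportional to exp(beta * return),
    # normalized over the candidate set (an approximate-inference choice).
    logits = beta * returns
    logits -= logits.max(axis=0, keepdims=True)     # numerical stability
    lik = np.exp(logits)
    lik /= lik.sum(axis=0, keepdims=True)
    # Posterior over objectives after seeing each trajectory (uniform prior).
    post = lik / lik.sum(axis=1, keepdims=True)
    # Choose the demo that maximizes the posterior on the true objective.
    return int(np.argmax(post[:, true_idx]))

# Hypothetical 2-feature driving example: speed and lane-keeping.
trajs = [[1.0, 0.2], [0.3, 1.0], [0.6, 0.6]]
thetas = [[1.0, 0.0], [0.0, 1.0]]
print(select_informative_demo(trajs, thetas, true_idx=1))
```

Varying `beta` in this sketch plays the role of the exactness factor the abstract alludes to: the lower the observer's assumed rationality, the more extreme a demonstration must be to shift its posterior.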