Explainable Planning
As AI is increasingly being adopted into application solutions, the challenge
of supporting interaction with humans is becoming more apparent. Partly this is
to support integrated working styles, in which humans and intelligent systems
cooperate in problem-solving, but it is also a necessary step in the process of
building trust as humans delegate greater responsibility to such systems. The
challenge is to find effective ways to communicate the foundations of AI-driven
behaviour, when the algorithms that drive it are far from transparent to
humans. In this paper we consider the opportunities that arise in AI planning,
exploiting the model-based representations that form a familiar and common
basis for communication with users, while acknowledging the gap between
planning algorithms and human problem-solving.

Comment: Presented at the IJCAI-17 workshop on Explainable AI
(http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/). Melbourne,
August 2017.
Enabling Robots to Communicate their Objectives
The overarching goal of this work is to efficiently enable end-users to
correctly anticipate a robot's behavior in novel situations. Since a robot's
behavior is often a direct result of its underlying objective function, our
insight is that end-users need to have an accurate mental model of this
objective function in order to understand and predict what the robot will do.
While people naturally develop such a mental model over time through observing
the robot act, this familiarization process may be lengthy. Our approach
reduces this time by having the robot model how people infer objectives from
observed behavior, and then it selects those behaviors that are maximally
informative. The problem of computing a posterior over objectives from observed
behavior is known as Inverse Reinforcement Learning (IRL), and has been applied
to robots learning human objectives. We consider the problem where the roles of
human and robot are swapped. Our main contribution is to recognize that unlike
robots, humans will not be exact in their IRL inference. We thus introduce two
factors to define candidate approximate-inference models for human learning in
this setting, and analyze them in a user study in the autonomous driving
domain. We show that certain approximate-inference models lead to the robot
generating example behaviors that better enable users to anticipate what it
will do in novel situations. Our results also suggest, however, that additional
research is needed in modeling how humans extrapolate from examples of robot
behavior.

Comment: RSS 2017.
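To make the inference step in the abstract concrete, here is a minimal sketch, in Python, of the kind of computation it describes: a Bayesian-IRL-style posterior over a finite set of candidate objectives, plus selecting the behavior that is maximally informative to an observer who is assumed to be Boltzmann-rational rather than exact. This is an illustration under my own assumptions, not the paper's implementation; the feature vectors, the beta rationality parameter, and the helper names observer_posterior and most_informative_behavior are all hypothetical.

# Sketch: posterior over objectives from an observed behavior, and
# selection of the behavior most informative about the true objective.
# All quantities below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 candidate objectives (reward weight vectors) and
# 5 candidate behaviors, each summarized by a feature vector.
# Reward of a behavior under an objective = weights . features.
objectives = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
behaviors = rng.normal(size=(5, 2))

def observer_posterior(behavior_idx, prior, beta=2.0):
    """P(objective | observed behavior), assuming the observer models the
    robot as choosing behaviors Boltzmann-rationally under its objective."""
    rewards = objectives @ behaviors.T          # (n_obj, n_beh) reward table
    # Likelihood of choosing each behavior under each candidate objective.
    choice_probs = np.exp(beta * rewards)
    choice_probs /= choice_probs.sum(axis=1, keepdims=True)
    likelihood = choice_probs[:, behavior_idx]  # one column per observation
    post = prior * likelihood
    return post / post.sum()

def most_informative_behavior(true_obj_idx, prior):
    """Pick the behavior whose observation places the most posterior mass
    on the robot's true objective -- the 'maximally informative' choice."""
    scores = [observer_posterior(b, prior)[true_obj_idx]
              for b in range(len(behaviors))]
    return int(np.argmax(scores))

prior = np.full(len(objectives), 1.0 / len(objectives))
best = most_informative_behavior(true_obj_idx=0, prior=prior)
print("show behavior", best, "-> posterior:", observer_posterior(best, prior))

In the paper's setting, the candidate approximate-inference models would replace this exact posterior update with models of how people actually generalize from observed behavior; the softmax observer above is just one common stand-in for such inexact inference.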