Progressive Explanation Generation for Human-robot Teaming
Generating explanations for its behavior is an essential capability of a
robotic teammate. Explanations help human partners better understand the
situation and maintain trust in their robotic teammates. Prior work on robot
explanation generation focuses on providing the reasoning behind the robot's
decision making.
These approaches, however, fail to heed the cognitive requirement of
understanding an explanation. In other words, while they provide the right
explanations from the explainer's perspective, the explainee's side of the
equation is ignored. In this work, we address an important aspect along this
direction that contributes to a better understanding of a given explanation,
which we refer to as the progressiveness of explanations. A progressive
explanation improves understanding by limiting the cognitive effort required at
each step of making the explanation. As a result, such explanations are
expected to be smoother and hence easier to understand. A general formulation
of progressive explanation is presented, along with algorithms based on
several alternative quantifications of the cognitive effort incurred as an
explanation is being made. The algorithms are evaluated in a standard
planning competition domain.
Online Explanation Generation for Human-Robot Teaming
As AI becomes an integral part of our lives, the development of explainable
AI, embodied in the decision-making process of an AI or robotic agent, becomes
imperative. For a robotic teammate, the ability to generate explanations to
justify its behavior is one of the key requirements of explainable agency.
Prior work on explanation generation has focused on providing the
rationale behind the robot's decision or behavior. These approaches, however,
fail to consider the mental demand of understanding the received explanation.
In other words, the human teammate is expected to understand an explanation no
matter how much information is presented. In this work, we argue that
explanations, especially those of a complex nature, should be made in an online
fashion during execution, which helps spread out the information to be
explained and thus reduces the mental workload of humans in highly
cognitively demanding tasks. However, a challenge here is that the different
parts of an
explanation may be dependent on each other, which must be taken into account
when generating online explanations. To this end, a general formulation of
online explanation generation is presented with three variations satisfying
different "online" properties. The new explanation generation methods are based
on a model reconciliation setting introduced in our prior work. We evaluated
our methods both with human subjects in a simulated rover domain, using the
NASA Task Load Index (TLX), and synthetically on ten different problems
across two standard IPC domains. The results strongly suggest that our
methods generate explanations that are perceived as less cognitively
demanding, are much preferred over the baselines, and are computationally
efficient.
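As an illustration of the "online" idea, the following sketch schedules
explanation units across execution steps while respecting inter-unit
dependencies. It is a hypothetical scheduler under assumed data structures
(Unit, deliver_online), not the model reconciliation machinery from the
paper.

```python
"""A minimal sketch of online explanation delivery, assuming each unit is
tagged with the plan step where it becomes relevant and with the units it
depends on. Illustrative only; dependencies are assumed non-transitive."""

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Unit:
    name: str
    relevant_step: int          # plan step at which the unit is needed
    deps: Set[str] = field(default_factory=set)

def deliver_online(units: List[Unit], num_steps: int) -> List[List[str]]:
    """Return, per execution step, the units communicated at that step.
    A unit is emitted no later than its relevant step, and only after
    all of its dependencies have been emitted."""
    emitted: Set[str] = set()
    schedule: List[List[str]] = [[] for _ in range(num_steps)]
    pending = sorted(units, key=lambda u: u.relevant_step)
    for step in range(num_steps):
        for u in list(pending):
            if u.name in emitted or u.relevant_step > step:
                continue
            if not u.deps <= emitted:
                # Pull unmet dependencies forward so the unit stays valid.
                for d in [x for x in pending if x.name in u.deps]:
                    schedule[step].append(d.name)
                    emitted.add(d.name)
                    pending.remove(d)
            schedule[step].append(u.name)
            emitted.add(u.name)
            pending.remove(u)
    return schedule

units = [Unit("rock-sample", 2, {"battery-low"}), Unit("battery-low", 3)]
print(deliver_online(units, 4))  # battery-low is pulled forward to step 2
```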
Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming
Prior work on generating explanations has been focused on providing the
rationale behind the robot's decision making. While these approaches provide
the right explanations from the explainer's perspective, they fail to heed the
cognitive requirement of understanding an explanation from the explainee's
perspective. In this work, we set out to address this issue in a planning
context by considering the order in which information is provided in an
explanation,
which is referred to as the progressiveness of explanations. Progressive
explanations contribute to a better understanding by minimizing the cumulative
cognitive effort required for understanding all the information in an
explanation. As a result, such explanations are easier to understand. Given the
sequential nature of communicating information, a general formulation of
progressive explanation generation based on goal-based Markov decision
processes (MDPs) is presented. The reward function of this MDP is learned via
inverse reinforcement
learning based on explanations that are provided by human subjects. Our method
is evaluated in an escape-room domain. The results show that our progressive
explanation generation method reduces cognitive load relative to two
baselines.

Comment: arXiv admin note: text overlap with arXiv:1902.0060
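For a sense of the MDP framing, here is a minimal sketch in which states are
the sets of already-conveyed units, actions choose the next unit, and the
goal state is "all units conveyed." The hand-picked linear reward weights
stand in for what the paper learns via inverse reinforcement learning; the
feature map and all names are illustrative assumptions.

```python
"""A minimal sketch of the goal-based MDP view of progressive explanation.
Deterministic transitions over subsets of units make ordering a search
problem; the reward below is assumed, not learned."""

from itertools import permutations
from typing import FrozenSet, List, Tuple

UNITS = ["u1", "u2", "u3"]

def features(state: FrozenSet[str], action: str) -> List[float]:
    # Illustrative features: how much was already said, and whether the
    # action matches a canonical next position.
    return [float(len(state)), 1.0 if action == f"u{len(state) + 1}" else 0.0]

# Weights that IRL would fit from human-provided orderings (assumed here).
W = [-0.1, 1.0]

def reward(state: FrozenSet[str], action: str) -> float:
    return sum(w * f for w, f in zip(W, features(state, action)))

def best_order() -> Tuple[float, Tuple[str, ...]]:
    """Exhaustive search over orderings, feasible for small unit sets."""
    best: Tuple[float, Tuple[str, ...]] = (float("-inf"), ())
    for perm in permutations(UNITS):
        state: FrozenSet[str] = frozenset()
        total = 0.0
        for a in perm:
            total += reward(state, a)
            state = state | {a}
        best = max(best, (total, perm))
    return best

print(best_order())  # highest-return ordering under the assumed reward
```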