Utilising Explanations to Mitigate Robot Conversational Failures
This paper presents an overview of robot failure detection work from HRI and adjacent fields, using failures as an opportunity to examine robot explanation behaviours. As humanoid robots remain experimental tools in the early 2020s, interactions with robots take place overwhelmingly in controlled environments that typically study various interactional phenomena. Such interactions lack real-world and large-scale experimentation and tend to ignore the 'imperfectness' of the everyday user. Robot explanations can be used to approach and mitigate failures by expressing robot legibility and incapability, and within the perspective of common ground. In this paper, I discuss how failures present opportunities for explanations in interactive conversational robots and what potential lies at the intersection of HRI and explainability research.
Optimizing for Robot Transparency
As robots become more capable and commonplace, it becomes increasingly important that they are transparent to humans. People need accurate mental models of a robot so that they can anticipate what it will do, know when and where not to rely on it, and understand why it failed. This helps engineers ensure the safety and robustness of the robot systems they develop, and enables human end-users to interact more safely and seamlessly with robots.

This thesis introduces a framework for producing robot behavior that increases transparency. Our key insight is that a robot's actions do not just influence the physical world; they also inevitably influence a human observer's mental model of the robot. We attempt to model the latter, that is, how humans might make inferences about a robot's objectives, policy, and capabilities from observations of its behavior, so that we can then present examples of robot behavior that optimally bring the human's understanding closer to the true robot model. In this way, our framework casts transparency as an optimization problem.

Part I introduces our framework of optimizing for robot transparency and applies it in three ways: communicating a robot's objectives, which situations it can handle, and why it is incapable of performing a task. Part II investigates how transparency is useful not just for safe and seamless interaction, but also for learning. When humans teach a robot, giving human teachers transparency regarding what the robot has learned so far makes it easier for them to select informative teaching examples.
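To make the "transparency as optimization" idea concrete, here is a minimal Python sketch under strong simplifying assumptions: a discrete set of candidate demonstrations, a small hypothesis space of robot models, and a Bayesian observer. All names and numbers below are illustrative stand-ins, not the thesis's actual models; the sketch only shows the shape of the optimization, i.e., choosing the behavior that best shifts the observer's belief toward the true robot model.

    import numpy as np

    # Illustrative setup: the observer holds a belief over a few candidate
    # robot models; each model assigns a likelihood to each demonstration.
    models = ["cautious", "greedy", "true_model"]
    prior = np.array([1/3, 1/3, 1/3])

    # likelihood[m][d] = P(demonstration d | robot model m) -- toy numbers.
    likelihood = np.array([
        [0.7, 0.2, 0.1],   # cautious
        [0.1, 0.3, 0.6],   # greedy
        [0.2, 0.1, 0.7],   # true_model
    ])

    TRUE = models.index("true_model")

    def posterior(belief, demo):
        """Bayesian update of the observer's belief after seeing one demo."""
        post = belief * likelihood[:, demo]
        return post / post.sum()

    # Transparency as optimization: choose the demonstration that maximizes
    # the observer's posterior probability on the robot's true model.
    best_demo = max(range(likelihood.shape[1]),
                    key=lambda d: posterior(prior, d)[TRUE])
    print("most transparent demo:", best_demo,
          "posterior:", posterior(prior, best_demo))

Richer versions of this idea would replace the toy likelihood table with learned models of how humans infer objectives, policies, or capabilities from observed behavior.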
Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning
In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop, where the human's expectations about an agent may differ from the agent's own model. We show how this formulation allows agents not only to leverage existing strategies for handling model differences but also to exhibit novel behaviors generated through the combination of these different strategies. Our formulation also reveals a deep connection to existing approaches in epistemic planning. Specifically, we show how we can leverage classical planning compilations for epistemic planning to solve Expectation-Aware planning problems. To the best of our knowledge, the proposed formulation is the first complete solution to decision making in the presence of diverging user expectations that is amenable to a classical planning compilation while successfully combining previous work on explanation and explicability. We empirically show how our approach provides a computational advantage over existing approximate approaches, which unnecessarily search in the space of models while failing to facilitate the full gamut of behaviors enabled by our framework.
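As a rough illustration of the trade-off this formalism unifies, the sketch below uses hypothetical costs (it is not the paper's planning compilation): each candidate behavior is scored by its plan cost, plus the cost of any explanation it communicates, plus a penalty on the residual divergence from the human's expectations, and the agent picks the cheapest combination.

    # Toy sketch (illustrative only) of the core trade-off in human-aware
    # planning: the agent can act explicably (match the human's expected
    # plan) or explain (pay a communication cost to update the human's
    # model); expectation-aware planning searches over combinations of both.

    # Hypothetical candidates: (plan_cost, explanation_cost,
    # divergence_from_human_expectation_after_any_explanation)
    candidates = {
        "optimal_plan_no_explanation":   (10, 0, 5),  # cheap but confusing
        "explicable_plan":               (14, 0, 0),  # costlier, expected
        "optimal_plan_with_explanation": (10, 2, 0),  # explain the surprise
    }

    PENALTY = 1.5  # weight on residual divergence from human expectations

    def total_cost(plan_cost, expl_cost, divergence):
        return plan_cost + expl_cost + PENALTY * divergence

    best = min(candidates, key=lambda k: total_cost(*candidates[k]))
    print(best)  # -> optimal_plan_with_explanation (12 vs 14 vs 17.5)

In this toy instance, explaining away the surprise beats both silently acting optimally and conforming to the expectation; the actual formalism explores this combined space via a classical planning compilation rather than by enumeration.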
Teaching Robots to Span the Space of Functional Expressive Motion
Our goal is to enable robots to perform functional tasks in emotive ways, be it in response to their users' emotional states or expressive of their confidence levels. Prior work has proposed learning an independent cost function from user feedback for each target emotion, so that the robot may optimize it alongside task- and environment-specific objectives for any situation it encounters. However, this approach is inefficient when modeling multiple emotions and is unable to generalize to new ones. In this work, we leverage the fact that emotions are not independent of each other: they are related through a latent space of Valence-Arousal-Dominance (VAD). Our key idea is to learn a model of how trajectories map onto VAD from user labels. Considering the distance between a trajectory's mapping and a target VAD allows this single model to represent cost functions for all emotions. As a result, (1) all user feedback can contribute to learning about every emotion; (2) the robot can generate trajectories for any emotion in the space instead of only a few predefined ones; and (3) the robot can respond emotively to user-generated natural language by mapping it to a target VAD. We introduce a method that interactively learns to map trajectories to this latent space and test it in simulation and in a user study. In experiments, we use a simple vacuum robot as well as the Cassie biped.
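A minimal sketch of the key idea, assuming a fixed linear trajectory-to-VAD map and made-up emotion coordinates (both are placeholders for what the method actually learns from user labels): one shared model induces a cost function for any target emotion as a distance in VAD space.

    import numpy as np

    # Illustrative sketch: a single map from trajectory features to
    # Valence-Arousal-Dominance (VAD), here a fixed random linear map for
    # brevity; in the paper this map is learned interactively from labels.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 8))  # stand-in trajectory->VAD model

    def to_vad(traj_features):
        return np.tanh(W @ traj_features)  # VAD coordinates in [-1, 1]^3

    # One model yields a cost function for *any* target emotion: the cost
    # of a trajectory is its distance to that emotion's VAD coordinates.
    EMOTION_VAD = {
        "happy":   np.array([ 0.8,  0.5,  0.4]),  # placeholder coordinates
        "fearful": np.array([-0.6,  0.7, -0.6]),
    }

    def emotive_cost(traj_features, emotion):
        return np.linalg.norm(to_vad(traj_features) - EMOTION_VAD[emotion])

    traj = rng.normal(size=8)  # stand-in trajectory feature vector
    print({e: round(emotive_cost(traj, e), 3) for e in EMOTION_VAD})

Because the cost is just a distance in the latent space, the same map can also serve new emotions or natural-language targets, provided those can be mapped to VAD coordinates.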