Trust-Aware Decision Making for Human-Robot Collaboration: Model Learning and Planning
Trust in autonomy is essential for effective human-robot collaboration and
user adoption of autonomous systems such as robot assistants. This paper
introduces a computational model that integrates trust into robot
decision-making. Specifically, we learn from data a partially observable Markov
decision process (POMDP) with human trust as a latent variable. The trust-POMDP
model provides a principled approach for the robot to (i) infer the trust of a
human teammate through interaction, (ii) reason about the effect of its own
actions on human trust, and (iii) choose actions that maximize team performance
over the long term. We validated the model through human subject experiments on
a table-clearing task in simulation (201 participants) and with a real robot
(20 participants). In our studies, the robot builds human trust by manipulating
low-risk objects first. Interestingly, the robot sometimes fails intentionally
in order to modulate human trust and achieve the best team performance. These
results show that the trust-POMDP calibrates trust to improve human-robot team
performance over the long term. Further, they highlight that maximizing trust
alone does not always lead to the best performance.

Comment: Chen and Nikolaidis contributed equally to the work. Appeared in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2018), Chicago, IL, USA.
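The abstract treats trust as a hidden state that the robot infers from interaction. As a rough illustration of point (i), below is a minimal sketch of a POMDP belief update over discrete trust levels; the trust scale, transition dynamics, and observation probabilities are all hypothetical stand-ins, not the model learned in the paper.

```python
# Sketch of a trust-POMDP belief update with trust as a discrete latent
# variable. All numbers below are illustrative assumptions, not values
# from the paper.

TRUST_LEVELS = [1, 2, 3, 4, 5, 6, 7]  # hypothetical 7-point trust scale

def normalize(belief):
    total = sum(belief.values())
    return {t: p / total for t, p in belief.items()}

def transition_prob(t_next, t_prev, robot_action_succeeded):
    """P(trust' | trust, action outcome): a robot success nudges trust
    up one level, a failure nudges it down (hypothetical dynamics)."""
    drift = 1 if robot_action_succeeded else -1
    expected = min(max(t_prev + drift, 1), 7)
    return 0.6 if t_next == expected else 0.4 / 6

def observation_prob(human_intervened, trust):
    """P(observation | trust): lower trust makes the human more likely
    to intervene and take over the task (hypothetical model)."""
    p_intervene = 0.9 - 0.1 * trust  # decreases as trust grows
    return p_intervene if human_intervened else 1.0 - p_intervene

def belief_update(belief, robot_action_succeeded, human_intervened):
    """Standard POMDP belief update: predict with the trust transition
    model, then correct with the observation likelihood."""
    predicted = {
        t_next: sum(
            transition_prob(t_next, t_prev, robot_action_succeeded) * p
            for t_prev, p in belief.items()
        )
        for t_next in TRUST_LEVELS
    }
    corrected = {
        t: observation_prob(human_intervened, t) * p
        for t, p in predicted.items()
    }
    return normalize(corrected)

# Usage: start from a uniform belief over trust, observe one interaction
# in which the robot succeeded and the human did not intervene.
belief = {t: 1 / len(TRUST_LEVELS) for t in TRUST_LEVELS}
belief = belief_update(belief, robot_action_succeeded=True, human_intervened=False)
print(max(belief, key=belief.get))  # most likely trust level after update
```

Planning in the trust-POMDP (points ii and iii) would then select the action maximizing expected long-term team reward under this belief, which is how behaviors such as intentional failure to modulate trust can emerge.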