Toward Adaptive Trust Calibration for Level 2 Driving Automation
Properly calibrated human trust is essential for successful interaction
between humans and automation. However, while human trust calibration can be
improved by increasing automation transparency, too much transparency can
overload the human and drive up workload. To address this tradeoff, we present a probabilistic
framework using a partially observable Markov decision process (POMDP) for
modeling the coupled trust-workload dynamics of human behavior in an
action-automation context. We specifically consider hands-off Level 2 driving
automation in a city environment involving multiple intersections where the
human chooses whether or not to rely on the automation. We consider automation
reliability, automation transparency, and scene complexity, along with human
reliance and eye-gaze behavior, to model the dynamics of human trust and
workload. We demonstrate that our modeling framework can appropriately vary
automation transparency based on real-time human trust and workload belief
estimates to achieve trust calibration.
Comment: 10 pages, 8 figures
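To make the POMDP framing concrete, here is a minimal sketch of the belief update and a toy transparency-selection rule in Python. Everything in it is an illustrative assumption rather than the paper's fitted model: the two-level trust/workload state space, the four reliance/eye-gaze observations, the randomly generated transition and observation matrices, and the 0.5 policy thresholds.

```python
import numpy as np

# Hypothetical discretization: trust and workload each take two levels,
# giving four joint hidden states.
STATES = [(t, w) for t in ("low_trust", "high_trust")
          for w in ("low_workload", "high_workload")]
ACTIONS = ["low_transparency", "high_transparency"]   # assumed transparency levels
OBSERVATIONS = ["rely/gaze_road", "rely/gaze_away",
                "override/gaze_road", "override/gaze_away"]

rng = np.random.default_rng(0)

def random_stochastic(rows, cols):
    """Random row-stochastic matrix, standing in for fitted model parameters."""
    m = rng.random((rows, cols))
    return m / m.sum(axis=1, keepdims=True)

# T[a][s, s']: state transitions under action a; Z[a][s', o]: observation model.
T = {a: random_stochastic(len(STATES), len(STATES)) for a in ACTIONS}
Z = {a: random_stochastic(len(STATES), len(OBSERVATIONS)) for a in ACTIONS}

def belief_update(b, a, o):
    """Standard POMDP Bayes filter: b'(s') is proportional to
    Z(s', o) * sum_s T(s, s') b(s)."""
    predicted = b @ T[a]
    b_new = predicted * Z[a][:, o]
    return b_new / b_new.sum()

def choose_transparency(b):
    """Toy rule (not the paper's optimized policy): raise transparency when
    low trust is likely, unless high workload is also likely (avoid overload)."""
    p_low_trust = sum(p for p, (t, _) in zip(b, STATES) if t == "low_trust")
    p_high_load = sum(p for p, (_, w) in zip(b, STATES) if w == "high_workload")
    if p_low_trust > 0.5 and p_high_load < 0.5:
        return "high_transparency"
    return "low_transparency"

b = np.full(len(STATES), 1.0 / len(STATES))   # uniform prior belief
for step in range(5):
    a = choose_transparency(b)
    o = int(rng.integers(len(OBSERVATIONS)))  # stand-in for measured reliance/gaze
    b = belief_update(b, a, o)
    print(step, a, np.round(b, 3))
```

A real policy would come from solving the POMDP for a reward that encodes trust calibration; the threshold rule above only illustrates how a belief estimate over trust and workload can drive the transparency choice.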
Human Trust-based Feedback Control: Dynamically varying automation transparency to optimize human-machine interactions
Human trust in automation plays an essential role in interactions between
humans and automation. While a lack of trust can lead to a human's disuse of
automation, over-trust can result in a human trusting a faulty autonomous
system which could have negative consequences for the human. Therefore, human
trust should be calibrated to optimize human-machine interactions with respect
to context-specific performance objectives. In this article, we present a
probabilistic framework to model and calibrate a human's trust and workload
dynamics during his/her interaction with an intelligent decision-aid system.
This calibration is achieved by varying the automation's transparency: the
amount and utility of information provided to the human. The parameterization
of the model is conducted using behavioral data collected through human-subject
experiments, and three feedback control policies are experimentally validated
and compared against a non-adaptive decision-aid system. The results show that
human-automation team performance can be optimized when the transparency is
dynamically updated based on the proposed control policy. This framework is a
first step toward widespread design and implementation of real-time adaptive
automation for use in human-machine interactions.
Comment: 21 pages
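The abstract notes that the model is parameterized from behavioral data collected in human-subject experiments. As a rough sketch of one ingredient of such a parameterization, the snippet below estimates transition probabilities by maximum-likelihood counting over a tiny invented log. It assumes trust states are directly observed, which they are not in practice (trust and workload are latent, so actual fitting would need an EM-style procedure); the state labels, actions, and data are all hypothetical.

```python
from collections import defaultdict

# Invented behavioral log: (trust_state, transparency_action, next_trust_state).
# Real data would be sequences of reliance decisions and behavioral
# measurements, with trust itself unobserved.
log = [
    ("low", "high_transparency", "high"),
    ("low", "high_transparency", "low"),
    ("low", "low_transparency", "low"),
    ("high", "low_transparency", "high"),
    ("high", "high_transparency", "high"),
    ("high", "low_transparency", "low"),
]

# Count transitions per (state, action) pair.
counts = defaultdict(lambda: defaultdict(int))
for s, a, s_next in log:
    counts[(s, a)][s_next] += 1

# Maximum-likelihood estimates: P(s' | s, a) = count(s, a, s') / count(s, a).
transitions = {
    key: {s2: n / sum(nxt.values()) for s2, n in nxt.items()}
    for key, nxt in counts.items()
}

for key in sorted(transitions):
    print(key, transitions[key])
```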