88 research outputs found
Modeling Dispositional and Initial Learned Trust in Automated Vehicles with Predictability and Explainability
Technological advances in the automotive industry are bringing automated
driving closer to road use. However, one of the most important factors
affecting public acceptance of automated vehicles (AVs) is the public's trust
in AVs. Many factors can influence people's trust, including perception of
risks and benefits, feelings, and knowledge of AVs. This study aims to use
these factors to predict people's dispositional and initial learned trust in
AVs using a survey study conducted with 1175 participants. For each
participant, 23 features were extracted from the survey questions to capture
his or her knowledge, perception, experience, behavioral assessment, and
feelings about AVs. These features were then used as input to train an eXtreme
Gradient Boosting (XGBoost) model to predict trust in AVs. With the help of
SHapley Additive exPlanations (SHAP), we were able to interpret the trust
predictions of XGBoost to further improve the explainability of the XGBoost
model. Our findings show that, compared with traditional regression models and
black-box machine learning models, this approach simultaneously provided a high
level of explainability and predictability of trust in AVs.
May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability
Research in explainable AI (XAI) aims to provide insights into the
decision-making process of opaque AI models. To date, most XAI methods offer
one-off and static explanations, which cannot cater to the diverse backgrounds
and understanding levels of users. With this paper, we investigate if free-form
conversations can enhance users' comprehension of static explanations, improve
acceptance and trust in the explanation methods, and facilitate human-AI
collaboration. Participants are presented with static explanations, followed by
a conversation with a human expert regarding the explanations. We measure the
effect of the conversation on participants' ability to choose, from three
machine learning models, the most accurate one based on explanations and their
self-reported comprehension, acceptance, and trust. Empirical results show that
conversations significantly improve comprehension, acceptance, trust, and
collaboration. Our findings highlight the importance of customized model
explanations in the format of free-form conversations and provide insights for
the future design of conversational explanations.
Psychophysiological responses to takeover requests in conditionally automated driving
In SAE Level 3 automated driving, taking over control from automation raises significant safety concerns because drivers out of the vehicle control loop have difficulty negotiating takeover transitions. Existing studies on takeover transitions have focused on drivers' behavioral responses to takeover requests (TORs). As a complement, this exploratory study aimed to examine drivers' psychophysiological responses to TORs as a result of varying non-driving-related tasks (NDRTs), traffic density and TOR lead time. A total of 102 drivers were recruited, and each of them experienced 8 takeover events in a high-fidelity fixed-base driving simulator. Drivers' gaze behaviors, heart rate (HR) activities, galvanic skin responses (GSRs), and facial expressions were recorded and analyzed during two stages.
First, during the automated driving stage, we found that drivers had lower heart rate variability, narrower horizontal gaze dispersion, and shorter eyes-on-road time under a high level of cognitive load than under a low level of cognitive load. Second, during the takeover transition stage, a 4-s lead time led to fewer blinks and larger maximum and mean GSR phasic activation compared to a 7-s lead time, whilst heavy traffic density resulted in greater HR acceleration than light traffic density. Our results showed that psychophysiological measures can indicate specific internal states of drivers, including their workload, emotions, attention, and situation awareness, in a continuous, non-invasive and real-time manner. The findings provide additional support for the value of using psychophysiological measures in automated driving and for future applications in driver monitoring systems and adaptive alert systems.
Reward Shaping for Building Trustworthy Robots in Sequential Human-Robot Interaction
Trust-aware human-robot interaction (HRI) has received increasing research
attention, as trust has been shown to be a crucial factor for effective HRI.
Research in trust-aware HRI has revealed a dilemma: maximizing task rewards
often leads to decreased human trust, while maximizing human trust would
compromise task performance. In this work, we address this dilemma by
formulating the HRI process as a two-player Markov game and utilizing the
reward-shaping technique to improve human trust while limiting performance
loss. Specifically, we show that when the shaping reward is potential-based,
the performance loss can be bounded by the potential functions evaluated at the
final states of the Markov game. We apply the proposed framework to the
experience-based trust model, resulting in a linear program that can be
efficiently solved and deployed in real-world applications. We evaluate the
proposed framework in a simulation scenario where a human-robot team performs a
search-and-rescue mission. The results demonstrate that the proposed framework
successfully modifies the robot's optimal policy, enabling it to increase human
trust at a minimal task performance cost.
Comment: In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
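For context, the standard potential-based shaping construction (Ng, Harada, and Russell, 1999) that the abstract builds on augments the task reward $R$ with a shaping term derived from a potential function $\Phi$ over states:

$$F(s, a, s') = \gamma\,\Phi(s') - \Phi(s), \qquad \tilde{R}(s, a, s') = R(s, a, s') + F(s, a, s'),$$

where $\gamma$ is the discount factor. In a single-agent MDP this form of shaping preserves the set of optimal policies; the paper's specific bound, which expresses the performance loss in terms of the potential evaluated at the final states of the two-player Markov game, is not reproduced here.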
Enabling Team of Teams: A Trust Inference and Propagation (TIP) Model in Multi-Human Multi-Robot Teams
Trust has been identified as a central factor for effective human-robot
teaming. Existing literature on trust modeling predominantly focuses on dyadic
human-autonomy teams where one human agent interacts with one robot. There is
little, if any, research on trust modeling in teams consisting of multiple
human agents and multiple robotic agents.
To fill this research gap, we present the trust inference and propagation
(TIP) model for trust modeling in multi-human multi-robot teams. In a
multi-human multi-robot team, we postulate that there exist two types of
experiences that a human agent has with a robot: direct and indirect
experiences. The TIP model presents a novel mathematical framework that
explicitly accounts for both types of experiences. To evaluate the model, we
conducted a human-subject experiment with 15 pairs of participants (N = 30).
Each pair performed a search and detection task with two drones. Results show
that our TIP model successfully captured the underlying trust dynamics and
significantly outperformed a baseline model. To the best of our knowledge, the
TIP model is the first mathematical framework for computational trust modeling
in multi-human multi-robot teams.
Comment: In Proceedings of Robotics: Science and Systems, 2023, Daegu, Korea.
arXiv admin note: text overlap with arXiv:2301.1092
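As a loose illustration only (the TIP model's actual update equations are given in the paper and are not reproduced here), the distinction between direct and indirect experiences might be sketched as follows; the update rules, weights, and function names are assumptions for illustration.

# Toy sketch, not the TIP model: a human's trust in a robot is nudged by
# direct experience (observing the robot's own outcomes) and by indirect
# experience (a teammate's reported trust in the same robot).
def update_trust(trust, direct_outcome=None, teammate_trust=None,
                 alpha=0.2, beta=0.05):
    """Return an updated trust estimate clipped to [0, 1]."""
    if direct_outcome is not None:               # 1.0 = observed success, 0.0 = failure
        trust += alpha * (direct_outcome - trust)
    if teammate_trust is not None:               # trust propagated from a teammate
        trust += beta * (teammate_trust - trust)
    return min(max(trust, 0.0), 1.0)

trust = 0.5
trust = update_trust(trust, direct_outcome=1.0)    # the robot succeeds at a task
trust = update_trust(trust, teammate_trust=0.8)    # a teammate reports high trust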