46 research outputs found
A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction
Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become
an important area of focus for both researchers and practitioners. Various
approaches have been used to achieve it, such as confidence scores,
explanations, trustworthiness cues, or uncertainty communication. However, a
comprehensive understanding of the field is lacking due to the diversity of
perspectives arising from various backgrounds that influence it and the lack of
a single definition for appropriate trust. To investigate this topic, this
paper presents a systematic review to identify current practices in building
appropriate trust, different ways to measure it, types of tasks used, and
potential challenges associated with it. We also propose a Belief, Intentions,
and Actions (BIA) mapping to study commonalities and differences in the
concepts related to appropriate trust by (a) describing the existing
disagreements on defining appropriate trust, and (b) providing an overview of
the concepts and definitions related to appropriate trust in AI from the
existing literature. Finally, the challenges identified in studying appropriate
trust are discussed, and observations are summarized as current trends,
potential gaps, and research opportunities for future work. Overall, the paper
provides insights into the complex concept of appropriate trust in human-AI
interaction and presents research opportunities to advance our understanding of
this topic.
Comment: 39 Pages
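The BIA mapping described above can be made concrete with a small sketch. The Python below is a hypothetical illustration, not taken from the paper: the class, the field names, and the two example concept entries are assumptions chosen only to show how trust-related concepts could be annotated with the Belief, Intention, and Action components they emphasize and then compared.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the Belief, Intentions, and Actions (BIA)
# mapping: each trust-related concept from the literature is annotated with
# the BIA components it emphasizes. All entries below are illustrative.

@dataclass
class TrustConcept:
    name: str                                     # concept as named in a source paper
    definition: str                               # short working definition
    components: set = field(default_factory=set)  # subset of {"belief", "intention", "action"}

CONCEPTS = [
    TrustConcept(
        name="calibrated trust",
        definition="trust that matches the system's actual trustworthiness",
        components={"belief"},
    ),
    TrustConcept(
        name="appropriate reliance",
        definition="accepting correct AI advice and rejecting incorrect advice",
        components={"intention", "action"},
    ),
]

def shared_components(a: TrustConcept, b: TrustConcept) -> set:
    """Return the BIA components two concepts have in common."""
    return a.components & b.components

if __name__ == "__main__":
    # Comparing the two example concepts shows they stress different BIA parts.
    print(shared_components(CONCEPTS[0], CONCEPTS[1]))  # -> set()
```

Annotating concepts this way makes the commonalities and disagreements discussed in the review easy to inspect side by side.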
Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology
The rapid development of Artificial Intelligence (AI) requires developers and
designers of AI systems to focus on the collaboration between humans and
machines. AI explanations of system behavior and reasoning are vital for
effective collaboration by fostering appropriate trust, ensuring understanding,
and addressing issues of fairness and bias. However, various contextual and
subjective factors can influence an AI system explanation's effectiveness. This
work draws inspiration from findings in cognitive psychology to understand how
effective explanations can be designed. We identify four components to which
explanation designers can pay special attention: perception, semantics, intent,
and user & context. We illustrate the use of these four explanation components
with an example of estimating food calories by combining text with visuals,
probabilities with exemplars, and intent communication with both user and
context in mind. We propose that a significant challenge for effective AI
explanations is the additional step between explanation generation, when the
underlying algorithms do not produce inherently interpretable explanations, and
explanation communication. We believe this extra step will benefit from
carefully considering the four explanation components outlined in our work,
which can positively affect the explanation's effectiveness.
Comment: 2022 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX)
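As a concrete, hedged illustration of the four explanation components (perception, semantics, intent, and user & context), the sketch below assembles the food-calorie example as a simple data structure. The class, field names, and example values are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of composing an AI explanation around the four
# components identified in the paper: perception, semantics, intent,
# and user & context. Field names and example values are illustrative.

@dataclass
class Explanation:
    perception: str    # how the explanation is presented (modality, layout)
    semantics: str     # what the content means (probabilities, exemplars)
    intent: str        # why the system communicates this explanation
    user_context: str  # who receives it and in what situation

def calorie_estimate_explanation(estimate_kcal: int, confidence: float) -> Explanation:
    """Assemble the food-calorie example: text combined with visuals,
    probabilities combined with exemplars, with intent and user in mind."""
    return Explanation(
        perception="photo of the meal annotated with a short text summary",
        semantics=f"estimate of {estimate_kcal} kcal (confidence {confidence:.0%}), "
                  "plus two exemplar meals with known calorie counts",
        intent="help the user judge whether to rely on the estimate",
        user_context="layperson logging meals on a phone, no nutrition background",
    )

if __name__ == "__main__":
    print(calorie_estimate_explanation(540, 0.72))
```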
How should a virtual agent present psychoeducation?
BACKGROUND AND OBJECTIVE: With the rise of autonomous e-mental health applications, virtual agents can play a major role in improving trustworthiness, therapy outcome and adherence. In these applications, it is important not only that patients adhere in the sense of performing the tasks, but also that they follow the specific recommendations on how to do them well. One important construct in improving adherence is psychoeducation: information on the why and how of therapeutic interventions. In an e-mental health context, this can be delivered in two different ways: verbally by a (virtual) embodied conversational agent or just via text on the screen.
Considering patient safety in autonomous e-mental health systems - detecting risk situations and referring patients back to human care
Background: Digital health interventions can fill gaps in mental healthcare provision. However, autonomous e-mental health (AEMH) systems also present challenges for effective risk management. To balance autonomy and safety, AEMH systems need to detect risk situations and act on them appropriately. One option is sending automatic alerts to carers, but such 'auto-referral' could lead to missed cases or false alerts. Requiring users to actively self-refer offers an alternative, but this can also be risky as it relies on their motivation to do so. This study set out with two objectives: firstly, to develop guidelines for risk detection and auto-referral systems; secondly, to understand how persuasive techniques, mediated by a virtual agent, can facilitate self-referral.
Methods: In a formative phase, interviews with experts, alongside a literature review, were used to develop a risk detection protocol. Two referral protocols were developed: one involving auto-referral, the other motivating users to self-refer. The latter was tested via crowd-sourcing (n = 160). Participants were asked to imagine they had sleeping problems, with the severity of the problem and their stance on seeking help varying between scenarios. They then chatted with a virtual agent, who either directly facilitated referral, tried to persuade the user, or accepted that they did not want help. After the conversation, participants rated their intention to self-refer, their intention to chat with the agent again, and their feeling of being heard by the agent.
Results: Whether the virtual agent facilitated, persuaded or accepted influenced all of these measures. Users who were initially negative or doubtful about self-referral could be persuaded. For users who were initially positive about seeking human care, persuasion did not affect their intentions, indicating that simply facilitating referral without persuasion was sufficient.
Conclusion: This paper presents a protocol that elucidates the steps and decisions involved in risk detection, something that is relevant for all types of AEMH systems. In the case of self-referral, our study shows that a virtual agent can increase users' intention to self-refer. Moreover, the strategy of the agent influenced the intentions of the user afterwards. This highlights the importance of a personalised approach to promote the user's access to appropriate care.
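The decision logic sketched in this abstract (detect risk, auto-refer in acute cases, otherwise have the virtual agent facilitate, persuade, or accept depending on the user's stance) can be illustrated with a short sketch. The Python below is hypothetical: the severity scale, thresholds, stance labels, and strategy strings are assumptions, not the study's actual protocol.

```python
from enum import Enum

# Hypothetical sketch of the referral decision logic described in the abstract.
# Severity thresholds, stance labels, and strategy names are assumptions for
# illustration only.

class Stance(Enum):
    POSITIVE = "positive about seeking human care"
    DOUBTFUL = "doubtful about self-referral"
    NEGATIVE = "negative about self-referral"

def choose_referral_strategy(risk_severity: int, stance: Stance) -> str:
    """Return the action of an AEMH system for a detected risk situation.

    risk_severity: 0 (none) .. 3 (acute risk) -- illustrative scale.
    """
    if risk_severity >= 3:
        # Acute risk: alert a human carer directly rather than relying on
        # the user's own motivation (auto-referral).
        return "auto-refer: alert human carer"
    if risk_severity >= 1:
        # Non-acute risk: the virtual agent promotes self-referral.
        if stance is Stance.POSITIVE:
            # Users already willing to seek help only need facilitation.
            return "facilitate: help the user contact human care"
        # Negative or doubtful users may be open to persuasion.
        return "persuade: motivate the user to self-refer, then facilitate"
    return "continue autonomous support, keep monitoring"

if __name__ == "__main__":
    print(choose_referral_strategy(2, Stance.DOUBTFUL))
```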
Piecing Together the Puzzle: Understanding Trust in Human-AI Teams (Short Paper)
With the increasing adoption of Artificial Intelligence (AI) as a crucial component of business strategy, establishing trust between humans and AI teammates remains a key issue. The project “We are in this together” highlights current theories on trust in Human-AI teams (HAIT) and proposes a research model that integrates insights from Industrial and Organizational Psychology, Human Factors Engineering, Human-Computer Interaction, and Computer Science. The proposed model suggests that in HAIT, trust involves multiple actors and is critical for team success. We present three main propositions for understanding trust in HAIT collaboration, focused on trustworthiness and reactions to trustworthiness in interpersonal relationships between humans and AI teammates. We further suggest that individual, technological, and environmental factors impact trust relationships in HAIT. The project aims to contribute to developing effective HAIT by proposing a research model of trust in HAIT.
Exploring the effect of automation failure on the human's trustworthiness in human-agent teamwork
Introduction: Collaboration in teams composed of both humans and automation has an interdependent nature, which demands calibrated trust among all the team members. To build suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. In particular, automation occasionally fails to do its job, which leads to a decrease in a human's trust. Research has found interesting effects of such a reduction of trust on the human's trustworthiness, i.e., the human characteristics that make them more or less reliable. This paper investigates how automation failure in a human-automation collaborative scenario affects the human's trust in the automation, as well as the human's trustworthiness towards the automation.
Methods: We present a 2 × 2 mixed-design experiment in which the participants perform a simulated task in a 2D grid-world, collaborating with an automation in a "moving-out" scenario. During the experiment, we measure the participants' trustworthiness, trust, and liking regarding the automation, both subjectively and objectively.
Results: Our results show that automation failure negatively affects the human's trustworthiness, as well as their trust in and liking of the automation.
Discussion: Understanding the effects of automation failure on trust and trustworthiness can contribute to a better understanding of the nature and dynamics of trust in these teams and to improving human-automation teamwork.
How should a virtual agent present psychoeducation? - Appendix part I - Psychoeducation
Text of the psychoeducation as used in the experiment described in the paper "How should a virtual agent present psychoeducation?" by Tielman et al.