9 research outputs found

    A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction

    Full text link
    Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communication. However, a comprehensive understanding of the field is lacking, due to the diversity of perspectives from the various backgrounds that shape it and the lack of a single definition of appropriate trust. To investigate this topic, this paper presents a systematic review to identify current practices in building appropriate trust, different ways to measure it, types of tasks used, and potential challenges associated with it. We also propose a Belief, Intentions, and Actions (BIA) mapping to study commonalities and differences in the concepts related to appropriate trust by (a) describing the existing disagreements on defining appropriate trust, and (b) providing an overview of the concepts and definitions related to appropriate trust in AI from the existing literature. Finally, the challenges identified in studying appropriate trust are discussed, and observations are summarized as current trends, potential gaps, and research opportunities for future work. Overall, the paper provides insights into the complex concept of appropriate trust in human-AI interaction and presents research opportunities to advance our understanding of this topic. Comment: 39 pages

    Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology

    Full text link
    The rapid development of Artificial Intelligence (AI) requires developers and designers of AI systems to focus on the collaboration between humans and machines. AI explanations of system behavior and reasoning are vital for effective collaboration by fostering appropriate trust, ensuring understanding, and addressing issues of fairness and bias. However, various contextual and subjective factors can influence the effectiveness of an AI system's explanations. This work draws inspiration from findings in cognitive psychology to understand how effective explanations can be designed. We identify four components to which explanation designers can pay special attention: perception, semantics, intent, and user & context. We illustrate the use of these four explanation components with an example of estimating food calories by combining text with visuals, probabilities with exemplars, and intent communication with both user and context in mind. We propose that a significant challenge for effective AI explanations is the additional step between explanation generation, using algorithms that do not themselves produce interpretable explanations, and explanation communication. We believe this extra step will benefit from carefully considering the four explanation components outlined in our work, which can positively affect the explanation's effectiveness. Comment: 2022 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX)
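    To make the four components above concrete, here is a minimal, hypothetical Python sketch of how an explanation could be captured as a structure with perception, semantics, intent, and user & context fields, applied to the food-calorie example. The field names and example values are assumptions for illustration only; the paper proposes the components conceptually and does not prescribe this structure.

    from dataclasses import dataclass

    @dataclass
    class Explanation:
        perception: str    # how the explanation is presented (e.g., text combined with visuals)
        semantics: str     # what the content conveys (e.g., a probability paired with exemplars)
        intent: str        # why the system communicates this explanation
        user_context: str  # who receives it and in what situation

    # Hypothetical food-calorie example, echoing the abstract's illustration.
    calorie_explanation = Explanation(
        perception="photo of the dish annotated with the estimated calorie range",
        semantics="'about 550 kcal (80% confidence)', shown next to two similar example dishes",
        intent="help the user decide whether to rely on the estimate",
        user_context="a dieting user logging a meal on a phone while in a hurry",
    )
    print(calorie_explanation.intent)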

    How should a virtual agent present psychoeducation?

    Get PDF
    BACKGROUND AND OBJECTIVE: With the rise of autonomous e-mental health applications, virtual agents can play a major role in improving trustworthiness, therapy outcome and adherence. In these applications, it is important not only that patients adhere in the sense that they perform the tasks, but also that they follow the specific recommendations on how to do them well. One important construct in improving adherence is psychoeducation: information on the why and how of therapeutic interventions. In an e-mental health context, this can be delivered in two different ways: verbally, by a (virtual) embodied conversational agent, or just via text on the screen.

    Exploring the effect of automation failure on the human’s trustworthiness in human-agent teamwork

    Get PDF
    Introduction: Collaboration in teams composed of both humans and automation has an interdependent nature, which demands calibrated trust among all the team members. To build suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. In particular, automation occasionally fails to do its job, which leads to a decrease in a human’s trust. Research has found interesting effects of such a reduction of trust on the human’s trustworthiness, i.e., the human characteristics that make them more or less reliable. This paper investigates how automation failure in a human-automation collaborative scenario affects the human’s trust in the automation, as well as the human’s trustworthiness towards the automation. Methods: We present a 2 × 2 mixed design experiment in which the participants perform a simulated task in a 2D grid-world, collaborating with an automation in a “moving-out” scenario. During the experiment, we measure the participants’ trustworthiness, trust, and liking regarding the automation, both subjectively and objectively. Results: Our results show that automation failure negatively affects the human’s trustworthiness, as well as their trust in and liking of the automation. Discussion: Learning the effects of automation failure on trust and trustworthiness can contribute to a better understanding of the nature and dynamics of trust in these teams and to improving human-automation teamwork.
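    As a side note on objective trust measurement, a common proxy in the literature is the reliance rate: the fraction of decisions in which the human follows the automation's advice. The abstract does not specify which objective measures were used, so the following Python snippet is only a hypothetical illustration of the general idea, with made-up data.

    def reliance_rate(advice_followed):
        """advice_followed: list of booleans, one per decision (True = human followed the automation)."""
        return sum(advice_followed) / len(advice_followed) if advice_followed else 0.0

    # Made-up decisions before and after an automation failure.
    before_failure = [True, True, True, False, True]
    after_failure = [True, False, False, True, False]
    print(reliance_rate(before_failure))  # 0.8
    print(reliance_rate(after_failure))   # 0.4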

    Abstract Argumentation for Hybrid Intelligence Scenarios

    No full text
    Hybrid Intelligence (HI) is the combination of human and machine intelligence, expanding human intellect instead of replacing it. Information in HI scenarios is often inconsistent, e.g. due to shifting preferences, users' motivations, or conflicts arising from merged data. As it provides an intuitive mechanism for reasoning with conflicting information, with natural explanations that are understandable to humans, our hypothesis is that Dung's Abstract Argumentation (AA) is a suitable formalism for such hybrid scenarios. This paper investigates the capabilities of Argumentation in representing and reasoning in the presence of inconsistency, and its potential for intuitive explainability linking artificial and human actors. To this end, we conduct a survey among a number of research projects of the Hybrid Intelligence Centre. Within these projects we analyse the applicability of argumentation with respect to various inconsistency types stemming, for instance, from commonsense reasoning, decision making, and negotiation. The results show that 14 out of the 21 projects have to deal with inconsistent information. In half of those scenarios, the knowledge models come with natural preference relations over the information. We show that Argumentation is a suitable framework to model the specific knowledge in 10 out of 14 projects, indicating the potential of Abstract Argumentation for transparently dealing with inconsistencies in Hybrid Intelligence systems.
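    For readers unfamiliar with the formalism, the following Python sketch computes the grounded extension of a tiny Dung-style argumentation framework by iterating the characteristic function. The framework itself is a made-up example used only to illustrate AA; it is not drawn from any of the surveyed projects.

    def grounded_extension(arguments, attacks):
        """Least fixed point of Dung's characteristic function for a finite framework.

        arguments: set of argument labels
        attacks:   set of (attacker, attacked) pairs
        """
        def acceptable(arg, defenders):
            # arg is acceptable w.r.t. defenders if every attacker of arg
            # is itself attacked by some member of defenders.
            attackers = {a for (a, b) in attacks if b == arg}
            return all(any((d, a) in attacks for d in defenders) for a in attackers)

        extension = set()
        while True:
            new = {arg for arg in arguments if acceptable(arg, extension)}
            if new == extension:
                return extension
            extension = new

    # Made-up framework: c attacks b, and b attacks a.
    args = {"a", "b", "c"}
    atts = {("b", "a"), ("c", "b")}
    print(grounded_extension(args, atts))  # {'a', 'c'}: c is unattacked and defends a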

    HyEnA: A Hybrid Method for Extracting Arguments from Opinions (BEST PAPER AWARD)

    No full text
    The key arguments underlying a large and noisy set of opinions help in understanding those opinions quickly and accurately. Fully automated methods can extract arguments, but (1) they require large labeled datasets and (2) they work well for known viewpoints, not for novel points of view. We propose HyEnA, a hybrid (human + AI) method for extracting arguments from opinionated texts, combining the speed of automated processing with the understanding and reasoning capabilities of humans. We evaluate HyEnA on three feedback corpora. We find that, on the one hand, HyEnA achieves higher coverage and precision than a state-of-the-art automated method when compared on a common set of diverse opinions, justifying the need for human insight. On the other hand, HyEnA requires less human effort and does not compromise quality compared to (fully manual) expert analysis, demonstrating the benefit of combining human and machine intelligence.
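    The evaluation terms mentioned above can be illustrated with a small, hypothetical Python snippet that scores an extracted argument set against a reference set: precision is the share of extracted arguments that appear in the reference, and coverage is the share of reference arguments that were recovered. The exact-match comparison and the placeholder items are simplifications for brevity; the paper's actual evaluation protocol is not reproduced here.

    def precision_and_coverage(extracted, reference):
        extracted, reference = set(extracted), set(reference)
        matched = extracted & reference
        precision = len(matched) / len(extracted) if extracted else 0.0
        coverage = len(matched) / len(reference) if reference else 0.0
        return precision, coverage

    # Placeholder labels standing in for extracted and reference key arguments.
    extracted = {"arg_1", "arg_2", "arg_3"}
    reference = {"arg_1", "arg_2", "arg_4", "arg_5"}
    print(precision_and_coverage(extracted, reference))  # (0.666..., 0.5)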