
    Individual differences in sensitivity to visuomotor discrepancies

    This study explored whether sensitivity to visuomotor discrepancies, specifically the ability to detect and respond to loss of control over a moving object, is associated with other psychological traits and abilities. College-aged adults performed a computerized tracking task that involved keeping a cursor centered on a moving target using keyboard controls. On some trials, the cursor became unresponsive to participants' keypresses, and participants were instructed to press the space bar immediately if they noticed the loss of control. Response times (RTs) to these events were measured. Participants also completed a battery of behavioral and questionnaire-based tests with hypothesized relationships to the phenomenology of control, including measures of constructs such as locus of control, impulsiveness, need for cognition (NFC), and non-clinical schizotypy. Bivariate correlations between RTs to loss of control and higher-order cognitive and personality traits were not significant. However, a stepwise regression showed that better performance on the pursuit rotor task predicted faster RTs to loss of control while controlling for age, signal detection, and NFC. Results are discussed in relation to multifactorial models of the sense of agency.
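
    To make the reported analysis concrete, the sketch below shows one way a regression of this shape could be set up in Python with statsmodels. The column names (rt_loss_of_control, pursuit_rotor, age, signal_detection_dprime, nfc), the synthetic data, and the single final model are illustrative assumptions, not the study's actual variables or its exact stepwise procedure.

    ```python
    # Illustrative sketch only: hypothetical variable names and synthetic data,
    # not the study's dataset or its exact stepwise procedure.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 80  # assumed sample size for the illustration

    df = pd.DataFrame({
        "pursuit_rotor": rng.normal(0, 1, n),            # time-on-target, standardized
        "age": rng.integers(18, 25, n),                  # college-aged adults
        "signal_detection_dprime": rng.normal(1.5, 0.5, n),
        "nfc": rng.normal(0, 1, n),                      # need-for-cognition score
    })
    # Simulate RTs so that better pursuit-rotor performance -> faster detection.
    df["rt_loss_of_control"] = 900 - 60 * df["pursuit_rotor"] + rng.normal(0, 80, n)

    # Final step of a stepwise model: pursuit-rotor performance entered
    # alongside the covariates (age, signal detection, NFC).
    model = smf.ols(
        "rt_loss_of_control ~ age + signal_detection_dprime + nfc + pursuit_rotor",
        data=df,
    ).fit()
    print(model.summary())
    ```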

    Examining the effect of explanation on satisfaction and trust in AI diagnostic systems

    Background: Artificial intelligence (AI) has the potential to revolutionize healthcare, and it is increasingly being deployed to support and assist medical diagnosis. One potential application is as the first point of contact for patients, providing an initial diagnosis before a patient is sent to a specialist and allowing healthcare professionals to focus on more challenging and critical aspects of treatment. For AI systems to succeed in this role, it will not be enough for them merely to provide accurate diagnoses and predictions; they will also need to explain (to both physicians and patients) why those diagnoses were made. Without such explanations, even accurate diagnoses and appropriate treatments might be ignored or rejected. Method: It is important to evaluate the effectiveness of these explanations and to understand the relative effectiveness of different kinds of explanation. In this paper, we examine this problem across two simulation experiments. In the first experiment, we tested a re-diagnosis scenario to understand the effect of local and global explanations. In the second, we implemented different forms of explanation in a similar diagnosis scenario. Results: Explanation improved satisfaction measures during the critical re-diagnosis period but had little effect before re-diagnosis (when initial treatment was taking place) or after it (when an alternate diagnosis resolved the case successfully). Furthermore, initial "global" explanations about the process had no impact on immediate satisfaction but improved later judgments of understanding of the AI. The second experiment showed that visual and example-based explanations integrated with rationales improved patient satisfaction and trust significantly more than no explanation or text-based rationales alone. As in Experiment 1, these explanations had their effect primarily on immediate measures of satisfaction during the re-diagnosis crisis, with little advantage before re-diagnosis or once the diagnosis was successfully resolved. Conclusion: These two studies support several conclusions about how patient-facing explanatory diagnostic systems may succeed or fail. Based on these studies and a review of the literature, we provide design recommendations for the explanations offered by AI systems in the healthcare domain.
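
    The pattern described above (explanation mattering mainly during the re-diagnosis phase) is essentially a condition-by-phase interaction. The sketch below summarizes synthetic satisfaction ratings for such a design; the condition and phase labels, sample sizes, and rating values are invented for illustration and are not the authors' materials or analysis code.

    ```python
    # Illustrative sketch: synthetic satisfaction ratings for an
    # explanation-condition x scenario-phase design. All labels are assumptions.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    conditions = ["no_explanation", "text_rationale", "visual_example_rationale"]
    phases = ["pre_rediagnosis", "rediagnosis", "resolution"]

    rows = []
    for cond in conditions:
        for phase in phases:
            # Assume explanations raise satisfaction mainly during re-diagnosis.
            boost = 1.5 if (phase == "rediagnosis" and cond != "no_explanation") else 0.0
            for rating in rng.normal(4.0 + boost, 1.0, size=30):  # Likert-style scale
                rows.append({"condition": cond, "phase": phase, "satisfaction": rating})

    df = pd.DataFrame(rows)

    # Cell means for the condition x phase design.
    print(df.pivot_table(values="satisfaction", index="condition",
                         columns="phase", aggfunc="mean").round(2))
    ```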

    Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance

    If a user is presented with an AI system that purports to explain how it works, how do we know whether the explanation works and the user has achieved a pragmatic understanding of the AI? This question entails some key concepts of measurement, such as explanation goodness and trust. We present methods for enabling developers and researchers to: (1) assess the a priori goodness of explanations, (2) assess users' satisfaction with explanations, (3) reveal users' mental models of an AI system, (4) assess users' curiosity or need for explanations, (5) assess whether users' trust in and reliance on the AI are appropriate, and finally (6) assess how the human-XAI work system performs. The methods we present derive from our integration of extensive research literatures and our own psychometric evaluations. We point to the previous research that led to the measurement scales, which we aggregated and tailored specifically for the XAI context. The scales are presented in sufficient detail to enable their use by XAI researchers. For mental model assessment and work system performance, XAI researchers have choices; we point to a number of methods, described in terms of their strengths and weaknesses, along with pertinent measurement issues.
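
    As a rough illustration of how questionnaire-style measures of this kind are usually scored, the sketch below computes a mean scale score from Likert-type responses, reverse-scoring negatively worded items. The item numbering, the reverse-scored set, and the 1-5 range are hypothetical placeholders, not the scales published in the paper.

    ```python
    # Illustrative sketch: scoring a Likert-type explanation-satisfaction scale.
    # The items, reverse-scored set, and 1-5 range are hypothetical placeholders,
    # not the published XAI scales.
    from statistics import mean

    SCALE_MAX = 5          # assume 1-5 agreement ratings
    REVERSE_SCORED = {2}   # e.g., a negatively worded item such as "The explanation was confusing."

    def score_satisfaction(responses: dict[int, int]) -> float:
        """Return the mean item score, reverse-scoring negatively worded items."""
        adjusted = [
            (SCALE_MAX + 1 - rating) if item in REVERSE_SCORED else rating
            for item, rating in responses.items()
        ]
        return mean(adjusted)

    # Example: one participant's ratings on a four-item scale.
    print(score_satisfaction({1: 4, 2: 2, 3: 5, 4: 4}))  # -> 4.25
    ```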