288 research outputs found

    Assisting Human Decisions in Document Matching

    Many practical applications, ranging from paper-reviewer assignment in peer review to job-applicant matching in hiring, require human decision makers to identify relevant matches by combining their expertise with predictions from machine learning models. In many such model-assisted document matching tasks, decision makers have stressed the need for assistive information about the model outputs (or the data) to facilitate their decisions. In this paper, we devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance (in terms of accuracy and time). Through a crowdsourced study (N=271 participants), we find that providing black-box model explanations reduces users' accuracy on the matching task, contrary to the commonly held belief that they are helpful because they allow a better understanding of the model. On the other hand, custom methods designed to closely attend to task-specific desiderata are found to be effective in improving user performance. Surprisingly, we also find that users' perceived utility of assistive information is misaligned with its objective utility (measured through their task performance).
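
    As a purely illustrative aside (not the study's task or method), the sketch below shows one generic form that assistive information for a document-matching model could take: leave-one-out (occlusion) importance scores over candidate-document tokens, computed against a black-box similarity score. The documents, the TF-IDF cosine-similarity "model", and all names are hypothetical placeholders.

```python
# Minimal sketch, assuming a black-box matching score: token-level importance
# obtained by measuring the drop in score when each candidate token is removed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "graph neural networks for molecule property prediction"
candidate = "predicting molecular properties with graph neural networks"

def match_score(a: str, b: str) -> float:
    # Treated as a black box: TF-IDF cosine similarity fit on the pair.
    vec = TfidfVectorizer().fit([a, b])
    m = vec.transform([a, b])
    return float(cosine_similarity(m[0], m[1])[0, 0])

base = match_score(query, candidate)
tokens = candidate.split()
importance = {}
for i, tok in enumerate(tokens):
    occluded = " ".join(tokens[:i] + tokens[i + 1:])
    importance[tok] = base - match_score(query, occluded)  # drop in score

# Tokens whose removal hurts the score most are flagged as most relevant.
for tok, imp in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{tok:12s} {imp:+.3f}")
```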

    Design and Evaluation of User-Centered Explanations for Machine Learning Model Predictions in Healthcare

    Challenges in interpreting some high-performing models complicate the application of machine learning (ML) techniques to healthcare problems. Recently, there has been rapid growth in research on model interpretability; however, approaches to explaining complex ML models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking, especially in healthcare. This makes it challenging to determine which explanation approaches might enable providers to understand model predictions in a comprehensible and useful way. Therefore, I aimed to use clinician perspectives to inform the design of explanations for ML-based prediction tools and to improve the adoption of these systems in practice. In this dissertation, I proposed a new theoretical framework for designing user-centered explanations for ML-based systems. I then applied the framework to propose explanation designs for predictions from a pediatric in-hospital mortality risk model. I conducted focus groups with healthcare providers to obtain feedback on the proposed designs, which was used to inform the design of a user-centered explanation. The user-centered explanation was evaluated in a laboratory study to assess its effect on healthcare providers' perceptions of the model and on their decision-making processes. The results demonstrated that the user-centered explanation design improved providers' perceptions of using the predictive model in practice, but had no significant effect on provider accuracy, confidence, or efficiency in making decisions. Limitations of the evaluation study design, including a small sample size, may have affected the ability to detect an impact on decision-making. Nonetheless, the predictive model with the user-centered explanation was positively received by healthcare providers and demonstrates a viable approach to explaining ML model predictions in healthcare. Future work is required to address the limitations of this study and to further explore the potential benefits of user-centered explanation designs for predictive models in healthcare. This work contributes a new theoretical framework for user-centered explanation design for ML-based systems that generalizes beyond the healthcare domain. Moreover, it provides meaningful insights into the role of model interpretability and explanation in healthcare while advancing the discussion of how to effectively communicate ML model information to healthcare providers.

    Enhancing Fraud Detection Through Interpretable Machine Learning


    Eliciting Expertise

    Since the last edition of this book, there have been rapid developments in the use and exploitation of formally elicited knowledge. Previously (Shadbolt and Burton, 1995), the emphasis was on eliciting knowledge for the purpose of building expert or knowledge-based systems. These systems are computer programs intended to solve real-world problems, achieving the same level of accuracy as human experts. Knowledge engineering is the discipline that has evolved to support the whole process of specifying, developing and deploying knowledge-based systems (Schreiber et al., 2000). This chapter discusses the problem of knowledge elicitation for knowledge-intensive systems in general.

    Capturing Users’ Reality: A Novel Approach to Generate Coherent Counterfactual Explanations

    The opacity of Artificial Intelligence (AI) systems is a major impediment to their deployment. Explainable AI (XAI) methods that automatically generate counterfactual explanations for AI decisions can increase users’ trust in AI systems. Coherence is an essential property of explanations but is not yet sufficiently addressed by existing XAI methods. We design a novel optimization-based approach to generate coherent counterfactual explanations, applicable to numerical, categorical, and mixed data. We demonstrate the approach in a realistic setting and assess its efficacy in a human-grounded evaluation. The results suggest that our approach produces explanations that are perceived as coherent as well as suitable for explaining the factual situation.
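
    The paper's contribution lies in the coherence property and the handling of categorical and mixed data; as a rough illustration of the underlying idea only, the sketch below shows the generic optimization template for counterfactual search on numerical features (a proximity term plus a validity term). The toy data, weights, and names are hypothetical, not the authors' actual objective or constraints.

```python
# Minimal sketch, assuming a trained classifier f: find a counterfactual x'
# that stays close to the factual instance x while flipping f's prediction.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # toy numerical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy labels
clf = LogisticRegression().fit(X, y)

x_factual = X[y == 0][0]   # an instance currently classified as 0
target_class = 1           # desired (counterfactual) outcome
lam = 5.0                  # trade-off between proximity and validity

def objective(x_prime):
    # Proximity term: keep the counterfactual close to the factual instance.
    proximity = np.sum((x_prime - x_factual) ** 2)
    # Validity term: push predicted probability toward the target class.
    p_target = clf.predict_proba(x_prime.reshape(1, -1))[0, target_class]
    return proximity + lam * (1.0 - p_target)

res = minimize(objective, x_factual, method="Nelder-Mead")
x_cf = res.x
print("factual:       ", np.round(x_factual, 3))
print("counterfactual:", np.round(x_cf, 3))
print("new prediction:", clf.predict(x_cf.reshape(1, -1))[0])
```

    A coherent counterfactual, in the paper's sense, would additionally constrain which feature changes may co-occur; this sketch omits such constraints.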