Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable: they should provide easy-to-interpret rationales for their behavior so that passengers, insurance companies, law enforcement, developers, and others can understand what triggered a particular behavior. Explanations may be triggered by the neural controller (introspective explanations) or merely informed by the neural controller's output (rationalizations). Our work focuses on the challenge of generating introspective explanations of deep models for self-driving vehicles.
In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output; some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior.
In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g., "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment: strong alignment and weak alignment.
These explainable systems represent an externalization of tacit knowledge: the network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match the training data. In Chapter 5, we propose to address this issue by augmenting the training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, in which we train an end-to-end vehicle controller that accepts advice and adapts both the way it attends to the scene (visual attention) and its control outputs (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g., "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
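For concreteness, the following is a minimal sketch of the kind of attention-based steering controller the abstract describes: a convolutional encoder produces a grid of spatial features, a soft attention map weights those features, and a regressor predicts the steering angle while exposing the attention map for visualization. This is an illustrative PyTorch reconstruction with assumed layer sizes and names, not the dissertation's actual architecture, and it omits the causal filtering step.

```python
# Minimal sketch of a visual-attention steering controller (assumed PyTorch).
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class AttentiveSteeringNet(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Convolutional encoder: image -> (feat_dim, H', W') spatial feature grid
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 5, stride=2), nn.ReLU(),
        )
        # 1x1 conv scores each spatial location for attention
        self.attn_score = nn.Conv2d(feat_dim, 1, 1)
        self.regressor = nn.Linear(feat_dim, 1)  # feature context -> steering angle

    def forward(self, images):
        feats = self.encoder(images)                 # (B, C, H, W)
        b, c, h, w = feats.shape
        scores = self.attn_score(feats).view(b, -1)  # (B, H*W)
        attn = torch.softmax(scores, dim=1)          # soft attention over locations
        # Attention-weighted sum of spatial features -> (B, C) context vector
        context = (feats.view(b, c, -1) * attn.unsqueeze(1)).sum(-1)
        steering = self.regressor(context).squeeze(-1)
        # Returning attn lets us visualize which image regions drove the output
        return steering, attn.view(b, h, w)

# Usage: predict steering angles and recover attention maps for a batch
model = AttentiveSteeringNet()
angles, attn_maps = model(torch.randn(4, 3, 128, 128))
```

On top of such a controller, the causal filtering step described in Chapter 3 would perturb high-attention regions and retain only those whose removal actually changes the predicted steering, separating true influences from spurious ones.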
Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 97 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, usability, and human-AI collaboration performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines for designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
Design and Evaluation of User-Centered Explanations for Machine Learning Model Predictions in Healthcare
The difficulty of interpreting some high-performing models complicates the application of machine learning (ML) techniques to healthcare problems. Recently, there has been rapid growth in research on model interpretability; however, approaches to explaining complex ML models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking, especially in healthcare. This makes it challenging to determine what explanation approaches might enable providers to understand model predictions in a comprehensible and useful way. Therefore, I aimed to utilize clinician perspectives to inform the design of explanations for ML-based prediction tools and improve the adoption of these systems in practice.
In this dissertation, I proposed a new theoretical framework for designing user-centered explanations for ML-based systems. I then utilized the framework to propose explanation designs for predictions from a pediatric in-hospital mortality risk model. I conducted focus groups with healthcare providers to obtain feedback on the proposed designs, which was used to inform the design of a user-centered explanation. The user-centered explanation was evaluated in a laboratory study to assess its effect on healthcare provider perceptions of the model and decision-making processes.
The results demonstrated that the user-centered explanation design improved provider perceptions of utilizing the predictive model in practice, but exhibited no significant effect on provider accuracy, confidence, or efficiency in making decisions. Limitations of the evaluation study design, including a small sample size, may have affected the ability to detect an impact on decision-making. Nonetheless, the predictive model with the user-centered explanation was positively received by healthcare providers, and demonstrated a viable approach to explaining ML model predictions in healthcare. Future work is required to address the limitations of this study and further explore the potential benefits of user-centered explanation designs for predictive models in healthcare.
This work contributes a new theoretical framework for user-centered explanation design for ML-based systems that is generalizable outside the domain of healthcare. Moreover, the work provides meaningful insights into the role of model interpretability and explanation in healthcare while advancing the discussion on how to effectively communicate ML model information to healthcare providers.
Explaining Explainable Artificial Intelligence: An integrative model of objective and subjective influences on XAI
Explainable artificial intelligence (XAI) is a new field within artificial intelligence (AI) and machine learning (ML). XAI offers a transparency into AI and ML systems that can bridge the information gap left by "black-box" ML models. Given its nascency, there are several taxonomies of XAI in the literature. The current paper incorporates these taxonomies into one unifying framework, which defines the types of explanations, types of transparency, and model methods that together inform the user's process of developing trust in AI and ML systems.
User Feedback in Controllable and Explainable Social Recommender Systems: a Linguistic Analysis
Controllable and explainable intelligent user interfaces have been used to provide transparent recommendations. Many researchers have explored interfaces that support user control and provide explanations of the recommendation process and models. To extend this work to real-world decision-making scenarios, we need a further understanding of users' mental models of the enhanced system components. In this paper, we take a step in this direction by investigating the free-form feedback left by users of social recommender systems to specify their reasons for selecting prompted social recommendations. In a user study involving 50 subjects (N=50), we present the linguistic changes associated with using controllable and explainable interfaces for a social information-seeking task. Based on our findings, we discuss design implications for controllable and explainable recommender systems.