14,198 research outputs found

    Leveraging Rationales to Improve Human Task Performance

    Machine learning (ML) systems across many application areas increasingly demonstrate performance beyond that of humans. In response to the proliferation of such models, the field of Explainable AI (XAI) has sought to develop techniques that enhance the transparency and interpretability of machine learning methods. In this work, we consider a question not previously explored within the XAI and ML communities: given a computational system whose performance exceeds that of its human user, can explainable AI capabilities be leveraged to improve the performance of the human? We study this question in the context of the game of chess, for which computational game engines that surpass the performance of the average player are widely available. We introduce the Rationale-Generating Algorithm, an automated technique for generating rationales for utility-based computational methods, which we evaluate in a multi-day user study against two baselines. The results show that our approach produces rationales that lead to a statistically significant improvement in human task performance, demonstrating that rationales automatically generated from an AI's internal task model can be used not only to explain what the system is doing, but also to instruct the user and ultimately improve their task performance. Comment: ACM IUI 202
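    The rationale idea above lends itself to a simple illustration: for any decision maker that scores candidate actions with a utility function, a rationale can be produced by contrasting the chosen action's score with the alternatives'. The sketch below is only a toy version of that pattern, with a made-up evaluation table standing in for a chess engine; it is not the paper's Rationale-Generating Algorithm.

```python
# Toy sketch of utility-based rationale generation.
# The evaluation function and move list are illustrative placeholders,
# not the paper's algorithm or an actual chess engine.

def toy_evaluate(move: str) -> float:
    """Stand-in for an engine's utility estimate of a candidate move."""
    scores = {"Nf3": 0.31, "e4": 0.28, "d4": 0.27, "h4": -0.45}
    return scores.get(move, 0.0)

def generate_rationale(candidates: list[str]) -> str:
    """Rank candidates by utility and explain the best one relative to the rest."""
    ranked = sorted(candidates, key=toy_evaluate, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    margin = toy_evaluate(best) - toy_evaluate(runner_up)
    return (f"Chose {best}: its estimated utility ({toy_evaluate(best):+.2f}) "
            f"exceeds the next-best option {runner_up} by {margin:.2f}, "
            f"while the weakest candidate {ranked[-1]} scores {toy_evaluate(ranked[-1]):+.2f}.")

if __name__ == "__main__":
    print(generate_rationale(["e4", "d4", "Nf3", "h4"]))
```

    In the paper's setting, the scores would come from the engine's internal task model rather than a hard-coded table.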

    Innovations in Medical Image Analysis and Explainable AI for Transparent Clinical Decision Support Systems

    This thesis explores innovative methods designed to assist clinicians in their everyday practice, with a particular emphasis on Medical Image Analysis and explainability issues. The main challenge lies in interpreting the knowledge captured by machine learning algorithms, often referred to as black boxes, in order to provide transparent clinical decision support systems suitable for real integration into clinical practice. For this reason, all of the work exploits Explainable AI techniques to study and interpret the trained models. Given the many open problems in the development of clinical decision support systems, the project includes the analysis of various data types and pathologies. The main works focus on the most threatening disease afflicting the female population: breast cancer. They aim to diagnose and classify breast cancer from medical images, taking advantage of first-level examinations such as mammography screening and ultrasound imaging, as well as a more advanced examination such as MRI. Papers on breast cancer and microcalcification classification demonstrated the potential of shallow learning algorithms in terms of explainability and accuracy when intelligible radiomic features are used. Conversely, the combination of deep learning and Explainable AI methods showed impressive results for breast cancer detection; the local explanations provided via saliency maps were critical for model introspection as well as for increasing performance. To increase trust in these systems and enable their real use, a multi-level explanation was proposed, addressing the three main stakeholders who need transparent models: developers, physicians, and patients. Motivated by the enormous impact of COVID-19 on the world population, a fully explainable machine learning model was then proposed for COVID-19 prognosis prediction, exploiting the proposed multi-level explanation. Such a system is assumed to require two main components: 1) inherently explainable inputs, such as clinical, laboratory, and radiomic features; and 2) explainable methods capable of explaining the trained model both globally and locally. Together, these two requirements allow the developer to detect model bias and the doctor to verify the model's findings against clinical evidence and justify decisions to patients. These results were also confirmed in a study of coronary artery disease, in which machine learning algorithms were trained on intelligible clinical and radiomic features extracted from pericoronary adipose tissue to assess the condition of the coronary arteries. Finally, several national and international collaborations led to the analysis of data for the development of predictive models for neurological disorders; in particular, the predictive value of handwriting features for identifying depressed patients was explored. By training neural networks constrained by first-order logic, it was possible to obtain models that are both high-performing and explainable, going beyond the usual trade-off between explainability and accuracy.
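    As a rough illustration of the "inherently explainable inputs plus explainable methods" recipe described above, the sketch below trains a shallow model on synthetic stand-ins for clinical and radiomic features and reads off a global explanation (model coefficients) and a simple local one (per-feature contributions for a single patient). The data, feature names, and model choice are placeholders, not the thesis' actual pipeline.

```python
# Minimal sketch: a shallow, inherently interpretable model on intelligible
# (here: synthetic, made-up) clinical and radiomic-style features, with a
# global explanation (coefficients) and a local one (per-feature contributions).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["age", "crp_level", "lesion_radius", "lesion_texture"]  # placeholder names
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_std = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_std, y)

# Global explanation: which features drive the model overall.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"global weight  {name:>14}: {coef:+.2f}")

# Local explanation for one patient: per-feature contribution to the log-odds.
contributions = model.coef_[0] * X_std[0]
for name, c in zip(feature_names, contributions):
    print(f"patient 0 term {name:>14}: {c:+.2f}")
```

    For the deep learning models mentioned in the abstract, a saliency-map or SHAP-style explainer would play the analogous local-explanation role.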

    Multi-style explainable matrix factorization techniques for recommender systems.

    Black-box recommender system models are machine learning models that generate personalized recommendations without explaining to the user how the recommendations were generated or giving them a way to correct wrong assumptions made about them by the model. However, compared to white-box models, which are transparent and scrutable, black-box models are generally more accurate. Recent research has shown that accuracy alone is not sufficient for user satisfaction. One such black-box model is Matrix Factorization, a state-of-the-art recommendation technique that is widely used due to its ability to deal with sparse data sets and to produce accurate recommendations. Recent work has proposed new Matrix Factorization models that are explainable because they incorporate explanations derived from semantic knowledge graphs, user neighborhoods, or item neighborhood graphs into the model learning process. These Explainable Matrix Factorization (EMF) methods have the benefit of providing explanations without sacrificing accuracy; however, their explanations tend to be limited to a single explanation style. In this dissertation, we propose a framework comprising new machine learning methods to build explainable models that can make recommendations with multiple explanation styles, both by hybridizing multiple EMF models and by proposing new EMF models that explain recommendations using tags. The various pre-calculated explainability scores leveraged in our proposed methods have all been validated in prior work that conducted user studies to evaluate users' satisfaction with each style individually. Unlike most existing work that generates explanations post-hoc, i.e., after the predictions have already been made, our framework is based on calculating explainability scores directly from the available data, before the model is learned, and then using them as part of a regularization mechanism that guides the model learning. Unlike post-hoc methods, our framework thus makes it possible to learn machine learning models that take the explanation scores into account, ensuring higher transparency. Our evaluation experiments show that our proposed methods provide accurate recommendations while also providing users with multiple styles of explanations about how the data was used to generate each recommendation. Each explanation style also provides additional decision-making information that empowers the user to either trust or scrutinize the recommendations. Although rooted in the hybrid recommendation framework, our proposed methods represent a significant step forward in explainable AI and go beyond existing hybrid frameworks, because the proposed hybridization mechanisms make an intentional effort to take into account the individual models' explanations and not only their output predicted ratings.
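    One common way to fold pre-calculated explainability scores into Matrix Factorization training, in the spirit described above, is to add a regularization term that pulls a user's and an item's latent factors together whenever the item is highly explainable for that user. The numpy sketch below uses that formulation with random ratings and scores; the dissertation hybridizes several such models and score styles, so treat this only as an outline of the regularization mechanism.

```python
# Sketch of an Explainable Matrix Factorization (EMF) style SGD update:
# standard regularized MF loss plus a term proportional to a pre-computed
# explainability score E[u, i] that pulls the user and item factors together.
# Data, scores, and hyperparameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 20, 15, 4
R = np.where(rng.random((n_users, n_items)) < 0.3,
             rng.integers(1, 6, (n_users, n_items)), 0)   # sparse ratings, 0 = missing
E = rng.random((n_users, n_items))                        # pre-computed explainability scores

P = 0.1 * rng.standard_normal((n_users, k))               # user factors
Q = 0.1 * rng.standard_normal((n_items, k))               # item factors
lr, beta, lam = 0.01, 0.02, 0.1                           # step size, L2 weight, explainability weight

observed = list(zip(*np.nonzero(R)))
for epoch in range(50):
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]
        grad_p = -err * Q[i] + beta * P[u] + lam * E[u, i] * (P[u] - Q[i])
        grad_q = -err * P[u] + beta * Q[i] - lam * E[u, i] * (P[u] - Q[i])
        P[u] -= lr * grad_p
        Q[i] -= lr * grad_q

rmse = np.sqrt(np.mean([(R[u, i] - P[u] @ Q[i]) ** 2 for u, i in observed]))
print(f"train RMSE on observed ratings: {rmse:.3f}")
```

    Here lam controls how strongly the explainability scores shape the learned factors relative to pure rating accuracy.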

    Model Agnostic Explainable Selective Regression via Uncertainty Estimation

    With the wide adoption of machine learning techniques, requirements have evolved beyond sheer high performance; models are now often also required to be trustworthy. A common approach to increasing the trustworthiness of such systems is to allow them to refrain from predicting, a framework known as selective prediction. While selective prediction for classification tasks has been widely analyzed, the problem of selective regression is understudied. This paper presents a novel approach to selective regression that utilizes model-agnostic, non-parametric uncertainty estimation. Our proposed framework showcases superior performance compared to state-of-the-art selective regressors, as demonstrated through comprehensive benchmarking on 69 datasets. Finally, we use explainable AI techniques to gain an understanding of the drivers behind selective regression. We implement our selective regression method in the open-source Python package doubt and release the code used to reproduce our experiments.
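    A minimal way to see the selective-regression idea in code is to attach an uncertainty estimate to each prediction and abstain on the most uncertain fraction. The sketch below uses a bootstrap ensemble's prediction spread as the uncertainty proxy on a synthetic dataset; it mirrors the general framework only and is neither the paper's estimator nor the API of the doubt package.

```python
# Sketch of selective regression: abstain when a model-agnostic uncertainty
# proxy (here, the spread of a bootstrap ensemble) is too large.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
preds = []
for b in range(20):                                   # bootstrap ensemble
    idx = rng.integers(0, len(X_tr), len(X_tr))
    model = GradientBoostingRegressor(random_state=b).fit(X_tr[idx], y_tr[idx])
    preds.append(model.predict(X_te))
preds = np.array(preds)

mean, std = preds.mean(axis=0), preds.std(axis=0)
threshold = np.quantile(std, 0.8)                     # keep the 80% most certain predictions
accept = std <= threshold

mae_all = np.abs(mean - y_te).mean()
mae_sel = np.abs(mean[accept] - y_te[accept]).mean()
print(f"coverage {accept.mean():.0%}  MAE all {mae_all:.2f}  MAE selective {mae_sel:.2f}")
```

    Raising the acceptance quantile trades coverage for accuracy on the predictions the model keeps.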