589 research outputs found

    LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations

    Full text link
    The increased interest in deep learning applications, and their hard-to-detect biases, results in the need to validate and explain complex models. However, current explanation methods are limited in how they convey both the reasoning process and the prediction results: they usually only show which locations in the image were important for the model's prediction. Because users cannot interact with these explanations, it is difficult to verify and understand exactly how the model works, which creates a significant risk when using the model. The risk is compounded by the fact that explanations do not take into account the semantic meaning of the explained objects. To escape the trap of static and meaningless explanations, we propose a tool and a process called LIMEcraft. LIMEcraft enhances the explanation process by allowing the user to interactively select semantically consistent areas and thoroughly examine the prediction for an image instance with many image features. Experiments on several models show that our tool improves model safety by inspecting model fairness for image regions that may indicate model bias. The code is available at: http://github.com/MI2DataLab/LIMEcraf
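    The interactive selection is the core of the tool, but the underlying mechanism, replacing LIME's automatic segmentation with user-chosen, semantically consistent regions, can be sketched with the public lime package, whose explain_instance accepts a custom segmentation_fn. The toy image, classifier, and hand-drawn mask below are illustrative assumptions, not part of LIMEcraft itself.

```python
# Sketch: LIME explanation with a hand-crafted segmentation instead of the
# default automatic superpixels. Image, classifier, and mask are toy
# placeholders so the example runs end-to-end.
import numpy as np
from lime import lime_image

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)

def predict_fn(batch):
    # Hypothetical classifier: "positive" if the marked region is bright.
    scores = batch[:, 40:90, 60:120].mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1 - scores, scores], axis=1)

def user_segmentation(img):
    # Semantically consistent regions chosen by the user; LIMEcraft lets the
    # user draw these interactively instead of hard-coding a rectangle.
    mask = np.zeros(img.shape[:2], dtype=int)
    mask[40:90, 60:120] = 1
    return mask

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn,
    segmentation_fn=user_segmentation,  # replaces the default superpixels
    top_labels=1, num_samples=200,
)
label = explanation.top_labels[0]
print(explanation.local_exp[label])  # (region id, weight) pairs
```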

    Interpretable Machine Learning Model for Clinical Decision Making

    Get PDF
    Despite machine learning models being increasingly used in medical decision-making and meeting classification accuracy standards, they remain untrusted black boxes because decision-makers lack insight into their complex logic. It is therefore necessary to develop interpretable machine learning models that will engender trust in the knowledge they generate and contribute to clinical decision-makers' intention to adopt them in the field. The goal of this dissertation was to systematically investigate the applicability of interpretable, model-agnostic methods to explain the predictions of black-box machine learning models for medical decision-making. As a proof of concept, this study addressed the problem of predicting the risk of emergency readmission within 30 days of discharge for heart failure patients. Using a benchmark data set, supervised classification models of differing complexity were trained to perform the prediction task. More specifically, Logistic Regression (LR), Random Forests (RF), Decision Trees (DT), and Gradient Boosting Machines (GBM) models were constructed using the Healthcare Cost and Utilization Project (HCUP) Nationwide Readmissions Database (NRD). Precision, recall, and the area under the ROC curve were used to measure each model's predictive accuracy. Local Interpretable Model-Agnostic Explanations (LIME) was used to generate explanations from the underlying trained models, and the explanations were empirically evaluated using explanation stability and local fit (R²). The results demonstrated that the local explanations generated by LIME created better estimates for Decision Trees (DT) classifiers.
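    The HCUP NRD data are licensed and cannot be reproduced here, so the following sketch substitutes a synthetic binary classification set; it only illustrates the reported pipeline of training several scikit-learn classifiers and recording LIME's local fit, exposed by the lime package as the R² score of its local surrogate model.

```python
# Sketch of the reported workflow on synthetic data (the HCUP NRD data are
# licensed and not reproduced here): train several classifiers, explain an
# individual prediction with LIME, and record the local fit R^2.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(max_depth=5),
    "RF": RandomForestClassifier(n_estimators=200),
    "GBM": GradientBoostingClassifier(),
}

explainer = LimeTabularExplainer(
    X_train, mode="classification",
    feature_names=[f"x{i}" for i in range(X.shape[1])],
    class_names=["no_readmit", "readmit"],
)

for name, model in models.items():
    model.fit(X_train, y_train)
    exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=10)
    # exp.score is the R^2 of LIME's weighted local surrogate model,
    # i.e. the "local fit" measure referred to in the abstract.
    print(name, "local fit R^2:", round(exp.score, 3))
```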

    Ad-Hoc Explanation for Time Series Classification

    Get PDF
    In this work, a perturbation-based, model-agnostic explanation method for time series classification is presented. One of the main novelties of the proposed method is that the considered perturbations are interpretable and specific to time series. In real-world time series, variations in the speed or the scale of a particular action, for instance, may determine the class, so modifying this type of characteristic leads to ad-hoc explanations for time series. To this end, four perturbations or transformations are proposed: warp, scale, noise, and slice. Given a transformation, an interval of a series is considered relevant for a classifier's prediction if applying the transformation to this interval changes the prediction. Another novelty is that the method provides a two-level explanation: a high-level explanation, where the robustness of the prediction with respect to a particular transformation is measured, and a low-level explanation, where the relevance of each region of the time series to the prediction is visualized. In order to analyze and validate our proposal, first some illustrative examples are provided, and then a thorough quantitative evaluation is carried out using a specifically designed evaluation procedure.
    PID2019-104966GB-I00 3KIA-KK2020/004
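    The paper defines its own warp, scale, noise, and slice operators; the sketch below uses simplified stand-ins for some of them to illustrate only the low-level idea: perturb one interval at a time and mark it relevant for a given transformation when the perturbed series changes the classifier's prediction. The toy classifier and interval width are assumptions.

```python
# Illustrative sketch of interval-wise perturbation for a time-series
# classifier: an interval counts as relevant for a transformation when
# applying that transformation to it changes the predicted class. The
# transforms are simplified stand-ins for the paper's operators.
import numpy as np

def scale(x, lo, hi, factor=1.5):
    y = x.copy()
    y[lo:hi] *= factor
    return y

def noise(x, lo, hi, sigma=0.3):
    y = x.copy()
    y[lo:hi] += np.random.default_rng(0).normal(0, sigma, hi - lo)
    return y

def slice_out(x, lo, hi):
    # Remove the interval and bridge the gap by linear interpolation.
    y = x.copy()
    y[lo:hi] = np.linspace(x[lo], x[hi - 1], hi - lo)
    return y

def relevance_map(x, predict, transforms, width=20):
    base = predict(x)
    relevant = {name: [] for name in transforms}
    for lo in range(0, len(x) - width + 1, width):
        hi = lo + width
        for name, transform in transforms.items():
            if predict(transform(x, lo, hi)) != base:  # prediction flipped
                relevant[name].append((lo, hi))
    return relevant

# Toy series and classifier: the class depends on the peak amplitude.
x = np.sin(np.linspace(0, 6 * np.pi, 200))
predict = lambda s: int(np.abs(s).max() > 1.2)
print(relevance_map(x, predict, {"scale": scale, "noise": noise, "slice": slice_out}))
```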

    Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations

    Get PDF
    Machine learning is currently undergoing an explosion in capability, popularity, and sophistication. However, one of the major barriers to widespread acceptance of machine learning (ML) is trustworthiness: most ML models operate as black boxes, their inner workings opaque and mysterious, and it can be difficult to trust their conclusions without understanding how those conclusions are reached. Explainability is therefore a key aspect of improving trustworthiness: the ability to better understand, interpret, and anticipate the behaviour of ML models. To this end, we propose SMILE, a new method that builds on previous approaches by making use of statistical distance measures to improve explainability while remaining applicable to a wide range of input data domains.
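    The abstract does not spell out which statistical distances are used, so the following is a hypothetical sketch of one possible reading: a LIME-style local surrogate whose sample weights come from a statistical distance (Wasserstein here) between each perturbed sample and the instance being explained, instead of a cosine or Euclidean kernel. It is not presented as SMILE's actual algorithm.

```python
# Hypothetical sketch of a LIME-style local surrogate whose locality weights
# come from a statistical distance (Wasserstein) between each perturbed
# sample and the instance being explained. One possible reading of the
# abstract, not SMILE's published algorithm.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.linear_model import Ridge

def explain(instance, predict, n_samples=500, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    samples = instance + rng.normal(0, sigma, size=(n_samples, len(instance)))
    preds = predict(samples)
    # Statistical distance between each perturbed sample and the instance,
    # treating the feature vectors as empirical distributions.
    dists = np.array([wasserstein_distance(s, instance) for s in samples])
    weights = np.exp(-(dists ** 2) / (2 * sigma ** 2))  # kernel over the distance
    surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
    return surrogate.coef_  # local feature attributions

# Toy black box whose output depends mostly on the first two features.
predict = lambda X: 1.0 / (1.0 + np.exp(-(2 * X[:, 0] - X[:, 1])))
print(explain(np.zeros(5), predict).round(3))
```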

    ExplainabilityAudit: An Automated Evaluation of Local Explainability in Rooftop Image Classification

    Get PDF
    Explainable Artificial Intelligence (XAI) is a key concept in building trustworthy machine learning models. Local explainability methods seek to provide explanations for individual predictions. Usually, humans must check these explanations manually; when large numbers of predictions are being made, this approach does not scale. We address this deficiency specifically for a rooftop classification problem with ExplainabilityAudit, a method that automatically evaluates the explanations generated by a local explainability toolkit and identifies rooftop images that require further auditing by a human expert. The proposed method uses the explanations generated by the Local Interpretable Model-Agnostic Explanations (LIME) framework, namely the most important superpixels of each validation rooftop image for its prediction. A bag of image patches is then extracted from these superpixels to determine their texture and evaluate the local explanations. Our results show that 95.7% of the cases requiring an audit are detected by the proposed system.
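    The auditing criterion is described only at a high level, so the sketch below is an illustrative pipeline under stated assumptions: take the top-weighted LIME superpixels, cut patches out of them, score the patches with a simple texture statistic (per-patch intensity standard deviation), and flag the image for human review when the explanation covers nearly textureless regions. The texture measure and threshold are placeholders, not the paper's.

```python
# Illustrative sketch of auditing a LIME image explanation: extract the
# top-weighted superpixels, sample patches from them, score the patches with
# a simple texture statistic, and flag low-texture explanations for human
# review. The texture measure and threshold are placeholder assumptions.
import numpy as np

def top_segment_mask(segments, seg_weights, k=5):
    # seg_weights: (segment id, LIME weight) pairs, as produced by
    # lime_image's explanation.local_exp[label]; segments: superpixel label map.
    top = [s for s, _ in sorted(seg_weights, key=lambda sw: -sw[1])[:k]]
    return np.isin(segments, top)

def patch_texture(gray, mask, patch=16):
    # Mean per-patch intensity std over patches inside the explanation mask.
    scores = []
    for i in range(0, gray.shape[0] - patch + 1, patch):
        for j in range(0, gray.shape[1] - patch + 1, patch):
            if mask[i:i + patch, j:j + patch].mean() > 0.5:
                scores.append(gray[i:i + patch, j:j + patch].std())
    return float(np.mean(scores)) if scores else 0.0

def needs_audit(gray, segments, seg_weights, min_texture=10.0):
    # Flag the image for human review when the explanation's patches are
    # nearly textureless (unlikely to correspond to a real rooftop).
    return patch_texture(gray, top_segment_mask(segments, seg_weights)) < min_texture

# Toy example: a flat image whose "explained" superpixel has no texture.
gray = np.full((64, 64), 128.0)
segments = np.zeros((64, 64), dtype=int)
segments[:, 32:] = 1
print(needs_audit(gray, segments, [(1, 0.8), (0, 0.1)]))  # True -> audit
```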