
    Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective

    Given the complexity and lack of transparency in deep neural networks (DNNs), extensive efforts have been made to make these systems more interpretable or to explain their behaviors in accessible terms. Unlike most reviews, which focus on algorithmic and model-centric perspectives, this work takes a "data-centric" view, examining how data collection, processing, and analysis contribute to explainable AI (XAI). We group existing work into three categories according to its purpose: interpretations of deep models, referring to feature attributions and reasoning processes that correlate data points with model outputs; influences of training data, examining the impact of training data nuances, such as data valuation and sample anomalies, on decision-making processes; and insights from domain knowledge, discovering latent patterns and fostering new knowledge from data and models to advance social values and scientific discovery. Specifically, we distill XAI methodologies into data mining operations on training and testing data across modalities, such as images, text, and tabular data, as well as on training logs, checkpoints, models, and other DNN behavior descriptors. In this way, our study offers a comprehensive, data-centric examination of XAI through the lens of data mining methods and applications.

    The Explanation Dialogues: Understanding How Legal Experts Reason About XAI Methods

    The Explanation Dialogues project is an expert focus study that aims to uncover the expectations, reasoning, and rules of legal experts and practitioners towards explainable artificial intelligence (XAI). We examine legal perceptions and disputes that arise in a fictional scenario resembling a daily-life situation: a bank’s use of an automated decision-making (ADM) system to decide on credit allocation to individuals. Through this simulation, the study aims to provide insights into the legal value and validity of explanations of ADMs, identify potential gaps and issues that may arise in the context of compliance with European legislation, and provide guidance on how to address these shortcomings.

    Unified Explanations in Machine Learning Models: A Perturbation Approach

    A high-velocity paradigm shift towards Explainable Artificial Intelligence (XAI) has emerged in recent years. Highly complex Machine Learning (ML) models have flourished in many tasks of intelligence, and the questions have started to shift away from traditional metrics of validity towards something deeper: what is this model telling me about my data, and how is it arriving at these conclusions? Previous work has uncovered predictive models whose explanations contradict domain experts, or that excessively exploit biases in the data, rendering a model useless in highly regulated settings. These inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches. To address these problems, we propose a systematic, perturbation-based analysis of a popular, model-agnostic XAI method, SHapley Additive exPlanations (SHAP). We devise algorithms to generate relative feature importance under dynamic inference across a suite of popular machine learning and deep learning methods, together with metrics that quantify how well explanations generated in the static case hold up. We propose a taxonomy for feature importance methodology, measure alignment, and observe quantifiable similarity amongst explanation models across several datasets.
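
    As a rough illustration of such a perturbation-based stability check (not the paper's actual algorithms), the sketch below trains a model, computes SHAP-based global feature importances on the original data and on a slightly perturbed copy, and compares the two rankings; the dataset, model, and noise scale are assumptions chosen for brevity.

    # Minimal sketch: stability of SHAP feature importances under small input
    # perturbations. Dataset, model, and noise scale are illustrative choices.
    import numpy as np
    import shap
    from scipy.stats import spearmanr
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    explainer = shap.TreeExplainer(model)

    def global_importance(data):
        # Mean absolute SHAP value per feature gives a global importance ranking.
        return np.abs(explainer.shap_values(data)).mean(axis=0)

    base_imp = global_importance(X)

    rng = np.random.default_rng(0)
    noisy = X + rng.normal(scale=0.05 * X.std(axis=0), size=X.shape)
    perturbed_imp = global_importance(noisy)

    rho, _ = spearmanr(base_imp, perturbed_imp)
    print(f"Rank correlation of importances (static vs. perturbed): {rho:.3f}")

    A rank correlation close to 1 would indicate that the static explanations hold up under perturbation; lower values flag the kind of inconsistency the paper sets out to quantify.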

    Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence

    Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% among the human graders themselves. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
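
    A toy sketch of the grader-versus-algorithm comparison described above (with random masks standing in for real OCT segmentations, and all names and shapes assumed) might compute pairwise Hamming distances as follows.

    # Minimal sketch: pairwise disagreement (Hamming distance) between label
    # masks from several human graders and a model. Masks here are random
    # placeholders, not real OCT segmentations.
    import numpy as np

    def hamming_distance(mask_a, mask_b):
        """Fraction of pixels on which two label masks disagree."""
        return float(np.mean(mask_a != mask_b))

    rng = np.random.default_rng(42)
    shape = (64, 64)  # toy segmentation size
    masks = {f"grader_{i}": rng.integers(0, 3, size=shape) for i in range(1, 4)}
    masks["model"] = rng.integers(0, 3, size=shape)

    names = list(masks)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            d = hamming_distance(masks[a], masks[b])
            print(f"{a} vs {b}: {100 * d:.2f}% disagreement")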

    Interpretable Narrative Explanation for ML Predictors with LP: A Case Study for XAI

    In the era of digital revolution, individual lives are going to cross and interconnect ubiquitous online domains and offline reality based on smart technologies: discovering, storing, processing, learning, analysing, and predicting from huge amounts of environment-collected data. Sub-symbolic techniques, such as deep learning, play a key role there, yet they are often built as black boxes that are neither inspectable, interpretable, nor explainable. New research efforts towards explainable artificial intelligence (XAI) are trying to address those issues, with the final purpose of building understandable, accountable, and trustable AI systems; still, seemingly with a long way to go. Generally speaking, while we fully understand and appreciate the power of sub-symbolic approaches, we believe that symbolic approaches to machine intelligence, once properly combined with sub-symbolic ones, have a critical role to play in achieving key properties of XAI such as observability, interpretability, explainability, accountability, and trustability. In this paper we describe an example of the integration of symbolic and sub-symbolic techniques. First, we sketch a general framework where symbolic and sub-symbolic approaches could fruitfully combine to produce intelligent behaviour in AI applications. Then, we focus in particular on the goal of building a narrative explanation for ML predictors: to this end, we exploit the logical knowledge obtained by translating decision tree predictors into logic programs.
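
    To make the decision-tree-to-logic-program step concrete, the sketch below extracts root-to-leaf paths from a scikit-learn decision tree and prints them as Prolog-like clauses; the dataset and the clause syntax are illustrative assumptions, not the paper's own pipeline.

    # Minimal sketch: translating a decision-tree predictor into logic-program-
    # style clauses. Dataset and clause syntax are illustrative assumptions.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    data = load_iris()
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
    tree = clf.tree_

    def clauses(node=0, path=()):
        # Recursively collect root-to-leaf paths as clauses.
        if tree.children_left[node] == -1:  # leaf node
            label = data.target_names[tree.value[node][0].argmax()]
            body = ", ".join(path) if path else "true"
            print(f"class({label}) :- {body}.")
            return
        feat = data.feature_names[tree.feature[node]].replace(" (cm)", "").replace(" ", "_")
        thr = tree.threshold[node]
        clauses(tree.children_left[node], path + (f"{feat} =< {thr:.2f}",))
        clauses(tree.children_right[node], path + (f"{feat} > {thr:.2f}",))

    clauses()

    Each printed clause can then be verbalized into a narrative explanation of the corresponding prediction, which is the role the logical knowledge plays in the framework described above.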

    Using SHAP Values to Validate Model’s Uncertain Decision for ML-based Lightpath Quality-of-Transmission Estimation

    We apply Quantile Regression (QR) for lightpath quality-of-transmission (QoT) estimation with the aim of identifying uncertain decisions, and then exploit Shapley Additive Explanations (SHAP) to quantify the importance of lightpath features by means of SHAP values and to validate the model's decisions in a post-processing phase. Numerical results show that our approach can eliminate more than 98% of false predictions and that SHAP values can validate up to 90% of the model's uncertain decisions.
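
    A simplified sketch of this two-step idea (with synthetic data, assumed quantiles, and an assumed interval-width threshold in place of real lightpath features) could look like the following: quantile models flag predictions with wide intervals as uncertain, and SHAP values are then inspected only for those samples.

    # Minimal sketch: quantile regression flags uncertain predictions (wide
    # 5%-95% interval); SHAP values are computed for those samples only.
    # Data, quantiles, and the width threshold are placeholder assumptions.
    import numpy as np
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor

    X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)

    lo = GradientBoostingRegressor(loss="quantile", alpha=0.05, random_state=0).fit(X, y)
    hi = GradientBoostingRegressor(loss="quantile", alpha=0.95, random_state=0).fit(X, y)
    point = GradientBoostingRegressor(random_state=0).fit(X, y)

    width = hi.predict(X) - lo.predict(X)
    uncertain = width > np.percentile(width, 90)  # widest 10% treated as uncertain

    shap_values = shap.TreeExplainer(point).shap_values(X[uncertain])
    top = np.abs(shap_values).mean(axis=0).argmax()
    print(f"{uncertain.sum()} uncertain samples; most influential feature index: {top}")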

    Metrics of Inequality Are Ancient (Metriken der Ungleichheit sind uralt)

    Sensitivity analysis: A discipline coming of age

    Sensitivity analysis (SA) as a ‘formal’ and ‘standard’ component of scientific development and policy support is relatively young. Many researchers and practitioners from a wide range of disciplines have contributed to SA over the last three decades, and the SAMO (sensitivity analysis of model output) conferences, held since 1995, have been the primary driver in building a community culture within this heterogeneous population. Now, SA is evolving into a mature and independent field of science, indeed a discipline with emerging applications extending well into new areas such as data science and machine learning. At this growth stage, the present editorial leads a special issue consisting of one Position Paper on “The future of sensitivity analysis” and 11 research papers on “Sensitivity analysis for environmental modelling” published in Environmental Modelling & Software in 2020–21.

    How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice

    Explainability is becoming an important requirement for organizations that make use of automated decision-making, due to regulatory initiatives and a shift in public awareness. Various and significantly different algorithmic methods to provide this explainability have been introduced in the field, but the existing literature in the machine learning community has paid little attention to the stakeholder, whose needs are studied instead in the human-computer interaction community. Organizations that want or need to provide this explainability are therefore confronted with selecting an appropriate method for their use case. In this paper, we argue that there is a need for a methodology to bridge the gap between stakeholder needs and explanation methods. We present our ongoing work on creating this methodology to help data scientists in the process of providing explainability to stakeholders. In particular, our contributions include documents used to characterize XAI methods and user requirements (shown in the Appendix), which our methodology builds upon.

    Adaptation of Applications to Compare Development Frameworks in Deep Learning for Decentralized Android Applications

    Not all frameworks used in machine learning and deep learning integrate with Android, which imposes some prerequisites. The primary objective of this paper is to present the results of an analysis and comparison of deep learning development frameworks that can be adapted into fully decentralized Android apps from a cloud server. As a working methodology, we develop and/or modify the test applications that these frameworks offer a priori, in such a way as to allow an equitable comparison of the analysed characteristics of interest. These parameters are related to attributes that a user would consider, such as (1) percentage of success; (2) battery consumption; and (3) processor power consumption. After analysing the numerical results, the framework that behaves best with respect to the analysed characteristics for the development of an Android application is TensorFlow, which obtained the best score against Caffe2 and Snapdragon NPE in percentage of correct answers, battery consumption, and device CPU power consumption. Data consumption was not considered because we focus on decentralized cloud-storage applications in this study.