
    Explaining Explainable Artificial Intelligence: An integrative model of objective and subjective influences on XAI

    Explainable artificial intelligence (XAI) is a young field within artificial intelligence (AI) and machine learning (ML). XAI offers a transparency that can bridge the information gap left by "black-box" ML models. Given the field's nascency, several competing taxonomies of XAI exist in the literature. The current paper incorporates these taxonomies into one unifying framework, which defines the types of explanations, types of transparency, and model methods that together inform the user's process of developing trust in AI and ML systems.

    Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective

    Given the complexity and lack of transparency of deep neural networks (DNNs), extensive efforts have been made to make these systems more interpretable or to explain their behaviors in accessible terms. Unlike most reviews, which focus on algorithmic and model-centric perspectives, this work takes a "data-centric" view, examining how data collection, processing, and analysis contribute to explainable AI (XAI). We categorize existing work according to its purpose into three classes: interpretations of deep models, referring to feature attributions and reasoning processes that correlate data points with model outputs; influences of training data, examining the impact of training-data nuances, such as data valuation and sample anomalies, on decision-making processes; and insights from domain knowledge, discovering latent patterns and fostering new knowledge from data and models to advance social values and scientific discovery. Specifically, we distill XAI methodologies into data mining operations on training and testing data across modalities, such as images, text, and tabular data, as well as on training logs, checkpoints, models, and other DNN behavior descriptors. In this way, our study offers a comprehensive, data-centric examination of XAI through the lens of data mining methods and applications.
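
    As an illustration of the "influences of training data" category, the sketch below scores each training point by the drop in validation accuracy when that point is removed (leave-one-out data valuation). This is not code from the survey; the synthetic dataset, logistic-regression model, and accuracy metric are all illustrative assumptions.

        # Leave-one-out data valuation: a point's value is the change in
        # validation accuracy caused by removing it from the training set.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=200, n_features=10, random_state=0)
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

        def val_accuracy(X_train, y_train):
            clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
            return clf.score(X_va, y_va)

        base = val_accuracy(X_tr, y_tr)
        values = np.empty(len(X_tr))
        for i in range(len(X_tr)):
            keep = np.arange(len(X_tr)) != i      # drop the i-th training point
            values[i] = base - val_accuracy(X_tr[keep], y_tr[keep])

        # Large positive values mark the most beneficial training points;
        # strongly negative values often flag mislabeled or anomalous samples.
        print(np.argsort(values)[::-1][:5])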

    Explainable Artificial Intelligence (XAI) towards Model Personality in NLP task

    In recent years, deep learning for natural language processing, and for sentiment analysis in particular, has achieved significant progress and success, owing to the availability of large amounts of text data and the ability of deep learning techniques to produce sophisticated predictions from diverse data features. However, sophisticated predictions that are not accompanied by sufficient information about what is happening inside the model are a major drawback. The development of deep learning models must therefore be accompanied by the development of XAI methods, which help explain what drives a model to its predictions. The present research proposes a simple Bidirectional LSTM and a more complex Bi-GRU-LSTM-CNN model for sentiment analysis. Three XAI methods (LIME, SHAP, and Anchor) were then applied to and compared across the two proposed models, demonstrating that XAI is not limited to giving information about what happens inside a model but can also help us understand and distinguish models' personality and behaviour.
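
    A minimal sketch of how one of these methods, LIME, is typically applied to a text classifier. A TF-IDF plus logistic-regression pipeline stands in for the paper's Bi-LSTM models, and the toy training sentences are invented placeholders, so this illustrates the general technique rather than the paper's actual setup.

        # LIME explains one prediction by perturbing the input text and fitting
        # a local linear surrogate; its weights rank words by their influence.
        from lime.lime_text import LimeTextExplainer
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts = ["the film was wonderful and moving",
                 "a dull, tedious waste of two hours",
                 "great acting and a touching story",
                 "boring plot and terrible pacing"]
        labels = [1, 0, 1, 0]                    # 1 = positive, 0 = negative

        clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
        clf.fit(texts, labels)

        explainer = LimeTextExplainer(class_names=["negative", "positive"])
        exp = explainer.explain_instance(
            "a wonderful story with terrible pacing",
            clf.predict_proba,                   # black-box probability function
            num_features=4)
        print(exp.as_list())                     # (word, weight) pairs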

    A roadmap towards breast cancer therapies supported by explainable artificial intelligence

    In recent years, personalized medicine has gained increasing importance, especially in the design of oncological therapies, and the development of patient-profiling strategies promises substantial rewards. In this work, we present an explainable artificial intelligence (XAI) framework based on an adaptive dimensional reduction which (i) outlines the most important clinical features for profiling oncological patients and (ii), based on these features, determines a patient's profile, i.e., the cluster the patient belongs to. For these purposes, we collected a cohort of 267 breast cancer patients. The adopted dimensional reduction method determines the relevant subspace in which distances among patients are used by a hierarchical clustering procedure to identify the optimal categories. Our results demonstrate that the molecular subtype is the most important feature for clustering. We then assessed the robustness of current therapies and guidelines: our findings show a striking correspondence between patient profiles determined in an unsupervised way and either molecular subtypes or guideline-recommended therapies, which guarantees the interpretability characterizing explainable approaches to machine learning. Accordingly, our work suggests the possibility of designing data-driven therapies that emphasize the differences observed among patients.
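
    The profiling pipeline described here, dimensional reduction followed by hierarchical clustering in the reduced subspace, can be sketched as follows. PCA stands in for the paper's adaptive dimensional reduction, and synthetic data replaces the 267-patient cohort, so this shows the shape of the pipeline rather than the paper's actual method.

        # Reduce patient features to a low-dimensional subspace, then cluster
        # hierarchically; the cluster labels play the role of patient profiles.
        import numpy as np
        from sklearn.cluster import AgglomerativeClustering
        from sklearn.datasets import make_blobs
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for the clinical feature matrix (patients x features).
        X, _ = make_blobs(n_samples=267, n_features=20, centers=4, random_state=0)

        Z = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

        # Ward linkage on Euclidean distances in the reduced subspace.
        profiles = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(Z)
        print(np.bincount(profiles))             # patients per profile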