11 research outputs found

    Towards a Rigorous Evaluation of XAI Methods on Time Series

    Explainable Artificial Intelligence (XAI) methods are typically deployed to explain and debug black-box machine learning models. However, most proposed XAI methods are black boxes themselves and were designed for images, so they rely on visual inspection to evaluate and justify their explanations. In this work, we apply XAI methods previously used in the image and text domains to time series. We present a methodology for testing and evaluating various XAI methods on time series, introducing new verification techniques that incorporate the temporal dimension. We further conduct preliminary experiments to assess the quality of the explanations produced by selected XAI methods, using several verification methods and quality metrics across a range of datasets. Our initial experiments show that SHAP is robust across all models, whereas others such as DeepLIFT, LRP, and Saliency Maps work better with specific architectures. Comment: 5 pages, 1 figure, 1 table, 1 page of references - 2019 ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models
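    The perturbation-style verification described in this abstract can be illustrated with a simple check: occlude the time steps an attribution method ranks highest and compare the resulting confidence drop against occluding randomly chosen time steps. The Python sketch below assumes a generic predict function returning class probabilities, a single-channel series, a zero-valued occlusion baseline, and a precomputed attribution vector; these names and choices are illustrative assumptions, not the paper's exact protocol.

    import numpy as np

    def perturbation_check(predict, x, attributions, target_class, k=10, seed=0):
        """Compare the confidence drop from occluding the top-k attributed
        time steps with the drop from occluding k random time steps.
        x and attributions are 1-D arrays of length n_timesteps."""
        rng = np.random.default_rng(seed)
        base = predict(x[None, :])[0, target_class]

        # Occlude the k time steps with the largest absolute attribution.
        top_idx = np.argsort(np.abs(attributions))[-k:]
        x_top = x.copy()
        x_top[top_idx] = 0.0
        drop_top = base - predict(x_top[None, :])[0, target_class]

        # Occlude k random time steps as a control.
        rand_idx = rng.choice(len(x), size=k, replace=False)
        x_rand = x.copy()
        x_rand[rand_idx] = 0.0
        drop_rand = base - predict(x_rand[None, :])[0, target_class]

        # A faithful attribution should make drop_top clearly exceed drop_rand.
        return drop_top, drop_rand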

    A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?

    Artificial intelligence (AI) models are increasingly finding applications in the field of medicine, and concerns have been raised about the explainability of the decisions these models make. In this article, we give a systematic analysis of explainable artificial intelligence (XAI), with a primary focus on models currently used in healthcare. The literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards for relevant work published from 1 January 2012 to 2 February 2022. The review analyzes the prevailing trends in XAI and lays out the major directions in which research is headed. We investigate the why, how, and when of the use of these XAI models and their implications. We present a comprehensive examination of XAI methodologies as well as an explanation of how trustworthy AI can be derived from describing AI models for healthcare. The discussion in this work will contribute to the formalization of the XAI field. Comment: 15 pages, 3 figures, accepted for publication in IEEE Transactions on Artificial Intelligence

    The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies

    Artificial intelligence (AI) has huge potential to improve the health and well-being of people, but adoption in clinical practice is still limited. Lack of transparency is identified as one of the main barriers to implementation, as clinicians should be confident that the AI system can be trusted. Explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI. In this paper we review the recent literature to provide guidance to researchers […]

    Evaluation of LSTM Explanations in Sentiment Classification Task

    Deep learning techniques produce impressive performance on many natural language processing tasks, yet it is still difficult to understand what a neural network has learned during training and prediction. Explainable Artificial Intelligence (XAI) has recently become a popular approach to interpreting deep neural networks. In this work, we extend the existing Layer-wise Relevance Propagation (LRP) framework and propose novel strategies for passing relevance through the weighted linear and multiplicative connections in an LSTM. We then evaluate these explanation methods on a bidirectional LSTM classifier through four word-level experiments: sentiment decomposition, collection of top representative words, word perturbation, and a case study. The results indicate that the epsilon-LRP-all method outperforms the other methods on this task, owing to its ability to generate reasonable word-level relevance, collect reliable sentiment words, and detect negation patterns in text data. Our work provides insight into explaining recurrent neural networks and adapting explanation methods to various applications.
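    The epsilon-LRP rule mentioned above redistributes a layer's output relevance to its inputs in proportion to each input's contribution to the pre-activation, with a small epsilon term stabilizing the denominator. The sketch below shows the rule for a single weighted linear connection only; the function name is hypothetical, and the handling of the LSTM's multiplicative gate connections (the "-all" part of epsilon-LRP-all) is not reproduced here, since the abstract does not spell it out.

    import numpy as np

    def lrp_epsilon_linear(a, W, b, R_out, eps=1e-3):
        """Epsilon-LRP for a linear connection z = a @ W + b.

        a     : (n_in,)       input activations
        W     : (n_in, n_out) weight matrix
        b     : (n_out,)      bias
        R_out : (n_out,)      relevance arriving at the outputs
        Returns the relevance redistributed onto the inputs, shape (n_in,).
        """
        z = a @ W + b                                  # pre-activations
        denom = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabilizer
        s = R_out / denom                              # relevance per unit of pre-activation
        return a * (W @ s)                             # each input takes its weighted share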