A Robust Interpretable Deep Learning Classifier for Heart Anomaly Detection Without Segmentation
Traditionally, abnormal heart sound classification is framed as a three-stage
process. The first stage involves segmenting the phonocardiogram to detect
fundamental heart sounds, after which features are extracted and classification
is performed. Some researchers argue that segmentation is an unwanted
computational burden, whereas others embrace it as a prerequisite to feature
extraction. Comparing the accuracies reported by studies that segment heart
sounds before analysis with those that skip that step does not settle the
matter, so the question of whether to segment heart sounds before feature
extraction remains open. In this study, we explicitly examine the importance of heart
sound segmentation as a prior step for heart sound classification, and then
seek to apply the obtained insights to propose a robust classifier for abnormal
heart sound detection. Furthermore, recognizing the pressing need for
explainable Artificial Intelligence (AI) models in the medical domain, we also
unveil hidden representations learned by the classifier using model
interpretation techniques. Experimental results demonstrate that segmentation
plays an essential role in abnormal heart sound classification.
Our new classifier is also shown to be robust, stable, and, most importantly,
explainable, with an accuracy of almost 100% on the widely used PhysioNet
dataset.
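
As a rough illustration of the three-stage pipeline this abstract describes, here is a minimal Python sketch. The band-pass filter, energy-based segmenter, and per-segment statistics are hypothetical placeholders, not the authors' method:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(pcg, fs, lo=25.0, hi=400.0):
    """Band-limit the phonocardiogram to the usual heart-sound range."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, pcg)

def segment_pcg(pcg, fs, win_s=0.05):
    """Stage 1 (hypothetical): crude energy-based segmentation into
    candidate fundamental heart sounds. Published systems typically
    use HMM/HSMM-based segmenters instead."""
    win = max(1, int(win_s * fs))
    energy = np.convolve(pcg ** 2, np.ones(win) / win, mode="same")
    mask = energy > energy.mean() + energy.std()
    edges = np.flatnonzero(np.diff(mask.astype(int))) + 1
    if mask[0]:
        edges = np.r_[0, edges]
    if mask[-1]:
        edges = np.r_[edges, len(pcg)]
    return [pcg[s:e] for s, e in zip(edges[::2], edges[1::2])]

def extract_features(segments):
    """Stage 2 (hypothetical): simple per-segment statistics."""
    return np.array([[s.mean(), s.std(), len(s)] for s in segments])

# Stage 3: any classifier over the features; the paper's contribution is
# a deep, interpretable classifier at this stage, omitted here for brevity.
```

The sketch only fixes terminology: the debate the abstract refers to is whether stage 1 can be dropped and stages 2 and 3 run on the raw recording.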
Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Understanding the flow of information in Deep Neural Networks (DNNs) is a
challenging problem that has gained increasing attention over the last few years.
While several methods have been proposed to explain network predictions, there
have been only a few attempts to compare them from a theoretical perspective.
What is more, no exhaustive empirical comparison has been performed in the
past. In this work, we analyze four gradient-based attribution methods and
formally prove conditions of equivalence and approximation between them. By
reformulating two of these methods, we construct a unified framework which
enables a direct comparison, as well as an easier implementation. Finally, we
propose a novel evaluation metric, called Sensitivity-n, and test the
gradient-based attribution methods, alongside a simple perturbation-based
attribution method, on several datasets in the domains of image and text
classification, using various network architectures.

Comment: ICLR 2018
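
To make the methods under comparison concrete, here is a hedged PyTorch sketch of Gradient * Input (one of the gradient-based attribution methods analyzed) and of the Sensitivity-n idea. `model`, `x`, and `target` are illustrative placeholders assuming a classifier with output shape (batch, classes); this is not the paper's implementation:

```python
import torch
import torch.nn as nn

def gradient_times_input(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
    """Gradient * Input: elementwise product of the input with the
    gradient of the target logit with respect to that input."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x.unsqueeze(0))[0, target]   # scalar output to explain
    grad, = torch.autograd.grad(score, x)      # d(score) / d(x)
    return x.detach() * grad

def sensitivity_n(model, x, attributions, target, n, baseline=0.0, trials=100):
    """Sensitivity-n, sketched: correlate the summed attributions of a
    random subset of n features with the change in the target output
    when exactly those features are replaced by a baseline value."""
    flat_x, flat_a = x.flatten(), attributions.flatten()
    with torch.no_grad():
        y_full = model(x.unsqueeze(0))[0, target].item()
    sums, deltas = [], []
    for _ in range(trials):
        idx = torch.randperm(flat_x.numel())[:n]  # random feature subset
        x_pert = flat_x.clone()
        x_pert[idx] = baseline
        with torch.no_grad():
            y = model(x_pert.view_as(x).unsqueeze(0))[0, target].item()
        sums.append(flat_a[idx].sum().item())
        deltas.append(y_full - y)
    return torch.corrcoef(torch.tensor([sums, deltas]))[0, 1].item()
```

A high correlation across many values of n indicates that the attribution method tracks the model's actual sensitivity to its inputs.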
A study on the Interpretability of Neural Retrieval Models using DeepSHAP
A recent trend in IR has been the use of neural networks to learn retrieval
models for text-based ad hoc search. While various approaches and architectures
have yielded significantly better performance than traditional retrieval models
such as BM25, it is still difficult to understand exactly why a document is
relevant to a query. In the ML community, several approaches for explaining
decisions made by deep neural networks have been proposed, including DeepSHAP,
which modifies the DeepLIFT algorithm to estimate the relative importance
(Shapley values) of input features for a given decision by comparing the
activations in the network for a given input against the activations caused by
a reference input. In image classification, the reference input tends to be a
plain black image. While DeepSHAP has been well studied for image
classification tasks, it remains to be seen how we can adapt it to explain the
output of Neural Retrieval Models (NRMs). In particular, what is a good "black"
image in the context of IR? In this paper we explored various reference input
document construction techniques. Additionally, we compared the explanations
generated by DeepSHAP to LIME (a model agnostic approach) and found that the
explanations differ considerably. Our study raises concerns regarding the
robustness and accuracy of explanations produced for NRMs. With this paper we
aim to shed light on interesting problems surrounding interpretability in NRMs
and highlight areas of future work.

Comment: 4 pages; SIGIR 2019 Short Paper
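
The abstract's central question, what plays the role of the "plain black image" for a neural retrieval model, can be sketched with the shap library's DeepExplainer, which implements DeepSHAP. Everything below is illustrative: `nrm` is a toy scoring network, and the three candidate reference inputs are plausible choices in the spirit of the paper, not its actual construction techniques:

```python
import torch
import torch.nn as nn
import shap  # the SHAP library; DeepExplainer implements DeepSHAP

# Toy stand-in for a neural retrieval model: scores a document's
# term-embedding matrix for relevance. Illustrative only.
doc_len, emb_dim = 128, 50
nrm = nn.Sequential(nn.Flatten(), nn.Linear(doc_len * emb_dim, 64),
                    nn.ReLU(), nn.Linear(64, 1))

# Pretend corpus of document representations, shape (docs, len, dim).
doc_embs = torch.randn(64, doc_len, emb_dim)
to_explain = doc_embs[:1]  # the document whose score we want to explain

# Candidate reference inputs, the IR analogue of the "black image".
references = {
    "zeros": torch.zeros_like(to_explain),              # all-zero embeddings
    "corpus_mean": doc_embs.mean(dim=0, keepdim=True),  # average document
    "sampled_docs": doc_embs[torch.randperm(64)[:16]],  # sampled background
}

for name, ref in references.items():
    explainer = shap.DeepExplainer(nrm, ref)  # DeepSHAP w.r.t. this reference
    sv = explainer.shap_values(to_explain)    # per-feature Shapley estimates
    print(name, sv[0].shape if isinstance(sv, list) else sv.shape)
```

Comparing the attributions produced under each reference makes the paper's concern visible: the explanation for the same query-document pair can change considerably with the choice of background.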