Looking Deeper into Deep Learning Model: Attribution-based Explanations of TextCNN
Layer-wise Relevance Propagation (LRP) and saliency maps have been recently
used to explain the predictions of Deep Learning models, specifically in the
domain of text classification. Given different attribution-based explanations
that highlight relevant words for a predicted class label, word-deletion
perturbation experiments are a common evaluation method. This word-removal
approach, however, disregards any linguistic dependencies that may exist
between words or phrases in a sentence, which could semantically guide a
classifier to a particular prediction. In this paper, we present a
feature-based evaluation framework for comparing the two attribution methods on
customer reviews (public data sets) and Customer Due Diligence (CDD) extracted
reports (corporate data set). Instead of removing words based on their
relevance scores, we investigate perturbations based on the removal of embedded
features from intermediate layers of Convolutional Neural Networks. Our experimental study is
carried out on embedded-word, embedded-document, and embedded-ngrams
explanations. Using the proposed framework, we provide a visualization tool to
assist analysts in reasoning toward the model's final prediction.

Comment: NIPS 2018 Workshop on Challenges and Opportunities for AI in
Financial Services: the Impact of Fairness, Explainability, Accuracy, and
Privacy, Montréal, Canada
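The word-deletion evaluation that the abstract critiques can be sketched as follows. The toy classifier, cue words, and attribution scores below are all hypothetical stand-ins; a real study would use a trained TextCNN and LRP or saliency scores instead.

```python
# Sketch of word-deletion perturbation evaluation for attribution methods.
# All names and scores are hypothetical, not the paper's actual setup.

def toy_classifier(tokens):
    """Toy sentiment score: fraction of tokens that are 'positive' cue words."""
    cues = {"great", "good", "excellent"}
    return sum(t in cues for t in tokens) / max(len(tokens), 1)

def delete_top_k(tokens, relevance, k):
    """Remove the k tokens with the highest attribution scores."""
    ranked = sorted(range(len(tokens)), key=lambda i: relevance[i], reverse=True)
    drop = set(ranked[:k])
    return [t for i, t in enumerate(tokens) if i not in drop]

def score_drop(tokens, relevance, k):
    """Confidence drop after deleting the k most relevant words.
    A larger drop suggests the attribution found genuinely important words."""
    return toy_classifier(tokens) - toy_classifier(delete_top_k(tokens, relevance, k))

sentence = "the food was great and the service excellent".split()
relevance = [0.0, 0.1, 0.0, 0.9, 0.0, 0.0, 0.2, 0.8]  # hypothetical attributions
print(score_drop(sentence, relevance, 2))  # → 0.25
```

Note that this is exactly the evaluation the paper argues against: deleting "great" and "excellent" ignores any linguistic dependency between them and the rest of the sentence, which motivates the feature-based perturbations in intermediate layers instead.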
Active Contour Models for Manifold Valued Image Segmentation
Image segmentation is the process of partitioning an image into regions or
groups based on characteristics such as color, texture, motion, or shape.
Active contours is a popular variational method for object
segmentation in images, in which the user initializes a contour which evolves
in order to optimize an objective function designed such that the desired
object boundary is the optimal solution. Recently, imaging modalities that
produce manifold-valued images have emerged, for example DT-MRI images and
vector fields. The traditional active contour model does not work on such images. In
this paper, we generalize the active contour model to work on Manifold valued
images. As expected, our algorithm detects regions with similar Manifold values
in the image. Our algorithm also produces expected results on usual gray-scale
images, since these are nothing but trivial examples of Manifold valued images.
As another application of our general active contour model, we perform texture
segmentation on gray-scale images by first creating an appropriate Manifold
valued image. We demonstrate segmentation results for manifold-valued images
and texture images.
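The generalization the abstract describes can be illustrated with a minimal piecewise-constant (Chan-Vese-style) region-assignment step on a manifold-valued image. In this sketch, pixels are unit vectors on the sphere S², the geodesic distance is the arccosine of the dot product, and region means are crude extrinsic approximations (normalized Euclidean averages). These are illustrative simplifications, not the paper's algorithm.

```python
import numpy as np

# Illustrative sketch: one region-assignment step of a piecewise-constant
# active-contour model on a manifold-valued image. Pixels live on the unit
# sphere S^2 with geodesic distance d(p, q) = arccos(<p, q>). The region
# "means" are extrinsic approximations (normalized Euclidean averages),
# a simplification of a true Frechet mean on the manifold.

def geodesic_dist(p, q):
    """Geodesic distance between two unit vectors on the sphere."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def extrinsic_mean(vectors):
    """Extrinsic approximation of the mean: average, then reproject."""
    m = np.mean(vectors, axis=0)
    return m / np.linalg.norm(m)

def assign_regions(image, mean_in, mean_out):
    """Label each pixel 1/0 by its nearer region mean (geodesic distance)."""
    h, w, _ = image.shape
    labels = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            p = image[i, j]
            labels[i, j] = int(geodesic_dist(p, mean_in) < geodesic_dist(p, mean_out))
    return labels

# Toy manifold-valued image: left half points along +x, right half along +z.
img = np.zeros((4, 4, 3))
img[:, :2] = [1.0, 0.0, 0.0]
img[:, 2:] = [0.0, 0.0, 1.0]
labels = assign_regions(img, mean_in=np.array([1.0, 0.0, 0.0]),
                        mean_out=np.array([0.0, 0.0, 1.0]))
print(labels)
```

A full active-contour model would alternate this assignment with a curvature-regularized contour evolution; the point here is only that replacing the Euclidean distance with a geodesic one lets the same piecewise-constant energy operate on manifold-valued pixels.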
On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors
Deep learning based medical image classifiers have shown remarkable prowess
in various application areas like ophthalmology, dermatology, pathology, and
radiology. However, the acceptance of these Computer-Aided Diagnosis (CAD)
systems in real clinical setups is severely limited primarily because their
decision-making process remains largely obscure. This work aims at elucidating
a deep learning based medical image classifier by verifying that the model
learns and utilizes similar disease-related concepts as described and employed
by dermatologists. We used a well-trained and high performing neural network
developed by REasoning for COmplex Data (RECOD) Lab for classification of three
skin tumours, i.e. Melanocytic Naevi, Melanoma, and Seborrheic Keratosis, and
performed a detailed analysis of its latent space. Two well-established and
publicly available skin disease datasets, PH2 and derm7pt, are used for
experimentation. Human understandable concepts are mapped to RECOD image
classification model with the help of Concept Activation Vectors (CAVs),
introducing a novel training and significance testing paradigm for CAVs. Our
results on an independent evaluation set clearly show that the classifier
learns and encodes human understandable concepts in its latent representation.
Additionally, TCAV scores (Testing with CAVs) suggest that the neural network
indeed makes use of disease-related concepts in the correct way when making
predictions. We anticipate that this work can not only increase the confidence
of medical practitioners in CAD but also serve as a stepping stone for further
development of CAV-based neural network interpretation methods.

Comment: Accepted for the IEEE International Joint Conference on Neural
Networks (IJCNN) 202
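The TCAV scoring the abstract builds on can be sketched as follows: a CAV points from "random" activations toward "concept" activations in a layer's latent space, and the TCAV score is the fraction of class inputs whose logit increases along that direction. Everything below is an illustrative assumption — the CAV is taken as a normalized difference of means (a simplification of fitting a linear classifier, as the original method does), and the "network head" is a toy differentiable function rather than the RECOD model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical penultimate-layer activations (4-D) for concept examples
# (e.g. patches showing a disease-related concept) and random counterexamples.
concept_acts = rng.normal(loc=[2, 0, 0, 0], scale=0.3, size=(50, 4))
random_acts = rng.normal(loc=[0, 0, 0, 0], scale=0.3, size=(50, 4))

def concept_activation_vector(concept, random):
    """CAV as the normalized difference of activation means -- a
    simplification of training a linear classifier between the two sets."""
    v = concept.mean(axis=0) - random.mean(axis=0)
    return v / np.linalg.norm(v)

def tcav_score(class_acts, cav, grad_fn):
    """Fraction of class inputs whose logit increases along the CAV,
    i.e. whose directional derivative grad(logit) . cav is positive."""
    derivs = np.array([grad_fn(a) @ cav for a in class_acts])
    return float((derivs > 0).mean())

cav = concept_activation_vector(concept_acts, random_acts)

# Toy differentiable class head: logit(a) = w.a + 0.5*||a||^2, so grad = w + a.
w = np.array([1.0, -0.5, 0.2, 0.0])
grad_fn = lambda a: w + a

class_inputs = rng.normal(loc=[1, 0, 0, 0], scale=0.5, size=(100, 4))
print(tcav_score(class_inputs, cav, grad_fn))
```

A score near 1 would indicate the (toy) class consistently relies on the concept direction; the paper's contribution is a training and significance-testing paradigm around CAVs of this kind, applied to the three skin-tumour classes.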