Overview of Class Activation Maps for Visualization Explainability
Recent research in deep learning has produced a variety of complex
modelling techniques in computer vision (CV) that match or even surpass
human performance. Although these black-box deep learning models achieve
astounding results, they offer limited interpretability and transparency,
both of which are critical for deploying learning machines in sensitive
decision-support systems that involve human supervision. Hence, the
development of explainable techniques for computer vision (XCV) has recently
attracted increasing attention. In the realm of XCV, Class Activation Maps
(CAMs) have become widely recognized and utilized for enhancing
interpretability and insights into the decision-making process of deep learning
models. This work presents a comprehensive overview of the evolution of Class
Activation Map methods over time. It also explores the metrics used for
evaluating CAMs and introduces auxiliary techniques to improve the saliency of
these methods. The overview concludes by proposing potential avenues for future
research in this evolving field.
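For context, the sketch below shows how a class activation map is computed in the original CAM formulation, assuming a ResNet-style backbone with global average pooling followed by a linear classifier; the helper name class_activation_map, the choice of layer4/fc, and the normalization are illustrative assumptions rather than code from the survey.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def class_activation_map(model, image, class_idx):
    """Vanilla CAM for a ResNet-style model (GAP + linear head).

    image: normalized tensor of shape (1, 3, H, W).
    Returns an (H, W) heatmap in [0, 1] at the input resolution.
    """
    model.eval()
    features = {}

    # Capture the activations of the last convolutional block.
    def hook(module, inputs, output):
        features["maps"] = output            # (1, C, h, w)

    handle = model.layer4.register_forward_hook(hook)
    with torch.no_grad():
        model(image)
    handle.remove()

    fmaps = features["maps"][0]              # (C, h, w)
    weights = model.fc.weight[class_idx]     # (C,) classifier weights for the target class
    cam = torch.einsum("c,chw->hw", weights, fmaps)
    cam = F.relu(cam)                        # keep only positive evidence
    cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)

# Hypothetical usage with a pretrained backbone and an input batch `x`:
# model = models.resnet18(weights="IMAGENET1K_V1")
# heatmap = class_activation_map(model, x, class_idx=243)
```

Later CAM variants (e.g. gradient-weighted ones) replace the classifier weights with other channel weightings, but the upsampled weighted sum of feature maps remains the common core.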
Explaining Classifiers using Adversarial Perturbations on the Perceptual Ball
We present a simple regularization of adversarial perturbations based upon
the perceptual loss. While the resulting perturbations remain imperceptible to
the human eye, they differ from existing adversarial perturbations in that they
are semi-sparse alterations that highlight objects and regions of interest
while leaving the background unaltered. As semantically meaningful adversarial
perturbations, they form a bridge between counterfactual explanations and
adversarial perturbations in the space of images. We evaluate our approach on
several standard explainability benchmarks, namely weak localization,
insertion-deletion, and the pointing game, demonstrating that perceptually
regularized counterfactuals are an effective explanation for image-based
classifiers.
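The abstract does not give the exact objective, but the general shape of a perceptually regularized perturbation can be sketched as follows: optimize a perturbation that changes the classifier's prediction while penalizing its distance to the original image in the feature space of a pretrained network. The function name, the use of Adam, the weighting lam, and the truncated-VGG feature extractor are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def perceptual_adversarial_perturbation(classifier, feature_net, image, label,
                                        steps=100, lr=0.01, lam=10.0):
    """Find a perturbation that changes the classifier's output while staying
    close to the original image in a perceptual feature space.

    feature_net maps an image to intermediate activations (e.g. truncated VGG);
    lam weights the perceptual regularizer.
    """
    classifier.eval()
    feature_net.eval()
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        ref_feats = feature_net(image)        # reference features of the clean image

    for _ in range(steps):
        perturbed = (image + delta).clamp(0, 1)
        logits = classifier(perturbed)
        # Push the prediction away from the true class ...
        attack_loss = -F.cross_entropy(logits, label)
        # ... while keeping the perturbed image perceptually close to the original.
        perceptual_loss = F.mse_loss(feature_net(perturbed), ref_feats)
        loss = attack_loss + lam * perceptual_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return delta.detach()

# Hypothetical usage: a truncated VGG as the perceptual feature extractor.
# vgg_feats = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
# delta = perceptual_adversarial_perturbation(classifier, vgg_feats, x, y)
```

Under such a penalty the optimizer concentrates the perturbation on features that actually drive the prediction, which is consistent with the semi-sparse, object-focused perturbations the paper reports.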
On Saliency Maps and Adversarial Robustness
A very recent trend has emerged that couples the notions of interpretability and
adversarial robustness, unlike earlier efforts which solely focused on good
interpretations or robustness against adversaries. Works have shown that
adversarially trained models exhibit more interpretable saliency maps than
their non-robust counterparts, and that this behavior can be quantified by
considering the alignment between input image and saliency map. In this work,
we provide a different perspective on this coupling and propose a method,
Saliency-based Adversarial Training (SAT), which uses saliency maps to improve
adversarial robustness of a model. In particular, we show that using
annotations such as bounding boxes and segmentation masks, already provided
with a dataset, as weak saliency maps, suffices to improve adversarial
robustness with no additional effort to generate the perturbations themselves.
Our empirical results on the CIFAR-10, CIFAR-100, Tiny ImageNet and Flower-17
datasets consistently corroborate this claim by showing improved adversarial
robustness using our method. We also show how using finer and
stronger saliency maps leads to more robust models, and how integrating SAT
with existing adversarial training methods further boosts their performance.
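The abstract does not spell out how the weak saliency maps enter training, so the step below is only one plausible reading: the saliency map (e.g. a bounding-box or segmentation mask) supplies the perturbation direction, so no attack has to be generated. The function name, the eps value, and the combined clean-plus-perturbed loss are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sat_training_step(model, optimizer, images, labels, saliency, eps=8 / 255):
    """One training step in the spirit of SAT: perturb each image along its weak
    saliency map (e.g. a bounding-box mask resized to the input resolution)
    instead of generating an adversarial perturbation.

    images:   (B, 3, H, W) in [0, 1]
    saliency: (B, 1, H, W) weak saliency maps in [0, 1]
    """
    model.train()
    # The saliency map itself supplies the perturbation direction, so no extra
    # effort is spent generating adversarial examples.
    perturbed = (images + eps * saliency).clamp(0, 1)
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(perturbed), labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```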