Why do These Match? Explaining the Behavior of Image Similarity Models
Explaining a deep learning model can help users understand its behavior and
allow researchers to discern its shortcomings. Recent work has primarily
focused on explaining models for tasks like image classification or visual
question answering. In this paper, we introduce Salient Attributes for Network
Explanation (SANE) to explain image similarity models, where a model's output
is a score measuring the similarity of two inputs rather than a classification
score. In this task, an explanation depends on both of the input images, so
standard methods do not apply. Our SANE explanations pair a saliency map
identifying important image regions with an attribute that best explains the
match. We find that our explanations provide additional information not
typically captured by saliency maps alone, and can also improve performance on
the classic task of attribute recognition. Our approach's ability to generalize
is demonstrated on two datasets from diverse domains, Polyvore Outfits and
Animals with Attributes 2. Code available at:
https://github.com/VisionLearningGroup/SANE
Comment: Accepted at ECCV 2020
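The abstract notes that an explanation for a similarity model must depend on both input images rather than a single classification score. As a rough illustration of the saliency-map half of that idea, the sketch below computes an occlusion-style saliency map against a pairwise similarity score: regions whose masking most reduces the score are marked important. The `embed` network, cosine similarity, patch size, and zero-masking are assumptions for illustration, not the authors' SANE procedure.

```python
import torch
import torch.nn.functional as F


def similarity(embed, img_a, img_b):
    """Cosine similarity between embeddings of two (1, C, H, W) image tensors."""
    za, zb = embed(img_a), embed(img_b)
    return F.cosine_similarity(za.flatten(1), zb.flatten(1)).item()


@torch.no_grad()
def occlusion_saliency(embed, query, reference, patch=32, stride=16):
    """Saliency over `query`: average drop in similarity when each patch is masked.

    Unlike classification saliency, the map depends on *both* inputs, because
    importance is measured against the query/reference similarity score.
    """
    _, _, h, w = query.shape
    base = similarity(embed, query, reference)
    sal = torch.zeros(h, w)
    counts = torch.zeros(h, w)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = query.clone()
            occluded[:, :, y:y + patch, x:x + patch] = 0.0  # mask one region
            drop = base - similarity(embed, occluded, reference)
            sal[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return sal / counts.clamp(min=1)
```

SANE additionally selects an attribute that best explains the match; the sketch covers only the generic saliency component.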
Explaining Explanations in AI
Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
Right for the Right Reason: Training Agnostic Networks
We consider the problem of a neural network being requested to classify
images (or other inputs) without making implicit use of a "protected concept",
that is a concept that should not play any role in the decision of the network.
Typically these concepts include information such as gender or race, or other
contextual information such as image backgrounds that might be implicitly
reflected in unknown correlations with other variables, making it insufficient
to simply remove them from the input features. In other words, making accurate
predictions is not good enough if those predictions rely on information that
should not be used: predictive performance is not the only important metric for
learning systems. We apply a method developed in the context of domain
adaptation to address this problem of "being right for the right reason", where
we request a classifier to make a decision in a way that is entirely 'agnostic'
to a given protected concept (e.g. gender, race, background etc.), even if this
could be implicitly reflected in other attributes via unknown correlations.
After defining the concept of an 'agnostic model', we demonstrate how the
Domain-Adversarial Neural Network can remove unwanted information from a model
using a gradient reversal layer.
Comment: Author's original version
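The abstract names the Domain-Adversarial Neural Network's gradient reversal layer as the mechanism for removing protected-concept information. Below is a minimal PyTorch sketch of that layer together with an adversarial head that tries to recover the protected concept from the shared representation; the module names, two-head layout, and dimensions are illustrative assumptions, not the authors' exact architecture.

```python
import torch
from torch import nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class AgnosticNet(nn.Module):
    """Shared encoder with a task head and an adversarial 'protected concept' head."""

    def __init__(self, in_dim, hidden, n_classes, n_protected):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, n_classes)
        self.protected_head = nn.Linear(hidden, n_protected)

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        # The adversary is trained to predict the protected concept from z;
        # the reversed gradient pushes the encoder to discard that information,
        # even when it is only implicitly reflected through correlated features.
        return self.task_head(z), self.protected_head(grad_reverse(z, lambd))
```

Training minimizes the task loss plus the adversary's loss; because the adversary's gradient is reversed before reaching the encoder, the shared features become uninformative about the protected concept while remaining useful for the main task.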