Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks
As Deep Neural Networks (DNNs) have demonstrated superhuman performance in a
variety of fields, there is an increasing interest in understanding the complex
internal mechanisms of DNNs. In this paper, we propose Relative Attributing
Propagation (RAP), which decomposes the output predictions of DNNs with a new
perspective of separating the relevant (positive) and irrelevant (negative)
attributions according to the relative influence between the layers. The
relevance of each neuron is identified with respect to its degree of
contribution, separated into positive and negative, while preserving the
conservation rule. Considering the relevance assigned to neurons in terms of
relative priority, RAP assigns each neuron a bipolar importance score with
respect to the output, ranging from highly relevant to highly irrelevant. Our
method therefore makes it possible to interpret DNNs with much clearer and more
focused visualizations of the separated attributions than conventional
explanation methods. To verify that the attributions propagated by RAP carry
their intended meaning, we use three evaluation metrics: (i) outside-inside
relevance ratio, (ii) segmentation mIoU, and (iii) region
perturbation. Across all experiments and metrics, we show a sizable gap over
the existing literature. Our source code is available at
\url{https://github.com/wjNam/Relative_Attributing_Propagation}.
Comment: 8 pages, 7 figures; accepted at the AAAI Conference on Artificial
Intelligence (AAAI), 2020.
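The core idea of sign-separated relevance propagation can be sketched for a single linear layer as follows. This is an illustrative, LRP-style sketch rather than the exact RAP rule from the paper, and all function and variable names are my own:

```python
import numpy as np

def propagate_relevance(w, x, r_out):
    """Propagate relevance through one linear layer, splitting the
    contributions z_ij = x_i * w_ij into positive and negative parts.
    Illustrative sketch of sign separation, not the exact RAP rule."""
    z = x[:, None] * w               # contribution of input i to output j
    zp = np.clip(z, 0, None)         # positive contributions
    zn = np.clip(z, None, 0)         # negative contributions
    eps = 1e-9                       # numerical stabilizer
    # Distribute positive output relevance among positive contributions,
    # negative output relevance among negative contributions.
    rp = zp / (zp.sum(axis=0) + eps) * np.clip(r_out, 0, None)
    rn = zn / (zn.sum(axis=0) - eps) * np.clip(r_out, None, 0)
    return (rp + rn).sum(axis=1)     # bipolar relevance per input neuron

x = np.array([1.0, -2.0, 0.5])
w = np.random.default_rng(0).normal(size=(3, 4))
r_out = x @ w                        # initialize relevance with the layer output
r_in = propagate_relevance(w, x, r_out)
```

Because each output neuron's relevance is redistributed proportionally among its same-signed input contributions, the total relevance is (up to the stabilizer) conserved across the layer, which is the conservation property the abstract refers to.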
Deep Video Color Propagation
Traditional approaches for color propagation in videos rely on some form of
matching between consecutive video frames. Using appearance descriptors, colors
are then propagated both spatially and temporally. These methods, however, are
computationally expensive and do not take advantage of semantic information of
the scene. In this work we propose a deep learning framework for color
propagation that combines a local strategy, to propagate colors frame-by-frame
ensuring temporal stability, and a global strategy, using semantics for color
propagation within a longer range. Our evaluation shows the superiority of our
strategy over existing video and image color propagation methods, as well as
over neural photo-realistic style transfer approaches.
Comment: BMVC 2018.
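As a toy illustration of the local, frame-by-frame strategy, the sketch below copies each pixel's color from the best intensity match in a small neighbourhood of the previous frame. This hand-crafted matching is only a stand-in for the learned matching the paper proposes; all names are illustrative:

```python
import numpy as np

def propagate_colors(prev_gray, prev_color, cur_gray, radius=1):
    """Toy local color propagation: each pixel in the current frame takes
    the color of the best intensity match in a small neighbourhood of the
    previous frame. Stand-in for the paper's learned matching."""
    h, w = cur_gray.shape
    out = np.zeros((h, w, 3))
    for i in range(h):
        for j in range(w):
            best, best_d = (i, j), np.inf
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        d = abs(float(cur_gray[i, j]) - float(prev_gray[ii, jj]))
                        if d < best_d:
                            best_d, best = d, (ii, jj)
            out[i, j] = prev_color[best]   # copy color from the matched pixel
    return out

# A 2x2 frame with two intensities mapped to red and blue.
prev_gray = np.array([[0.0, 1.0], [0.0, 1.0]])
prev_color = np.zeros((2, 2, 3))
prev_color[:, 0] = [1.0, 0.0, 0.0]
prev_color[:, 1] = [0.0, 0.0, 1.0]
result = propagate_colors(prev_gray, prev_color, prev_gray)
```

Such purely local matching is exactly what drifts over long sequences, which motivates the paper's complementary global, semantics-driven strategy.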
Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier features
One-class support vector machine (OC-SVM) for a long time has been one of the
most effective anomaly detection methods and extensively adopted in both
research and industrial applications. The biggest issue for OC-SVM, however,
is its limited ability to operate on large, high-dimensional datasets due to
optimization complexity. Such problems may be mitigated by dimensionality
reduction techniques such as manifold learning or autoencoders, but previous
work often treats representation learning and anomaly prediction separately.
In this paper, we propose the autoencoder-based one-class support vector
machine (AE-1SVM), which brings OC-SVM into the deep learning context by
approximating the radial basis kernel with random Fourier features, combining
it with a representation learning architecture, and using stochastic gradient
descent for end-to-end training. Interestingly, this also opens up the
possibility of using gradient-based attribution methods to explain the
decision making in anomaly detection, which has long been challenging due to
the implicit mapping between the input space and the kernel space.
To the best of our knowledge, this is the first work to study the
interpretability of deep learning in anomaly detection. We evaluate our method
on a wide range of unsupervised anomaly detection tasks in which our end-to-end
training architecture achieves a performance significantly better than the
previous work using separate training.
Comment: Accepted at the European Conference on Machine Learning and
Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2018.
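The random Fourier feature approximation of the RBF kernel that AE-1SVM builds on can be sketched as below. This is the standard Rahimi-Recht construction; the variable names are my own, not the paper's code:

```python
import numpy as np

def rff(x, w, b):
    """Random Fourier feature map z(x) such that
    z(x) @ z(y) ≈ exp(-||x - y||^2 / (2 * sigma^2))."""
    d = w.shape[1]
    return np.sqrt(2.0 / d) * np.cos(x @ w + b)

rng = np.random.default_rng(0)
sigma = 1.0
n_features = 10000                   # more features -> better approximation
dim = 3
# Frequencies drawn from the Fourier transform of the RBF kernel,
# phases drawn uniformly from [0, 2*pi).
w = rng.normal(scale=1.0 / sigma, size=(dim, n_features))
b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)

x = np.array([0.2, -0.1, 0.4])
y = np.array([-0.3, 0.5, 0.1])
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
approx = rff(x, w, b) @ rff(y, w, b)
```

Once inputs are mapped through `rff`, the OC-SVM objective becomes linear in the feature space and therefore amenable to stochastic gradient descent, which is what makes the joint end-to-end training in the paper possible.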
Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Understanding the flow of information in Deep Neural Networks (DNNs) is a
challenging problem that has gained increasing attention over the last few years.
While several methods have been proposed to explain network predictions, there
have been only a few attempts to compare them from a theoretical perspective.
What is more, no exhaustive empirical comparison has been performed in the
past. In this work, we analyze four gradient-based attribution methods and
formally prove conditions of equivalence and approximation between them. By
reformulating two of these methods, we construct a unified framework which
enables a direct comparison, as well as an easier implementation. Finally, we
propose a novel evaluation metric, called Sensitivity-n, and test the
gradient-based attribution methods alongside a simple perturbation-based
attribution method on several datasets in the domains of image and text
classification, using various network architectures.
Comment: ICLR 2018.
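The Sensitivity-n idea, correlating the summed attributions of an n-feature subset with the change in output when that subset is removed, can be sketched on a toy linear model, where gradient-times-input attributions satisfy the property exactly. All names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=8)
f = lambda x: float(w @ x)           # toy model: linear, zero baseline

x = rng.normal(size=8)
attr = w * x                         # gradient * input attributions

n = 3                                # subset size for Sensitivity-n
deltas, attr_sums = [], []
for _ in range(200):
    idx = rng.choice(8, size=n, replace=False)
    x_masked = x.copy()
    x_masked[idx] = 0.0              # remove the subset (zero baseline)
    deltas.append(f(x) - f(x_masked))
    attr_sums.append(attr[idx].sum())

# Pearson correlation between summed attributions and output change.
corr = np.corrcoef(attr_sums, deltas)[0, 1]
```

For a linear model the output change equals the summed attributions of the removed features, so the correlation is 1; for real networks the metric measures how far each attribution method deviates from this ideal.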