Explaining nonlinear classification decisions with deep Taylor decomposition
Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems such as image recognition. Although these methods perform impressively well, they have a significant disadvantage: a lack of transparency, which limits the interpretability of the solution and thus the scope of application in practice. DNNs in particular act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method, called deep Taylor decomposition, efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets.
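To make the decomposition idea concrete, here is a minimal sketch, under assumptions, of relevance backpropagation in the spirit of deep Taylor decomposition: a toy two-layer ReLU network and a z+-style propagation rule stand in for the authors' actual implementation.

```python
# Minimal sketch: relevance backpropagation through a toy ReLU network.
# The two-layer network and the z+ rule below are illustrative assumptions,
# not the paper's reference implementation.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(784, 300)), np.zeros(300)
W2, b2 = rng.normal(size=(300, 10)), np.zeros(10)

def forward(x):
    a1 = np.maximum(0, x @ W1 + b1)          # hidden ReLU activations
    z2 = a1 @ W2 + b2                        # class scores
    return a1, z2

def zplus_backprop(x, target):
    """Redistribute the target class score onto the input elements (z+ rule)."""
    a1, z2 = forward(x)
    R2 = np.zeros_like(z2)
    R2[target] = z2[target]                  # relevance starts at the explained output
    # Layer 2 -> layer 1: only positive weights contribute in the z+ rule.
    Wp2 = np.maximum(0, W2)
    z = a1 @ Wp2 + 1e-9                      # per-neuron normalizers
    R1 = a1 * (Wp2 @ (R2 / z))
    # Layer 1 -> input: same rule applied once more (assumes non-negative inputs).
    Wp1 = np.maximum(0, W1)
    z = x @ Wp1 + 1e-9
    R0 = x * (Wp1 @ (R1 / z))
    return R0                                # one relevance value per input element

x = rng.uniform(0, 1, size=784)              # stand-in for an MNIST image
relevance = zplus_backprop(x, target=3)
print(relevance.shape, relevance.sum())
```

The property illustrated is conservation: the relevance assigned to the input elements sums, up to the numerical epsilon, to the class score being explained, which is what makes the decomposition a "heatmap" of the decision.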
Methods for Interpreting and Understanding Deep Neural Networks
This paper provides an entry point to the problem of interpreting a deep
neural network model and explaining its predictions. It is based on a tutorial
given at ICASSP 2017. It introduces some recently proposed techniques of
interpretation, along with theory, tricks and recommendations, to make the most
efficient use of these techniques on real data. It also discusses a number of
practical applications. Comment: 14 pages, 10 figures
Department of Computer Science and Engineering
As deep learning has grown fast, so has the desire to interpret deep learning black boxes. As
a result, many analysis tools have emerged to interpret them. Interpretation has in fact
popularized the use of deep learning in many areas, including research, manufacturing,
finance, and healthcare, which need relatively accurate and reliable decision-making processes.
However, there is something we should not overlook: uncertainty. Model uncertainties are
directly reflected in the interpretations of model decisions, because explanation tools depend
on the models. Therefore, the uncertainty of interpretations obtained from deep learning models
should also be taken into account, just as quality and cost are directly impacted by
measurement uncertainty. This has not been attempted yet.
We therefore suggest Bayesian input attribution rather than discrete input attribution in this
paper, approximating Bayesian inference in a deep Gaussian process through dropout and applying
it to input attribution. We then extract candidate inputs that can sufficiently affect the
output of the model, taking into account both the input attribution itself and its uncertainty.
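As a rough illustration of the proposed combination, the sketch below applies Monte Carlo dropout to a simple gradient-times-input attribution on a toy network; the tiny network, the choice of base attribution method, and the two-sigma candidate rule are illustrative assumptions rather than the thesis' exact procedure.

```python
# Minimal sketch: Monte Carlo dropout applied to input attribution.
# Repeating the attribution under different dropout masks yields a
# distribution over attributions instead of a single point estimate.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(20, 50)), rng.normal(size=(50,))
x = rng.normal(size=20)

def attribution_with_dropout(x, p_drop=0.5):
    """One stochastic gradient*input attribution with dropout kept at test time."""
    mask = (rng.uniform(size=50) > p_drop) / (1.0 - p_drop)   # MC dropout mask
    h_pre = x @ W1
    h = np.maximum(0, h_pre) * mask                           # dropped ReLU layer
    y = h @ W2                                                # scalar output
    grad_h_pre = W2 * mask * (h_pre > 0)                      # manual backward pass
    grad_x = W1 @ grad_h_pre
    return grad_x * x                                         # gradient * input

# Sample the attribution under many dropout masks.
samples = np.stack([attribution_with_dropout(x) for _ in range(200)])
mean_attr, std_attr = samples.mean(axis=0), samples.std(axis=0)

# Candidate inputs: large expected contribution relative to its uncertainty
# (the 2-sigma threshold is an assumption for illustration).
candidates = np.where(np.abs(mean_attr) > 2 * std_attr)[0]
print(candidates)
```

The mean and spread per input feature give exactly the two quantities the abstract asks to weigh against each other: the attribution itself and its uncertainty.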
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Current learning machines have successfully solved hard application problems,
reaching high accuracy and displaying seemingly "intelligent" behavior. Here we
apply recent techniques for explaining decisions of state-of-the-art learning
machines and analyze various tasks from computer vision and arcade games. This
showcases a spectrum of problem-solving behaviors ranging from naive and
short-sighted, to well-informed and strategic. We observe that standard
performance evaluation metrics can fail to distinguish these diverse
problem-solving behaviors. Furthermore, we propose a semi-automated Spectral
Relevance Analysis that provides a practically effective way of characterizing
and validating the behavior of nonlinear learning machines. This helps to
assess whether a learned model indeed delivers reliably for the problem that it
was conceived for. Furthermore, our work intends to add a voice of caution to
the ongoing excitement about machine intelligence and pledges to evaluate and
judge some of these recent successes in a more nuanced manner. Comment: Accepted for publication in Nature Communications
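A rough sketch of such a relevance-clustering pipeline is shown below; the block-averaged downsampling, the scikit-learn spectral clustering call, and the synthetic heatmaps are assumptions standing in for the paper's actual Spectral Relevance Analysis implementation.

```python
# Minimal sketch: cluster per-sample explanation heatmaps (e.g. from LRP)
# to surface groups of similar prediction strategies.
import numpy as np
from sklearn.cluster import SpectralClustering

def spray_clusters(heatmaps, coarse_size=16, n_clusters=4):
    """Group relevance heatmaps into clusters of similar explanation strategies."""
    n, h, w = heatmaps.shape
    # Downsample each heatmap by block-averaging so clustering sees coarse structure.
    bh, bw = h // coarse_size, w // coarse_size
    coarse = heatmaps[:, :bh * coarse_size, :bw * coarse_size]
    coarse = coarse.reshape(n, coarse_size, bh, coarse_size, bw).mean(axis=(2, 4))
    features = coarse.reshape(n, -1)
    # Spectral clustering on a nearest-neighbour affinity graph of the heatmaps.
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="nearest_neighbors", random_state=0
    ).fit_predict(features)
    return labels

# Usage with synthetic heatmaps standing in for real relevance maps.
fake_heatmaps = np.random.default_rng(0).random((100, 64, 64))
print(np.bincount(spray_clusters(fake_heatmaps)))
```

Clusters whose heatmaps concentrate relevance on spurious regions (watermarks, borders, copyright tags) can then be inspected manually, which is the semi-automated "Clever Hans" check the abstract describes.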
Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Understanding the flow of information in Deep Neural Networks (DNNs) is a
challenging problem that has gained increasing attention over the last few years.
While several methods have been proposed to explain network predictions, there
have been only a few attempts to compare them from a theoretical perspective.
What is more, no exhaustive empirical comparison has been performed in the
past. In this work, we analyze four gradient-based attribution methods and
formally prove conditions of equivalence and approximation between them. By
reformulating two of these methods, we construct a unified framework which
enables a direct comparison, as well as an easier implementation. Finally, we
propose a novel evaluation metric, called Sensitivity-n, and test the
gradient-based attribution methods alongside a simple perturbation-based
attribution method on several datasets in the domains of image and text
classification, using various network architectures. Comment: ICLR 2018
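The sketch below shows one plausible reading of a Sensitivity-n style test, under assumptions: features are removed by setting them to a zero baseline, and the summed attributions of each removed subset of n features are correlated with the resulting drop in the model output.

```python
# Minimal sketch of a Sensitivity-n style evaluation. `model` is any callable
# returning a scalar score; attributions are precomputed; removal is modelled
# by zeroing features (a common but simplifying baseline choice).
import numpy as np

def sensitivity_n(model, x, attributions, n, n_subsets=100, rng=None):
    """Correlation between summed attributions of n removed features and the score drop."""
    rng = rng or np.random.default_rng(0)
    base_score = model(x)
    attr_sums, score_drops = [], []
    for _ in range(n_subsets):
        idx = rng.choice(x.size, size=n, replace=False)   # random subset of n features
        x_masked = x.copy()
        x_masked[idx] = 0.0                               # remove the chosen features
        attr_sums.append(attributions[idx].sum())
        score_drops.append(base_score - model(x_masked))
    return np.corrcoef(attr_sums, score_drops)[0, 1]      # Pearson correlation

# Usage with a toy linear model, where gradient*input is an exact attribution.
w = np.random.default_rng(0).normal(size=50)
model = lambda x: float(x @ w)
x = np.random.default_rng(1).normal(size=50)
attributions = w * x                                      # gradient * input
print(sensitivity_n(model, x, attributions, n=10))
```

For the toy linear model the correlation is numerically 1, matching the intuition that gradient-times-input fully explains output changes in the linear case; for real nonlinear networks the correlation drops and differs between attribution methods, which is what the metric is designed to expose.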