2,990 research outputs found
Methods for Interpreting and Understanding Deep Neural Networks
This paper provides an entry point to the problem of interpreting a deep
neural network model and explaining its predictions. It is based on a tutorial
given at ICASSP 2017. It introduces some recently proposed techniques of
interpretation, along with theory, tricks, and recommendations, to make the
most efficient use of these techniques on real data. It also discusses a
number of practical applications.
Comment: 14 pages, 10 figures
Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks
As Deep Neural Networks (DNNs) have demonstrated superhuman performance in a
variety of fields, there is an increasing interest in understanding the complex
internal mechanisms of DNNs. In this paper, we propose Relative Attributing
Propagation (RAP), which decomposes the output predictions of DNNs with a new
perspective of separating the relevant (positive) and irrelevant (negative)
attributions according to the relative influence between the layers. The
relevance of each neuron is identified with respect to its degree of
contribution, separated into positive and negative, while preserving the
conservation rule. Considering the relevance assigned to neurons in terms of
relative priority, RAP assigns each neuron a bipolar importance score with
respect to the output, ranging from highly relevant to highly
irrelevant. Our method therefore makes it possible to interpret DNNs with much
clearer and more focused visualizations of the separated attributions than
conventional explanation methods. To verify that the attributions propagated by
RAP correctly reflect their intended meaning, we use three evaluation metrics:
(i) outside-inside relevance ratio, (ii) segmentation mIoU, and (iii) region
perturbation. Across all experiments and metrics, our method shows a sizable
gap over the existing literature. Our source code is available at
\url{https://github.com/wjNam/Relative_Attributing_Propagation}.
Comment: 8 pages, 7 figures, accepted paper at the AAAI Conference on
Artificial Intelligence (AAAI), 202
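The sign-separated propagation the abstract describes can be illustrated with a minimal sketch for a single fully connected layer. This is not the authors' implementation (that lives at the linked repository); the function name, the simple z-rule-style redistribution, and the toy numbers below are all assumptions made for illustration. The idea shown is the core one: split each input's contribution into positive and negative parts, then redistribute the upper-layer relevance proportionally within each sign so that the total relevance is conserved.

```python
# Illustrative sketch (not the authors' code) of relevance propagation
# through one linear layer with positive/negative contributions kept
# separate, in the spirit of RAP's sign-separated attributions.
import numpy as np

def propagate_relevance(a, W, R_out, eps=1e-9):
    """Split contributions z_ij = a_i * W_ij by sign and redistribute
    the upper-layer relevance R_out proportionally within each sign,
    so that sum(R_pos) + sum(R_neg) == sum(R_out) (conservation)."""
    z = a[:, None] * W                   # contribution of input i to output j
    z_pos = np.clip(z, 0, None)          # positive contributions only
    z_neg = np.clip(z, None, 0)          # negative contributions only
    # Normalize per output unit so relevance is conserved within each sign.
    R_pos = z_pos / (z_pos.sum(axis=0) + eps) * np.clip(R_out, 0, None)
    R_neg = z_neg / (z_neg.sum(axis=0) - eps) * np.clip(R_out, None, 0)
    return R_pos.sum(axis=1), R_neg.sum(axis=1)

# Toy example: 3 input neurons, 2 output neurons.
a = np.array([1.0, 2.0, 0.5])
W = np.array([[0.5, -1.0],
              [1.0,  0.5],
              [-0.5, 1.0]])
R_out = np.array([2.0, -1.0])
R_pos, R_neg = propagate_relevance(a, W, R_out)
```

Each input neuron ends up with both a positive and a negative relevance score, and the two sums together equal the relevance that entered the layer, which is the conservation property the abstract refers to.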