Methods for Interpreting and Understanding Deep Neural Networks
This paper provides an entry point to the problem of interpreting a deep
neural network model and explaining its predictions. It is based on a tutorial
given at ICASSP 2017. It introduces some recently proposed techniques of
interpretation, along with theory, tricks and recommendations, to make the most
efficient use of these techniques on real data. It also discusses a number of
practical applications.
Comment: 14 pages, 10 figures
Learning with Algebraic Invariances, and the Invariant Kernel Trick
When solving data analysis problems it is important to integrate prior
knowledge and/or structural invariances. This paper contributes a novel
framework for incorporating algebraic invariance structure into kernels. In
particular, we show that algebraic properties such as sign symmetries in the
data, phase independence, and scaling can be included easily by essentially
performing the kernel trick twice. We demonstrate the usefulness of our theory
in simulations on selected applications such as sign-invariant spectral
clustering and underdetermined ICA.
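The general idea of building invariance into a kernel can be illustrated with a minimal sketch. Note that this uses simple group averaging over the sign group as an illustrative stand-in, not the paper's exact two-fold kernel trick construction; the base RBF kernel and function names are chosen for the example.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Standard RBF base kernel (not invariant to sign flips)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sign_invariant_kernel(x, y, base=rbf_kernel):
    """Sign-invariant kernel obtained by averaging the base kernel
    over the sign group {+1, -1}: k_inv(x, y) == k_inv(x, -y)."""
    return 0.5 * (base(x, y) + base(x, -y))

x = np.array([1.0, -2.0])
y = -x  # a sign-flipped copy of x
# Under the invariant kernel, x and -x are as similar as x and x itself:
assert np.isclose(sign_invariant_kernel(x, y), sign_invariant_kernel(x, x))
```

Any kernel machine (e.g. spectral clustering) run on such an invariant kernel then treats a point and its sign-flipped version identically, which is the desired behavior for applications like sign-invariant spectral clustering.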
Bringing BCI into everyday life: Motor imagery in a pseudo realistic environment
Bringing Brain-Computer Interfaces (BCIs) into everyday life is a challenge because an out-of-lab environment implies the presence of variables that are largely beyond the control of the user and the software application. These can severely corrupt signal quality as well as the reliability of BCI control. Current BCI technology may fail in this application scenario because of the large amounts of noise, nonstationarity and movement artifacts. In this paper, we systematically investigate the performance of motor imagery BCI in a pseudo-realistic environment. In our study, 16 participants were asked to perform motor imagery tasks while dealing with different types of distractions, such as vibratory stimulations or listening tasks. Our experiments demonstrate that standard BCI procedures are not robust to these additional sources of noise, implying that methods which work well in a lab environment may perform poorly in realistic application scenarios. We discuss several promising research directions to tackle this important problem.
Funding: BMBF, 01GQ1115, Adaptive Brain-Computer Interfaces (BCI) in Non-Stationary Environments
Understanding and Comparing Deep Neural Networks for Age and Gender Classification
Recently, deep neural networks have demonstrated excellent performance in
recognizing age and gender from human face images. However, these models were
applied in a black-box manner with no information provided about which facial
features are actually used for prediction and how these features depend on
image preprocessing, model initialization and architecture choice. We present a
study investigating these different effects.
In detail, our work compares four popular neural network architectures,
studies the effect of pretraining, evaluates the robustness of the considered
alignment preprocessing methods via cross-method test set swapping, and
intuitively visualizes the models' prediction strategies in the given
preprocessing conditions using the recent Layer-wise Relevance Propagation
(LRP) algorithm. Our evaluations on the challenging Adience benchmark show that
suitable parameter initialization leads to a holistic perception of the input,
compensating for artefactual data representations. With a combination of simple
preprocessing steps, we reach state-of-the-art performance in gender recognition.
Comment: 8 pages, 5 figures, 5 tables. Presented at the ICCV 2017 Workshop: 7th IEEE International Workshop on Analysis and Modeling of Faces and Gestures
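The core relevance redistribution step of LRP can be sketched for a single dense layer. The following is a minimal NumPy implementation of the standard LRP-epsilon rule (the specific function name, toy weights, and epsilon value are illustrative choices, not taken from the paper):

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """LRP-epsilon rule for one dense layer.
    a: input activations, shape (d,)
    W: weight matrix, shape (d, k)
    b: biases, shape (k,)
    R_out: relevance assigned to the layer outputs, shape (k,)
    Returns the relevance redistributed onto the inputs, shape (d,)."""
    z = a @ W + b                 # pre-activations of the output neurons
    z = z + eps * np.sign(z)      # epsilon stabilizer avoids division by ~0
    s = R_out / z                 # relevance per unit of pre-activation
    return a * (W @ s)            # each input gets credit proportional to a_j * w_jk

# Toy example: two inputs feeding one output neuron with equal weights.
a = np.array([1.0, 2.0])
W = np.array([[1.0], [1.0]])
b = np.array([0.0])
R_in = lrp_epsilon_dense(a, W, b, R_out=np.array([3.0]))
# Relevance splits in proportion to the contributions a_j * w_j, i.e. 1:2,
# and (up to the epsilon term) the total relevance is conserved.
```

Applying this rule layer by layer, from the output back to the input, yields the pixel-wise relevance maps used to visualize prediction strategies.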
Explaining Recurrent Neural Network Predictions in Sentiment Analysis
Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown
to deliver insightful explanations in the form of input space relevances for
understanding feed-forward neural network classification decisions. In the
present work, we extend the usage of LRP to recurrent neural networks. We
propose a specific propagation rule applicable to multiplicative connections as
they arise in recurrent network architectures such as LSTMs and GRUs. We apply
our technique to a word-based bi-directional LSTM model on a five-class
sentiment prediction task, and evaluate the resulting LRP relevances both
qualitatively and quantitatively, obtaining better results than a related
gradient-based method used in previous work.
Comment: 9 pages, 4 figures, accepted for the EMNLP 2017 Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA)
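The proposed rule for multiplicative connections can be sketched as follows. In a gated product such as an LSTM cell update, one factor acts as a gate and the other as the signal; the rule described here assigns all incoming relevance to the signal and none to the gate (a minimal sketch; the function name and toy values are illustrative):

```python
import numpy as np

def lrp_multiplicative(R_out):
    """Relevance redistribution for an element-wise gated product
    z = gate * signal, as arises in LSTMs and GRUs: the gate receives
    zero relevance, the signal carries all of it, so total relevance
    is conserved."""
    R_signal = R_out.copy()
    R_gate = np.zeros_like(R_out)
    return R_signal, R_gate

# Toy relevance arriving at two gated units of an LSTM cell:
R = np.array([0.7, -0.2])
R_sig, R_gate = lrp_multiplicative(R)
# R_sig carries all relevance onward; R_gate is zero everywhere.
```

The intuition behind this design choice is that the gate only modulates how much signal passes through, while the signal itself carries the content that should be credited.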