Temporal Attention-Gated Model for Robust Sequence Classification
Typical techniques for sequence classification are designed for
well-segmented sequences which have been edited to remove noisy or irrelevant
parts. Therefore, such methods cannot be easily applied to noisy sequences
expected in real-world applications. In this paper, we present the Temporal
Attention-Gated Model (TAGM) which integrates ideas from attention models and
gated recurrent networks to better deal with noisy or unsegmented sequences.
Specifically, we extend the concept of the attention model to measure the relevance
of each observation (time step) of a sequence. We then use a novel gated
recurrent network to learn the hidden representation for the final prediction.
An important advantage of our approach is interpretability since the temporal
attention weights provide a meaningful value for the salience of each time step
in the sequence. We demonstrate the merits of our TAGM approach, both for
prediction accuracy and interpretability, on three different tasks: spoken
digit recognition, text-based sentiment analysis and visual event recognition.
Comment: Accepted by CVPR 201
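The gating idea described in the abstract, a per-time-step attention weight that decides how much each observation updates the recurrent state, can be sketched roughly as follows. This is an illustrative numpy toy, not the paper's architecture: the weight matrices, dimensions, and the simple tanh recurrence are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dimensions; all weights below are random stand-ins, not trained parameters.
d_in, d_h, T = 4, 8, 10
W_a = rng.normal(size=(d_in,)) * 0.1      # attention scorer (stand-in for the attention module)
W_x = rng.normal(size=(d_h, d_in)) * 0.1
W_h = rng.normal(size=(d_h, d_h)) * 0.1

x = rng.normal(size=(T, d_in))            # one noisy input sequence
h = np.zeros(d_h)
saliences = []
for t in range(T):
    a_t = float(sigmoid(W_a @ x[t]))      # scalar relevance of this time step
    saliences.append(a_t)
    h_tilde = np.tanh(W_x @ x[t] + W_h @ h)
    h = (1.0 - a_t) * h + a_t * h_tilde   # attention gates how much the step updates the state

print(np.round(saliences, 3))             # interpretable per-step salience weights
```

The per-step weights are what makes the approach interpretable: a step with salience near zero barely changes the hidden state, so the model effectively skips noisy observations.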
Towards better understanding of gradient-based attribution methods for Deep Neural Networks
Understanding the flow of information in Deep Neural Networks (DNNs) is a
challenging problem that has gained increasing attention over the last few years.
While several methods have been proposed to explain network predictions, there
have been only a few attempts to compare them from a theoretical perspective.
What is more, no exhaustive empirical comparison has been performed in the
past. In this work, we analyze four gradient-based attribution methods and
formally prove conditions of equivalence and approximation between them. By
reformulating two of these methods, we construct a unified framework which
enables a direct comparison, as well as an easier implementation. Finally, we
propose a novel evaluation metric, called Sensitivity-n, and test the
gradient-based attribution methods alongside a simple perturbation-based
attribution method on several datasets in the domains of image and text
classification, using various network architectures.
Comment: ICLR 201
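For a purely linear model the simplest gradient-based attribution, gradient times input, is exact, so a Sensitivity-n-style check (does a feature's attribution match the output change when the feature is removed?) passes by construction. A tiny illustrative sketch, with made-up weights:

```python
import numpy as np

# For a linear model f(x) = w @ x the gradient is just w, so gradient*input
# attributions exactly equal each feature's contribution to the output.
w = np.array([0.5, -1.0, 2.0])   # illustrative weights
x = np.array([1.0, 2.0, 3.0])    # illustrative input

def f(x):
    return w @ x

grad = w                          # df/dx for a linear model
attributions = grad * x           # gradient*input attribution

# Sensitivity-n style check with n = 1: the attribution of feature i should
# equal the output drop when feature i is zeroed out.
for i in range(3):
    x_masked = x.copy()
    x_masked[i] = 0.0
    assert np.isclose(attributions[i], f(x) - f(x_masked))

print(attributions)
```

For nonlinear networks this equivalence breaks down, which is exactly why the metric compares attribution sums against measured output changes rather than assuming they match.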
Large-scale Multi-label Text Classification - Revisiting Neural Networks
Neural networks have recently been proposed for multi-label classification
because they are able to capture and model label dependencies in the output
layer. In this work, we investigate limitations of BP-MLL, a neural network
(NN) architecture that aims at minimizing pairwise ranking error. Instead, we
propose to use a comparatively simple NN approach with recently proposed learning
techniques for large-scale multi-label text classification tasks. In
particular, we show that BP-MLL's ranking loss minimization can be efficiently
and effectively replaced with the commonly used cross entropy error function,
and demonstrate that several advances in neural network training that have been
developed in the realm of deep learning can be effectively employed in this
setting. Our experimental results show that simple NN models equipped with
advanced techniques such as rectified linear units, dropout, and AdaGrad
perform as well as or even outperform state-of-the-art approaches on six
large-scale textual datasets with diverse characteristics.
Comment: 16 pages, 4 figures, submitted to ECML 201
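The loss substitution the abstract argues for, per-label sigmoid outputs trained with cross entropy instead of BP-MLL's pairwise ranking loss, reduces to a few lines. The logits and target vector below are made-up numbers for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One document's logits over 4 labels and its multi-hot target vector
# (illustrative values, not from the paper's experiments).
logits = np.array([2.0, -1.0, 0.5, -3.0])
targets = np.array([1.0, 0.0, 1.0, 0.0])

probs = sigmoid(logits)
# Per-label binary cross entropy: each label is treated as an independent
# binary decision, replacing the pairwise ranking loss of BP-MLL.
bce = -(targets * np.log(probs) + (1 - targets) * np.log(1 - probs)).mean()
grad_logits = (probs - targets) / len(logits)   # gradient w.r.t. the logits

print(round(float(bce), 4))
```

The gradient `probs - targets` is simple and well-scaled, which is part of why this loss pairs well with the training tricks (ReLU, dropout, AdaGrad) the abstract mentions.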
Hopfield Networks in Relevance and Redundancy Feature Selection Applied to Classification of Biomedical High-Resolution Micro-CT Images
We study filter-based feature selection methods for classification of biomedical images. For feature selection, we use two filters: a relevance filter, which measures the usefulness of individual features for target prediction, and a redundancy filter, which measures similarity between features. As a selection method that combines relevance and redundancy, we try out a Hopfield network. We experimentally compare selection methods, running unitary redundancy and relevance filters, against a greedy algorithm with redundancy thresholds [9], the min-redundancy max-relevance integration [8,23,36], and our Hopfield network selection. We conclude that, on the whole, Hopfield selection was one of the most successful methods, outperforming min-redundancy max-relevance when more features are selected.
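The min-redundancy max-relevance baseline the abstract compares against can be illustrated with a greedy sketch. Everything here is a simplification: absolute correlation stands in for the mutual-information scores the original criterion uses, and the data are synthetic, with one informative feature and one redundant near-duplicate of it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 6 candidate features; feature 0 drives the target and
# feature 1 is a near-duplicate of it (i.e., redundant).
n = 200
X = rng.normal(size=(n, 6))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)
y = X[:, 0] + 0.5 * rng.normal(size=n)

def abs_corr(a, b):
    # Absolute correlation as a cheap stand-in for mutual information.
    return abs(np.corrcoef(a, b)[0, 1])

relevance = np.array([abs_corr(X[:, j], y) for j in range(6)])

# Greedy min-redundancy max-relevance: start from the most relevant feature,
# then pick the feature maximizing relevance minus mean redundancy to the
# already-selected set.
selected = [int(np.argmax(relevance))]
while len(selected) < 2:
    scores = {}
    for j in range(6):
        if j in selected:
            continue
        redundancy = np.mean([abs_corr(X[:, j], X[:, k]) for k in selected])
        scores[j] = relevance[j] - redundancy
    selected.append(max(scores, key=scores.get))

print(selected)
```

On this toy data the redundant twin of the first pick scores poorly (high relevance, but redundancy near 1), so the second slot goes to a different feature, which is the behavior the redundancy term exists to enforce.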
Localized Regression
The main problem with localized discriminant techniques is the curse of dimensionality, which seems to restrict their use to the case of few variables. This restriction does not hold if localization is combined with a reduction of dimension. In particular, it is shown that localization yields powerful classifiers even in higher dimensions if it is combined with locally adaptive selection of predictors. A robust localized logistic regression (LLR) method is developed for which all tuning parameters are chosen data-adaptively. In an extended simulation study we evaluate the potential of the proposed procedure for various types of data and compare it to other classification procedures. In addition, we demonstrate that automatic choice of localization, predictor selection and penalty parameters based on cross validation works well. Finally, the method is applied to real data sets and its real-world performance is compared to alternative procedures.
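The core combination of localization and logistic fitting can be sketched as kernel-weighted logistic regression around a query point. Everything below is an illustrative toy, not the paper's procedure: 1-D synthetic data, a fixed Gaussian bandwidth (the paper chooses such tuning parameters data-adaptively by cross validation), and plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D toy data whose decision rule varies by location: class 1 where sin(x) > 0.
X = rng.uniform(-3, 3, size=300)
y = (np.sin(X) > 0).astype(float)

def local_logistic_predict(x0, X, y, bandwidth=0.7, steps=200, lr=0.5):
    """Fit a logistic model by kernel-weighted gradient descent around x0.

    The Gaussian kernel localizes the fit: far-away points get little weight,
    so the linear logistic model only needs to be right near x0.
    """
    w = np.exp(-0.5 * ((X - x0) / bandwidth) ** 2)
    beta0, beta1 = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * X)))
        g = w * (p - y)                  # kernel-weighted logistic gradient
        beta0 -= lr * g.mean()
        beta1 -= lr * (g * X).mean()
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x0)))

# Near x0 = 1.0 the local majority class is 1 (sin > 0 on (0, pi)).
print(local_logistic_predict(1.0, X, y) > 0.5)
```

Because a fresh weighted fit is done per query point, a globally nonlinear boundary is handled by a sequence of locally linear ones, which is what lets localization sidestep a single global model.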