5,129 research outputs found
A generative-discriminative learning model for noisy information fusion
This article is concerned with the acquisition of multimodal integration capacities by learning algorithms. Humans seem to perform statistically optimal fusion, and this ability may be gradually learned from experience. In order to stress the advantage of learning approaches in contrast to hand-coded models, we propose a generative-discriminative learning architecture that avoids simplifying assumptions on prior distributions and copes with realistic relationships between observations and underlying values. We base our investigation on a simple self-organized approach, for which we show statistical near-optimality properties by explicit comparison to an equivalent Bayesian model on a realistic artificial dataset.
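The "statistically optimal fusion" the abstract refers to is commonly modeled as inverse-variance weighting of independent noisy cues, which is the maximum-likelihood combination under Gaussian noise. A minimal sketch of that baseline (the values and modality labels are illustrative, not from the paper):

```python
# Two noisy modality readings of the same underlying value,
# each with a known noise variance (hypothetical numbers).
x_a, var_a = 2.0, 0.5   # e.g. a reliable visual estimate
x_b, var_b = 3.0, 2.0   # e.g. a noisier auditory estimate

# Maximum-likelihood fusion under independent Gaussian noise:
# weight each cue by its inverse variance.
w_a = 1.0 / var_a
w_b = 1.0 / var_b
fused = (w_a * x_a + w_b * x_b) / (w_a + w_b)
fused_var = 1.0 / (w_a + w_b)

print(fused)      # 2.2 -- pulled toward the more reliable cue
print(fused_var)  # 0.4 -- lower variance than either input alone
```

The fused variance is smaller than either input variance, which is the hallmark of optimal cue combination that learned models are compared against.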
Temporal Attention-Gated Model for Robust Sequence Classification
Typical techniques for sequence classification are designed for
well-segmented sequences which have been edited to remove noisy or irrelevant
parts. Therefore, such methods cannot be easily applied to the noisy sequences
expected in real-world applications. In this paper, we present the Temporal
Attention-Gated Model (TAGM) which integrates ideas from attention models and
gated recurrent networks to better deal with noisy or unsegmented sequences.
Specifically, we extend the attention-model concept to measure the relevance
of each observation (time step) of a sequence. We then use a novel gated
recurrent network to learn the hidden representation for the final prediction.
An important advantage of our approach is interpretability since the temporal
attention weights provide a meaningful value for the salience of each time step
in the sequence. We demonstrate the merits of our TAGM approach, both for
prediction accuracy and interpretability, on three different tasks: spoken
digit recognition, text-based sentiment analysis, and visual event recognition.
Comment: Accepted by CVPR 201
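The core mechanism the abstract describes, per-time-step attention weights gating a recurrent update so irrelevant steps barely change the hidden state, can be sketched as follows. The update rule and all names here are a plausible reconstruction, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_gated_rnn(x, W, U, b, a):
    """Sketch of an attention-gated recurrent update: a hypothetical
    attention weight a[t] in [0, 1] controls how much time step t
    updates the hidden state, so low-relevance steps are skipped:
        h_t = (1 - a_t) * h_{t-1} + a_t * tanh(W x_t + U h_{t-1} + b)
    """
    h = np.zeros(U.shape[0])
    for t in range(len(x)):
        candidate = np.tanh(W @ x[t] + U @ h + b)
        h = (1.0 - a[t]) * h + a[t] * candidate
    return h

# Toy dimensions and random parameters (illustrative only).
T, d_in, d_h = 5, 3, 4
x = rng.normal(size=(T, d_in))
W = rng.normal(size=(d_h, d_in))
U = rng.normal(size=(d_h, d_h))
b = np.zeros(d_h)

# With all attention weights at zero, no step is attended and the
# hidden state stays at its initial value -- the gating in action.
h_ignored = attention_gated_rnn(x, W, U, b, a=np.zeros(T))
print(h_ignored)  # all zeros
```

The interpretability claim in the abstract follows from this structure: each a[t] directly reports how much the corresponding observation influenced the final representation.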