Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge
This paper presents a state-of-the-art model for visual question answering
(VQA), which won the first place in the 2017 VQA Challenge. VQA is a task of
significant importance for research in artificial intelligence, given its
multimodal nature, clear evaluation protocol, and potential real-world
applications. The performance of deep neural networks for VQA depends heavily
on architectural and hyperparameter choices. To help further research in
the area, we describe in detail our high-performing, though relatively simple
model. Through a massive exploration of architectures and hyperparameters
representing more than 3,000 GPU-hours, we identified tips and tricks that lead
to its success, namely: sigmoid outputs, soft training targets, image features
from bottom-up attention, gated tanh activations, output embeddings initialized
using GloVe and Google Images, large mini-batches, and smart shuffling of
training data. We provide a detailed analysis of their impact on performance to
assist others in making an appropriate selection.
Comment: Winner of the 2017 Visual Question Answering (VQA) Challenge at CVPR
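To make two of these tricks concrete, here is a minimal PyTorch sketch of gated tanh activations combined with sigmoid outputs trained against soft targets. The dimensions, module names, and answer-vocabulary size are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedTanh(nn.Module):
    """Gated non-linearity: y = tanh(W x + b) * sigmoid(W' x + b')."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.gate = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return torch.tanh(self.fc(x)) * torch.sigmoid(self.gate(x))

# Sigmoid outputs with soft targets: each candidate answer is scored
# independently, and targets are per-answer scores in [0, 1] rather
# than a one-hot class, so BCE replaces the usual softmax cross-entropy.
num_answers = 3000                            # hypothetical answer vocabulary
classifier = nn.Sequential(GatedTanh(512, 1024), nn.Linear(1024, num_answers))

features = torch.randn(32, 512)               # fused question+image features
soft_targets = torch.rand(32, num_answers)    # per-answer scores in [0, 1]
loss = F.binary_cross_entropy_with_logits(classifier(features), soft_targets)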
Coupled Ensembles of Neural Networks
We investigate in this paper the architecture of deep convolutional networks.
Building on existing state of the art models, we propose a reconfiguration of
the model parameters into several parallel branches at the global network
level, with each branch being a standalone CNN. We show that this arrangement
is an efficient way to significantly reduce the number of parameters without
losing performance, or to significantly improve performance for the same
number of parameters. The use of branches brings an additional form of
regularization. In addition to the split into parallel branches, we propose a
tighter coupling of these branches by placing the "fuse (averaging) layer"
before the Log-Likelihood and SoftMax layers during training. This gives
another significant performance improvement, the tighter coupling favouring the
learning of better representations, even at the level of the individual
branches. We refer to this branched architecture as "coupled ensembles". The
approach is very generic and can be applied with almost any DCNN architecture.
With coupled ensembles of DenseNet-BC and a parameter budget of 25M, we obtain
error rates of 2.92%, 15.68%, and 1.50% on CIFAR-10, CIFAR-100, and SVHN,
respectively. For the same budget, DenseNet-BC has error rates of 3.46%,
17.18%, and 1.8%, respectively. With ensembles of coupled ensembles of
DenseNet-BC networks, with 50M total parameters, we obtain error rates of
2.72%, 15.13%, and 1.42%, respectively, on these tasks.
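As a rough illustration of the coupling, the sketch below (in PyTorch; the linear "branches" are toy stand-ins for DenseNet-BC) averages per-branch log-softmax scores before a single negative log-likelihood loss, which is one way to realize the fuse layer described above. It is an assumed simplification, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CoupledEnsemble(nn.Module):
    def __init__(self, make_branch, num_branches):
        super().__init__()
        self.branches = nn.ModuleList([make_branch() for _ in range(num_branches)])

    def forward(self, x):
        # Fuse by averaging per-branch log-probabilities, so a single
        # loss trains all branches jointly through the fused output.
        logps = torch.stack([F.log_softmax(b(x), dim=1) for b in self.branches])
        return logps.mean(dim=0)

# Toy usage: four linear branches on 64-d inputs, 10 classes.
model = CoupledEnsemble(lambda: nn.Linear(64, 10), num_branches=4)
x, y = torch.randn(8, 64), torch.randint(0, 10, (8,))
loss = F.nll_loss(model(x), y)   # one loss couples all branches

At test time, the same fused output serves directly as the ensemble prediction.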
Predictive Uncertainty through Quantization
High-risk domains require reliable confidence estimates from predictive
models. Deep latent variable models provide these, but suffer from the rigid
variational distributions used for tractable inference, which err on the side
of overconfidence. We propose Stochastic Quantized Activation Distributions
(SQUAD), which imposes a flexible yet tractable distribution over discretized
latent variables. The proposed method is scalable, self-normalizing, and
sample-efficient. We demonstrate that the model fully utilizes the flexible
distribution, learns interesting non-linearities, and provides predictive
uncertainty of competitive quality.
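The abstract's core mechanism can be sketched as follows: each hidden unit receives a flexible categorical distribution over a fixed grid of discretized activation values, instead of a rigid Gaussian posterior. Everything below (bin placement, Gumbel-softmax sampling, the layer name) is an assumption for illustration, not the paper's exact construction.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantizedActivation(nn.Module):
    def __init__(self, in_dim, out_dim, num_bins=16):
        super().__init__()
        self.to_logits = nn.Linear(in_dim, out_dim * num_bins)
        # Fixed bin centers spanning a plausible activation range.
        self.register_buffer("centers", torch.linspace(-3.0, 3.0, num_bins))
        self.out_dim, self.num_bins = out_dim, num_bins

    def forward(self, x):
        logits = self.to_logits(x).view(-1, self.out_dim, self.num_bins)
        if self.training:
            # Sample one bin per unit; straight-through Gumbel-softmax
            # keeps the discrete sampling step differentiable.
            one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)
            return (one_hot * self.centers).sum(dim=-1)
        # At test time, take the expectation over bins; averaging several
        # stochastic forward passes would yield predictive uncertainty.
        return (F.softmax(logits, dim=-1) * self.centers).sum(dim=-1)

layer = QuantizedActivation(32, 64)
h = layer(torch.randn(8, 32))   # stochastic, discretized hidden activations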
ModDrop: adaptive multi-modal gesture recognition
We present a method for gesture detection and localisation based on
multi-scale and multi-modal deep learning. Each visual modality captures
spatial information at a particular spatial scale (such as motion of the upper
body or a hand), and the whole system operates at three temporal scales. Key to
our technique is a training strategy which exploits: i) careful initialization
of individual modalities; and ii) gradual fusion involving random dropping of
separate channels (dubbed ModDrop) for learning cross-modality correlations
while preserving uniqueness of each modality-specific representation. We
present experiments on the ChaLearn 2014 Looking at People Challenge gesture
recognition track, in which we placed first out of 17 teams. Fusing multiple
modalities at several spatial and temporal scales leads to a significant
increase in recognition rates, allowing the model to compensate for errors of
the individual classifiers as well as noise in the separate channels.
Furthermore, the proposed ModDrop training technique makes the classifier
robust to missing signals in one or several channels, allowing it to produce
meaningful predictions from any number of available modalities. In addition, we
demonstrate the applicability of the proposed fusion scheme to modalities of
arbitrary nature by experiments on the same dataset augmented with audio.
Comment: 14 pages, 7 figures
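A minimal PyTorch sketch of the ModDrop idea: during training, each modality's feature stream is zeroed out independently, per sample, with some probability before fusion, so the classifier learns cross-modal correlations while staying robust to missing channels. The linear encoders, dimensions, and drop probability are placeholders, not the paper's architecture.

import torch
import torch.nn as nn

class ModDropFusion(nn.Module):
    def __init__(self, encoders, fused_dim, num_classes, drop_prob=0.1):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        self.drop_prob = drop_prob
        self.head = nn.Linear(fused_dim, num_classes)

    def forward(self, inputs):                 # one tensor per modality
        feats = []
        for enc, x in zip(self.encoders, inputs):
            f = enc(x)
            if self.training:
                # Independently drop this whole modality for each sample.
                keep = torch.rand(f.size(0), 1, device=f.device) >= self.drop_prob
                f = f * keep.float()
            feats.append(f)
        return self.head(torch.cat(feats, dim=1))

# Toy usage: three "modalities" with simple linear encoders.
model = ModDropFusion([nn.Linear(16, 8) for _ in range(3)],
                      fused_dim=24, num_classes=20)
logits = model([torch.randn(4, 16) for _ in range(3)])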