Areas of Attention for Image Captioning
We propose "Areas of Attention", a novel attention-based model for automatic
image captioning. Our approach models the dependencies between image regions,
caption words, and the state of an RNN language model, using three pairwise
interactions. In contrast to previous attention-based approaches that associate
image regions only to the RNN state, our method allows a direct association
between caption words and image regions. During training these associations are
inferred from image-level captions, akin to weakly-supervised object detector
training. These associations help to improve captioning by localizing the
corresponding regions during testing. We also propose and compare different
ways of generating attention areas: CNN activation grids, object proposals, and
spatial transformer networks applied in a convolutional fashion. Spatial
transformers give the best results. They allow for image specific attention
areas, and can be trained jointly with the rest of the network. Our attention
mechanism and spatial transformer attention areas together yield
state-of-the-art results on the MSCOCO dataset, and lead to meaningful latent
semantic structure in the generated captions.
Comment: Accepted at ICCV 2017
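The three pairwise interactions can be made concrete with a small sketch. The following NumPy code is a minimal illustration based only on the abstract, not the authors' implementation; the matrix names (M_hw, M_hr, M_wr) and the joint softmax over word-region pairs are assumptions.

```python
import numpy as np

def areas_of_attention_scores(h, W, R, M_hw, M_hr, M_wr):
    """Hypothetical sketch of the three pairwise interactions: scores
    over (word, region) pairs combine a direct word-region term with
    word-state and region-state terms.

    h    : (d_h,)      current RNN language-model state
    W    : (n_w, d_w)  embeddings of candidate caption words
    R    : (n_r, d_r)  features of candidate image regions
    M_hw, M_hr, M_wr : bilinear interaction matrices (assumed names)
    Returns a joint distribution p[w, r] over word-region pairs.
    """
    s_hw = W @ (M_hw.T @ h)        # (n_w,)     word-state scores
    s_hr = R @ (M_hr.T @ h)        # (n_r,)     region-state scores
    s_wr = W @ M_wr @ R.T          # (n_w, n_r) direct word-region scores
    logits = s_wr + s_hw[:, None] + s_hr[None, :]
    p = np.exp(logits - logits.max())
    return p / p.sum()             # joint softmax over all (word, region) pairs

# Tiny usage example with random inputs.
rng = np.random.default_rng(0)
d_h, d_w, d_r, n_w, n_r = 6, 5, 4, 10, 7
p = areas_of_attention_scores(
    rng.normal(size=d_h), rng.normal(size=(n_w, d_w)), rng.normal(size=(n_r, d_r)),
    rng.normal(size=(d_h, d_w)), rng.normal(size=(d_h, d_r)), rng.normal(size=(d_w, d_r)))
print(p.shape, p.sum())  # (10, 7), sums to 1.0
```

The direct word-region term s_wr is what distinguishes this scheme from attention that conditions only on the RNN state: marginalizing p over words gives region attention, and the learned pairings localize the regions a word refers to.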
NMTPY: A Flexible Toolkit for Advanced Neural Machine Translation Systems
In this paper, we present nmtpy, a flexible Python toolkit based on Theano
for training Neural Machine Translation and other neural sequence-to-sequence
architectures. nmtpy decouples the specification of a network from the training
and inference utilities to simplify the addition of a new architecture and
reduce the amount of boilerplate code to be written. nmtpy has been used for
LIUM's top-ranked submissions to WMT Multimodal Machine Translation and News
Translation tasks in 2016 and 2017.
Comment: 10 pages, 3 figures
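nmtpy's actual interfaces live in its repository; purely to illustrate the design idea of decoupling an architecture's specification from shared training utilities, here is a hypothetical Python sketch. All class and method names are invented for illustration and are not nmtpy's API.

```python
# Hypothetical sketch of the decoupling idea: a new architecture only
# defines how to compute its loss, while a generic Trainer is reused
# unchanged. Names are invented; this is NOT nmtpy's actual API.

class Seq2SeqModel:
    """Base class: an architecture declares its loss, nothing else."""
    def loss(self, batch):
        raise NotImplementedError

class ToyBaseline(Seq2SeqModel):
    """Placeholder 'architecture': loss is just the batch size."""
    def loss(self, batch):
        return float(len(batch))

class Trainer:
    """Generic training loop shared across all architectures."""
    def __init__(self, model, data):
        self.model, self.data = model, data

    def fit(self, epochs=1):
        for epoch in range(epochs):
            total = sum(self.model.loss(b) for b in self.data)
            print(f"epoch {epoch}: loss {total:.2f}")

Trainer(ToyBaseline(), data=[["a", "b"], ["c"]]).fit()
```

With this split, adding a new architecture means writing only the model-specific code; the training and inference boilerplate stays in one place, which is the reduction the abstract describes.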
Exploring the sequence length bottleneck in the Transformer for Image Captioning
Most recent state-of-the-art architectures rely on combinations and
variations of three approaches: convolutional, recurrent, and self-attentive
methods. Our work attempts to lay the groundwork for a new research direction
in sequence modeling based on the idea of modifying the sequence length. To
that end, we propose a new method called the "Expansion Mechanism", which
transforms the input sequence, either dynamically or statically, into a new one
featuring a different sequence length. Furthermore, we introduce a novel
architecture that exploits this method and achieves competitive performance on
the MS-COCO 2014 dataset, yielding 134.6 and 131.4 CIDEr-D on the Karpathy
test split in the ensemble and single-model configurations respectively, and
130 CIDEr-D on the official online evaluation server, despite being neither
recurrent nor fully attentive. At the same time, we address efficiency in our
design and introduce a convenient training strategy that, unlike the standard
one, is suitable for most computational budgets. Source code is
available at https://github.com/jchenghu/explorin
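As a rough illustration of changing the sequence length, the sketch below maps an n-token sequence to m slots with learned query vectors and then back; the function names and the attention-style mixing are assumptions based only on the abstract, not the paper's exact Expansion Mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def expand(X, Q):
    """Map an (n, d) sequence to an (m, d) one: each of the m learned
    query vectors in Q selects an attention-weighted mix of the inputs."""
    A = softmax(Q @ X.T / np.sqrt(X.shape[1]))   # (m, n) mixing weights
    return A @ X                                  # (m, d)

def contract(Y, X):
    """Map the processed (m, d) sequence back to length n, using the
    original inputs X as queries."""
    A = softmax(X @ Y.T / np.sqrt(Y.shape[1]))   # (n, m) mixing weights
    return A @ Y                                  # (n, d)

rng = np.random.default_rng(0)
n, m, d = 5, 12, 8                    # expand a 5-token sequence to 12 slots
X = rng.normal(size=(n, d))           # input sequence
Q = rng.normal(size=(m, d))           # learned expansion queries (static m)
Y = expand(X, Q)                      # (12, 8): processed at the new length
Z = contract(Y, X)                    # (5, 8): restored to the input length
print(Y.shape, Z.shape)
```

A static expansion fixes m in advance (as above); a dynamic one would choose m per input, which matches the abstract's "either dynamically or statically" phrasing.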