41 research outputs found
What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator?
In neural image captioning systems, a recurrent neural network (RNN) is
typically viewed as the primary `generation' component. This view suggests that
the image features should be `injected' into the RNN. This is in fact the
dominant view in the literature. Alternatively, the RNN can instead be viewed
as only encoding the previously generated words. This view suggests that the
RNN should only be used to encode linguistic features and that only the final
representation should be `merged' with the image features at a later stage.
This paper compares these two architectures. We find that, in general, late
merging outperforms injection, suggesting that RNNs are better viewed as
encoders, rather than generators.
Comment: Appears in: Proceedings of the 10th International Conference on Natural Language Generation (INLG'17).
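The inject/merge distinction above can be made concrete with a minimal numpy sketch. This is not the paper's implementation: the dimensions, random weights, and function names are illustrative assumptions, and a plain tanh RNN stands in for whatever recurrent cell the systems actually use. In "inject", the image features enter the RNN input at every timestep; in "merge", the RNN encodes the words alone and the image features are concatenated only at the output layer.

```python
import numpy as np

rng = np.random.default_rng(0)
D_w, D_im, D_h, V = 8, 10, 16, 20  # embed, image, hidden, vocab sizes (illustrative)

def rnn_step(x, h, Wx, Wh):
    # One step of a plain tanh RNN (stand-in for the actual recurrent cell).
    return np.tanh(x @ Wx + h @ Wh)

def inject_caption_scores(words, img, params):
    """Inject: image features are part of the RNN input at every step."""
    Wx, Wh, Wo = params
    h = np.zeros(D_h)
    for w in words:
        x = np.concatenate([w, img])   # condition the RNN on the image
        h = rnn_step(x, h, Wx, Wh)
    return h @ Wo                      # next-word scores

def merge_caption_scores(words, img, params):
    """Merge: RNN encodes words only; image joins at the output layer."""
    Wx, Wh, Wo = params
    h = np.zeros(D_h)
    for w in words:
        h = rnn_step(w, h, Wx, Wh)     # purely linguistic encoding
    return np.concatenate([h, img]) @ Wo  # late multimodal merge

words = [rng.standard_normal(D_w) for _ in range(3)]
img = rng.standard_normal(D_im)

inj_params = (rng.standard_normal((D_w + D_im, D_h)),
              rng.standard_normal((D_h, D_h)),
              rng.standard_normal((D_h, V)))
mrg_params = (rng.standard_normal((D_w, D_h)),
              rng.standard_normal((D_h, D_h)),
              rng.standard_normal((D_h + D_im, V)))

print(inject_caption_scores(words, img, inj_params).shape)  # (20,)
print(merge_caption_scores(words, img, mrg_params).shape)   # (20,)
```

The only structural difference is where `img` is concatenated: inside the recurrence (inject) or after it (merge), which is exactly the architectural choice the paper compares.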
Where to put the image in an image caption generator
When a neural language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in a recurrent neural network (conditioning the language model by injecting image features) or in a layer following the recurrent neural network (conditioning the language model by merging the image features). While merging implies that visual features are bound at the end of the caption generation process, injecting can bind the visual features at a variety of stages. In this paper we empirically show that late binding is superior to early binding in terms of different evaluation metrics. This suggests that the different modalities (visual and linguistic) for caption generation should not be jointly encoded by the RNN; rather, the multimodal integration should be delayed to a subsequent stage. Furthermore, this suggests that recurrent neural networks should not be viewed as actually generating text, but only as encoding it for prediction in a subsequent layer.
peer-reviewed
LCT-MALTA's submission to RepEval 2017 shared task
We present in this paper our team LCT-MALTA's submission to the RepEval 2017 Shared Task on natural language inference. Our system is a simple one based on a standard BiLSTM architecture, using as input GloVe word embeddings augmented with further linguistic information. We use max pooling on the BiLSTM outputs to obtain embeddings for sentences. On both the matched and the mismatched test sets, our system clearly beats the shared task's BiLSTM baseline model.
peer-reviewed
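The sentence-embedding step described in this abstract (max pooling over bidirectional recurrent states) can be sketched in a few lines of numpy. This is an illustrative assumption, not the submission's code: a plain tanh RNN stands in for the LSTM cells for brevity, and all dimensions and weights are made up. The structure is the same, though: run the sequence left-to-right and right-to-left, concatenate the two state sequences per timestep, and take the elementwise max over time.

```python
import numpy as np

rng = np.random.default_rng(1)
D, H = 5, 7  # word-embedding and hidden sizes (illustrative)

# Separate parameters for the forward and backward directions.
Wf = (rng.standard_normal((D, H)), rng.standard_normal((H, H)))
Wb = (rng.standard_normal((D, H)), rng.standard_normal((H, H)))

def run_rnn(xs, Wx, Wh):
    # Plain tanh RNN over the sequence; returns all hidden states, shape (T, H).
    h, out = np.zeros(H), []
    for x in xs:
        h = np.tanh(x @ Wx + h @ Wh)
        out.append(h)
    return np.stack(out)

def sentence_embedding(xs):
    fwd = run_rnn(xs, *Wf)                       # left-to-right states
    bwd = run_rnn(xs[::-1], *Wb)[::-1]           # right-to-left states, realigned
    states = np.concatenate([fwd, bwd], axis=1)  # (T, 2H) bidirectional states
    return states.max(axis=0)                    # max pooling over time -> (2H,)

sentence = [rng.standard_normal(D) for _ in range(4)]
emb = sentence_embedding(sentence)
print(emb.shape)  # (14,)
```

Max pooling makes the embedding length-independent, so sentences of any length map to a fixed-size vector that a downstream classifier can consume.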