Learning a Recurrent Visual Representation for Image Caption Generation
In this paper we explore the bi-directional mapping between images and their
sentence-based descriptions. We propose learning this mapping using a recurrent
neural network. Unlike previous approaches that map both sentences and images
to a common embedding, we enable the generation of novel sentences given an
image. Using the same model, we can also reconstruct the visual features
associated with an image given its visual description. We use a novel recurrent
visual memory that automatically learns to remember long-term visual concepts
to aid in both sentence generation and visual feature reconstruction. We
evaluate our approach on several tasks. These include sentence generation,
sentence retrieval and image retrieval. State-of-the-art results are shown for
the task of generating novel image descriptions. When compared to human
generated captions, our automatically generated captions are preferred by
humans over of the time. Results are better than or comparable to
state-of-the-art results on the image and sentence retrieval tasks for methods
using similar visual features.
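
To make the bi-directional idea described above concrete, below is a minimal sketch in PyTorch. It is not the authors' implementation: the module name RecurrentVisualMemory, the use of a GRU cell as the recurrent "visual memory", the layer sizes, and the two output heads (next-word prediction for caption generation, feature regression for visual reconstruction) are illustrative assumptions about how a single recurrent state could serve both directions of the mapping.

import torch
import torch.nn as nn

class RecurrentVisualMemory(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512, visual_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRUCell(embed_dim, hidden_dim)           # recurrent "visual memory" state
        self.word_head = nn.Linear(hidden_dim, vocab_size)     # next-word prediction (generation)
        self.visual_head = nn.Linear(hidden_dim, visual_dim)   # visual-feature reconstruction
        self.init_from_image = nn.Linear(visual_dim, hidden_dim)

    def forward(self, words, image_features=None):
        # words: (batch, seq_len) word indices
        # image_features: (batch, visual_dim) when generating captions from an image,
        # or None when reconstructing visual features from a sentence alone
        batch, seq_len = words.shape
        if image_features is not None:
            h = torch.tanh(self.init_from_image(image_features))   # condition memory on the image
        else:
            h = words.new_zeros(batch, self.rnn.hidden_size, dtype=torch.float)
        word_logits, visual_preds = [], []
        for t in range(seq_len):
            h = self.rnn(self.embed(words[:, t]), h)   # update the memory with each word
            word_logits.append(self.word_head(h))      # used for sentence generation
            visual_preds.append(self.visual_head(h))   # used for feature reconstruction
        return torch.stack(word_logits, 1), torch.stack(visual_preds, 1)

# Usage sketch: with an image, the word logits drive caption generation; with only a
# sentence, the final visual prediction approximates the described image's features.
model = RecurrentVisualMemory()
words = torch.randint(0, 10000, (2, 7))
feats = torch.randn(2, 4096)
logits, recon = model(words, feats)
print(logits.shape, recon.shape)   # (2, 7, 10000) and (2, 7, 4096)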