CAPTION: Correction by Analyses, POS-Tagging and Interpretation of Objects using only Nouns
Recently, Deep Learning (DL) methods have shown excellent performance in
image captioning and visual question answering. However, despite their
performance, DL methods do not learn the semantics of the words that are being
used to describe a scene, making it difficult to spot incorrect words used in
captions or to interchange words that have similar meanings. This work proposes
a combination of DL methods for object detection and natural language
processing to validate image captions. We test our method on the FOIL-COCO
data set, since it provides correct and incorrect captions for various images
using only objects represented in the MS-COCO image data set. Results show that
our method achieves good overall performance, in some cases comparable to human
performance.
Comment: Published at the First Annual International Workshop on
Interpretability: Methodologies and algorithms (IMA 2019).
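The validation idea the abstract describes can be illustrated with a minimal sketch. This is not the paper's pipeline (which relies on trained object-detection and POS-tagging models); here the detected object labels and the caption's nouns are assumed to be given, and the function name and synonym map are hypothetical.

```python
def find_foil_nouns(caption_nouns, detected_objects, synonyms=None):
    """Return caption nouns that match no detected object (candidate errors).

    caption_nouns:    nouns extracted from the caption (e.g. via POS tagging)
    detected_objects: object labels produced by a detector on the image
    synonyms:         optional map from a noun to acceptable alternatives,
                      so near-synonyms are not flagged as incorrect
    """
    synonyms = synonyms or {}
    detected = set(detected_objects)
    foils = []
    for noun in caption_nouns:
        candidates = {noun} | set(synonyms.get(noun, []))
        if not candidates & detected:  # no overlap with detections
            foils.append(noun)
    return foils
```

For example, `find_foil_nouns(["dog", "frisbee"], ["cat", "frisbee"])` flags `"dog"` as a candidate incorrect word, mirroring the FOIL-COCO setting in which one noun in a caption is swapped for an object not present in the image.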
On Architectures for Including Visual Information in Neural Language Models for Image Description
A neural language model can be conditioned into generating descriptions for
images by providing visual information apart from the sentence prefix. This
visual information can be included into the language model through different
points of entry resulting in different neural architectures. We identify four
main architectures which we call init-inject, pre-inject, par-inject, and
merge.
We analyse these four architectures and conclude that the best performing one
is init-inject, which is when the visual information is injected into the
initial state of the recurrent neural network. We confirm this using both
automatic evaluation measures and human annotation.
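The contrast between the two best-known entry points can be sketched with a toy, untrained vanilla RNN. All names and sizes below are illustrative, not taken from the thesis, and pre-inject and par-inject are omitted for brevity: in init-inject the image vector becomes the RNN's initial hidden state, while in merge the image never enters the RNN and is only concatenated with each hidden state before the output layer.

```python
import numpy as np

rng = np.random.default_rng(0)
V, E, H, D = 10, 8, 16, 12  # vocab, embedding, hidden, image-feature sizes

# Toy, untrained parameters shared by both sketches
emb   = rng.normal(size=(V, E))
W_xh  = rng.normal(size=(E, H)) * 0.1
W_hh  = rng.normal(size=(H, H)) * 0.1
W_img = rng.normal(size=(D, H)) * 0.1            # init-inject: image -> h0
W_out_init  = rng.normal(size=(H, V)) * 0.1      # init-inject output layer
W_out_merge = rng.normal(size=(H + D, V)) * 0.1  # merge: [h; img] -> vocab

def run_rnn(tokens, h0):
    """Unroll a vanilla RNN over a token-id prefix, returning all states."""
    h, states = h0, []
    for t in tokens:
        h = np.tanh(emb[t] @ W_xh + h @ W_hh)
        states.append(h)
    return np.stack(states)

def init_inject(img, tokens):
    """Image enters once, as the RNN's initial hidden state."""
    h0 = np.tanh(img @ W_img)
    return run_rnn(tokens, h0) @ W_out_init  # per-step next-word logits

def merge(img, tokens):
    """Image bypasses the RNN; it joins each hidden state before output."""
    states = run_rnn(tokens, np.zeros(H))
    joined = np.concatenate([states, np.tile(img, (len(tokens), 1))], axis=1)
    return joined @ W_out_merge              # per-step next-word logits
```

Both functions map an image vector and a sentence prefix to a sequence of next-word logits; the architectural difference is purely where the visual information is attached.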
We then analyse how much influence the images have on each architecture. This
is done by measuring how different the output probabilities of a model are when
a partial sentence is combined with a completely different image from the one
it is meant to be combined with. We find that init-inject tends to quickly
become less influenced by the image as more words are generated. A different
architecture called merge, which is when the visual information is merged with
the recurrent neural network's hidden state vector prior to output, loses
visual influence much more slowly, suggesting that it would work better for
generating longer sentences.
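One way to operationalise this comparison (the exact measure used in the thesis may differ) is to score how far apart a model's next-word distributions are when the same sentence prefix is paired with the true image versus a mismatched one; a divergence near zero means the image has little remaining influence. The sketch below uses the Jensen-Shannon divergence for this purpose.

```python
import numpy as np

def softmax(logits):
    """Convert next-word logits to a probability distribution."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def jensen_shannon(p, q, eps=1e-12):
    """Symmetric divergence between two next-word distributions.

    0 when p == q; grows as the distributions diverge, so a larger value
    means the swapped image changed the model's predictions more.
    """
    p, q = p + eps, q + eps  # avoid log(0)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Tracking this divergence at each generated word position, averaged over many prefix/image pairs, would yield the kind of per-architecture influence curve the abstract describes.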
We also observe that the merge architecture allows its recurrent neural
network to be pre-trained on a text-only language modelling task (transfer
learning) rather than initialised randomly as usual. This results in even
better performance than the other architectures, provided that the source
language model is not too good at language modelling; otherwise it
overspecialises and becomes less effective at image description generation.
Our work opens up new avenues of research in neural architectures,
explainable AI, and transfer learning.
Comment: 145 pages, 41 figures, 15 tables, Doctoral thesis.