MultiSubs: A Large-scale Multimodal and Multilingual Dataset
This paper introduces a large-scale multimodal and multilingual dataset that
aims to facilitate research on grounding words to images in their contextual
usage in language. The dataset consists of images selected to unambiguously
illustrate concepts expressed in sentences from movie subtitles. The dataset is
a valuable resource as (i) the images are aligned to text fragments rather than
whole sentences; (ii) a text fragment within a sentence can be aligned to
multiple images; (iii) the sentences are free-form and close to real-world
language; (iv) the
parallel texts are multilingual. We set up a fill-in-the-blank game for humans
to evaluate the quality of the automatic image selection process used to build
our dataset. We show the utility of the dataset on two automatic tasks: (i)
fill-in-the-blank; (ii) lexical translation. Results of the human evaluation
and automatic models demonstrate that images can be a useful complement to the
textual context. The dataset will benefit research on the visual grounding of
words, especially in the context of free-form sentences, and can be obtained from
https://doi.org/10.5281/zenodo.5034604 under a Creative Commons licence.
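To make the fragment-to-image alignment and the fill-in-the-blank setup concrete, here is a minimal Python sketch of how such a record might be represented and masked. It is an illustration only: the field names (sentence, fragment_span, image_ids, translations), identifiers, and example data are hypothetical assumptions, not the released dataset's actual schema; see the Zenodo record for the real format.

```python
# A minimal sketch (not the released schema) of a MultiSubs-style record and
# how a fill-in-the-blank instance could be derived from it. All field names,
# identifiers, and example data below are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class MultiSubsExample:
    """One subtitle sentence with a text fragment grounded to images."""
    sentence: str                    # free-form source sentence
    fragment_span: Tuple[int, int]   # (start, end) character offsets of the fragment
    image_ids: List[str] = field(default_factory=list)  # one or more aligned images
    translations: Dict[str, str] = field(default_factory=dict)  # lang code -> parallel sentence


def to_fill_in_the_blank(ex: MultiSubsExample, blank: str = "___") -> Tuple[str, str]:
    """Mask the grounded fragment, returning (masked sentence, gold answer).

    The images aligned to the fragment (ex.image_ids) provide the visual
    context from which a model can try to predict the missing fragment.
    """
    start, end = ex.fragment_span
    masked = ex.sentence[:start] + blank + ex.sentence[end:]
    return masked, ex.sentence[start:end]


if __name__ == "__main__":
    ex = MultiSubsExample(
        sentence="He poured the wine into a glass.",
        fragment_span=(14, 18),                # the fragment "wine"
        image_ids=["img_00123", "img_04567"],  # multiple images per fragment
        translations={"fr": "Il a versé le vin dans un verre."},
    )
    masked, answer = to_fill_in_the_blank(ex)
    print(masked)  # He poured the ___ into a glass.
    print(answer)  # wine
```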
Comment: Manuscript update: (i) Added links to the dataset and evaluation
toolkit; (ii) Section 6.1.4: Added random and n-gram baselines to the
fill-in-the-blank task, and added further discussion at the end of the
section; (iii) Section 6.2.3: Further elaboration on the ALI metric; (iv)
Section 6.2.4: Corrected results for the lexical translation task (Table 8),
and updated the discussions accordingly.