Existing multimodal machine translation (MMT) datasets consist of images and
video captions or instructional video subtitles, which rarely contain
linguistic ambiguity, making visual information ineffective in generating
appropriate translations. Recent work has constructed an ambiguous-subtitles
dataset to alleviate this problem, but it is still limited in that its videos do
not necessarily contribute to disambiguation. We introduce EVA
(Extensive training set and Video-helpful evaluation set for Ambiguous
subtitles translation), an MMT dataset containing 852k Japanese-English (Ja-En)
parallel subtitle pairs, 520k Chinese-English (Zh-En) parallel subtitle pairs,
and corresponding video clips collected from movies and TV episodes. In
addition to the extensive training set, EVA contains a video-helpful evaluation
set in which subtitles are ambiguous and videos are guaranteed to be helpful for
disambiguation. Furthermore, we propose SAFA, an MMT model based on the
Selective Attention model with two novel methods: Frame attention loss and
Ambiguity augmentation, aiming to make full use of the videos in EVA for disambiguation.
Experiments on EVA show that visual information and the proposed methods can
boost translation performance, and our model performs significantly better than
existing MMT models. The EVA dataset and the SAFA model are available at:
https://github.com/ku-nlp/video-helpful-MMT.git.

Accepted by EMNLP 2023 Main Conference (long paper).
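SAFA's exact architecture is defined in the paper and repository; as a rough, hypothetical illustration of the two named ingredients (a selective, gated text-to-frame attention and an auxiliary frame attention loss), a minimal PyTorch-style sketch might look like the following. All module names, shapes, and the KL-based loss form here are assumptions for illustration, not the authors' implementation; ambiguity augmentation (a data-level technique) is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFrameAttention(nn.Module):
    """Hypothetical selective attention from subtitle tokens to video frames.

    text_feats: (B, T, d_model) encoder states of the source subtitle.
    frame_feats: (B, F, d_frame) per-frame video features from a visual encoder.
    Names and shapes are illustrative assumptions only.
    """

    def __init__(self, d_model: int, d_frame: int):
        super().__init__()
        self.k = nn.Linear(d_frame, d_model)
        self.v = nn.Linear(d_frame, d_model)
        self.gate = nn.Linear(2 * d_model, 1)

    def forward(self, text_feats, frame_feats):
        k, v = self.k(frame_feats), self.v(frame_feats)
        # Scaled dot-product attention: each subtitle token attends over frames.
        scores = text_feats @ k.transpose(1, 2) / text_feats.size(-1) ** 0.5  # (B, T, F)
        attn = scores.softmax(dim=-1)
        visual = attn @ v                                                     # (B, T, d_model)
        # Selective gate: how much visual context each token incorporates.
        g = torch.sigmoid(self.gate(torch.cat([text_feats, visual], dim=-1)))
        fused = text_feats + g * visual
        return fused, attn


def frame_attention_loss(attn, helpful_mask, eps=1e-9):
    """Auxiliary loss pushing attention mass toward frames marked as helpful.

    attn: (B, T, F) token-to-frame attention; helpful_mask: (B, F) floats in {0, 1}.
    A KL-divergence surrogate chosen for illustration, not the paper's objective.
    """
    target = helpful_mask / helpful_mask.sum(dim=-1, keepdim=True).clamp(min=1)
    pooled = attn.mean(dim=1)  # average over subtitle tokens -> (B, F)
    return F.kl_div((pooled + eps).log(), target, reduction="batchmean")
```

In such a setup, a training step would combine the usual translation cross-entropy with the auxiliary term, e.g. `loss = ce_loss + lam * frame_attention_loss(attn, mask)`, where `lam` is a weighting hyperparameter introduced here purely for illustration.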