Cross-lingual cross-modal retrieval, which aims to align vision with a target
language (V-T) without any annotated V-T data pairs, has recently garnered
increasing attention. Current methods
employ machine translation (MT) to construct pseudo-parallel data pairs, which
are then used to learn a multi-lingual and multi-modal embedding space that
aligns visual and target-language representations. However, the large
heterogeneity gap between vision and text, together with the noise in
machine-translated target-language text, makes it difficult to align their
representations effectively. To address these challenges, we propose a
general framework, Cross-Lingual to Cross-Modal (CL2CM), which improves the
alignment between vision and target language using cross-lingual transfer. This
approach allows us to fully leverage the strengths of multi-lingual pre-trained
models (e.g., mBERT) and the smaller gap that comes from aligning within the
same (textual) modality, providing reliable and comprehensive semantic
correspondence (knowledge) for the cross-modal network. We evaluate our
proposed approach on
two multilingual image-text datasets, Multi30K and MSCOCO, and one video-text
dataset, VATEX. The results clearly demonstrate the effectiveness of our
proposed method and its high potential for large-scale retrieval.
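
The abstract gives no implementation details, so the following is only a
minimal PyTorch sketch of the kind of objective it describes: a contrastive
loss aligning vision with source-language text, a cross-lingual contrastive
loss aligning source- and target-language text, and a distillation term that
transfers the cross-lingual similarity structure to the vision/target-language
pairing. All names (info_nce, cl2cm_step, distill_weight) and the specific
loss combination are illustrative assumptions, not the authors' method.

    import torch
    import torch.nn.functional as F

    def info_nce(a, b, temperature=0.05):
        # Symmetric InfoNCE over a batch of paired embeddings (B, D).
        a = F.normalize(a, dim=-1)
        b = F.normalize(b, dim=-1)
        logits = a @ b.t() / temperature  # (B, B) similarity matrix
        targets = torch.arange(a.size(0), device=a.device)
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    def cl2cm_step(v_emb, src_emb, tgt_emb, temperature=0.05,
                   distill_weight=1.0):
        # Hypothetical training objective:
        #   loss_cm: vision <-> source-language text (cross-modal)
        #   loss_cl: source <-> target-language text (cross-lingual)
        #   loss_kd: distill the cleaner cross-lingual similarities
        #            into the vision/target-language alignment
        loss_cm = info_nce(v_emb, src_emb, temperature)
        loss_cl = info_nce(src_emb, tgt_emb, temperature)

        with torch.no_grad():
            teacher = F.softmax(
                F.normalize(src_emb, dim=-1)
                @ F.normalize(tgt_emb, dim=-1).t() / temperature, dim=-1)
        student = F.log_softmax(
            F.normalize(v_emb, dim=-1)
            @ F.normalize(tgt_emb, dim=-1).t() / temperature, dim=-1)
        loss_kd = F.kl_div(student, teacher, reduction="batchmean")

        return loss_cm + loss_cl + distill_weight * loss_kd

Under this reading, the cross-lingual branch acts as a teacher: because both
of its inputs are text, its batch similarity matrix is more reliable than the
vision/target-language one, and the KL term propagates that structure across
the modality gap.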