Vision-language pre-training (VLP) methods have flourished in recent years;
their central goal is to jointly learn visual and textual features via a
transformer-based architecture, yielding promising improvements on a variety
of vision-language tasks. Prior work usually focuses on how to align visual
and textual features, but strategies for improving model robustness and
speeding up model convergence remain insufficiently explored.
In this paper, we propose ViLTA, a novel method comprising two components
that further help the model learn fine-grained representations from
image-text pairs. For Masked Language Modeling (MLM), we propose a
cross-distillation method that generates soft labels to enhance model
robustness, alleviating the problem that one-hot labels treat synonyms of
the masked word as negative samples.
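To make the soft-label idea concrete, below is a minimal sketch of such a
loss in PyTorch (not the paper's exact formulation; the function name and
the temperature/alpha hyperparameters are illustrative assumptions). It
mixes the usual one-hot MLM cross-entropy with a distillation term against
a teacher distribution, so plausible alternatives to the masked word
receive probability mass instead of being penalized as pure negatives:

    import torch
    import torch.nn.functional as F

    def soft_label_mlm_loss(student_logits, teacher_logits, labels,
                            temperature=1.0, alpha=0.5, ignore_index=-100):
        # student_logits / teacher_logits: (batch, seq_len, vocab_size)
        # labels: (batch, seq_len); ignore_index marks unmasked positions.
        mask = labels != ignore_index          # keep masked positions only
        s = student_logits[mask]               # (n_masked, vocab_size)
        t = teacher_logits[mask]

        # Soft targets: a temperature-scaled teacher distribution, so
        # synonyms of the masked word get probability mass instead of
        # being treated as negatives under a one-hot label.
        soft = F.softmax(t / temperature, dim=-1)
        distill = F.kl_div(F.log_softmax(s / temperature, dim=-1),
                           soft, reduction="batchmean") * temperature ** 2

        # Standard one-hot cross-entropy on the same masked positions.
        hard = F.cross_entropy(s, labels[mask])
        return alpha * distill + (1.0 - alpha) * hard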
For Image-Text Matching (ITM), we leverage the current language encoder to
synthesize hard negatives conditioned on the context of the language input,
encouraging the model to learn high-quality representations by increasing
the difficulty of the ITM task.
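As one plausible instantiation of this idea (a sketch assuming a
HuggingFace-style masked language model; the helper name, mask_pos, and
top_k are hypothetical, not the paper's implementation), a single caption
token can be masked and replaced with a high-probability but incorrect
prediction, yielding a caption that reads naturally yet no longer matches
the image:

    import torch

    @torch.no_grad()
    def synthesize_hard_negative(mlm_model, tokenizer, input_ids,
                                 mask_pos, top_k=10):
        # Mask one caption token and let the language encoder propose a
        # fluent but different replacement.
        original = input_ids[0, mask_pos].item()
        corrupted = input_ids.clone()
        corrupted[0, mask_pos] = tokenizer.mask_token_id

        logits = mlm_model(input_ids=corrupted).logits[0, mask_pos]
        for cand in logits.topk(top_k).indices.tolist():
            if cand != original:            # skip the ground-truth token
                corrupted[0, mask_pos] = cand
                return corrupted
        return corrupted  # fallback: every candidate equals the original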
By leveraging these techniques, our ViLTA achieves better performance on a
variety of vision-language tasks. Extensive experiments on benchmark
datasets demonstrate the
effectiveness of ViLTA and its promising potential for vision-language
pre-training.