Contrastive Language-Image Pre-training (CLIP) has significantly boosted the
performance of various vision-language tasks by scaling up the dataset with
image-text pairs collected from the web. However, the presence of intrinsic
noise and mismatched image-text pairs in web data can degrade representation
learning. To address this issue, we first utilize
the OFA model to generate synthetic captions that focus on the image content.
The generated captions contain complementary information that is beneficial for
pre-training. We then propose Adaptive Language-Image Pre-training (ALIP), a
bi-path model that integrates supervision from both raw text and synthetic
captions. As the core components of ALIP, the Language Consistency Gate (LCG)
and Description Consistency Gate (DCG) dynamically adjust the weights of
samples and image-text/caption pairs during the training process. Meanwhile,
the adaptive contrastive loss effectively reduces the impact of noisy data and
improves the efficiency of the pre-training data. We validate ALIP with
experiments on different scales of models and pre-training datasets.
Experimental results show that ALIP achieves state-of-the-art performance on
multiple downstream tasks including zero-shot image-text retrieval and linear
probe. To facilitate future research, the code and pre-trained models are
released at https://github.com/deepglint/ALIP.
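
As a rough illustration of the kind of weighted contrastive objective the abstract describes, the sketch below scales a standard symmetric InfoNCE loss by per-pair weights. The `weighted_clip_loss` function and its `weights` argument are hypothetical stand-ins for the outputs of the LCG/DCG gates; this is a minimal sketch under those assumptions, not the paper's released implementation.

```python
# Illustrative sketch: a sample-weighted symmetric contrastive (InfoNCE) loss.
# The `weights` tensor stands in for per-pair weights that consistency gates
# might produce; its computation is assumed, not taken from the paper.
import torch
import torch.nn.functional as F

def weighted_clip_loss(image_emb, text_emb, weights, temperature=0.07):
    """image_emb, text_emb: (N, D) L2-normalized embeddings.
    weights: (N,) per-pair weights in [0, 1] (assumed gate outputs)."""
    logits = image_emb @ text_emb.t() / temperature           # (N, N) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets, reduction="none")      # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets, reduction="none")  # text -> image
    per_pair = 0.5 * (loss_i2t + loss_t2i)
    # Down-weight pairs judged noisy or mismatched by the gates.
    return (weights * per_pair).sum() / weights.sum().clamp(min=1e-6)

if __name__ == "__main__":
    # Toy usage with random embeddings and uniform weights.
    img = F.normalize(torch.randn(8, 512), dim=-1)
    txt = F.normalize(torch.randn(8, 512), dim=-1)
    w = torch.ones(8)
    print(weighted_clip_loss(img, txt, w).item())
```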