Video-and-language pre-training has shown promising results for learning
generalizable representations. Most existing approaches model video and
text implicitly, without considering explicit structural representations of
the multi-modal content. We refer to such representations as structural
knowledge, which expresses rich semantics at multiple granularities. Related
works have proposed object-aware approaches that inject similar knowledge as
model inputs. However, existing methods fail to effectively exploit such
knowledge as regularization to shape a better cross-modal representation
space. To this end, we propose a
Cross-modaL knOwledge-enhanced Pre-training (CLOP) method with Knowledge
Regularizations. Our method has two key designs: 1) a simple yet effective
Structural Knowledge Prediction (SKP) task to pull together the latent
representations of similar videos; and 2) a novel Knowledge-guided sampling
approach for Contrastive Learning (KCL) to push apart cross-modal hard negative
samples. We evaluate our method on four text-video retrieval tasks and one
multi-choice QA task. Experiments show clear improvements, outperforming
prior works by a substantial margin. In addition, we provide ablations and
insights into how our method shapes the latent representation space, demonstrating the
value of incorporating knowledge regularizations into video-and-language
pre-training.

Comment: ACM Multimedia 2022 (MM'22)
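The KCL design described above can be illustrated as an InfoNCE-style contrastive loss in which in-batch negatives that are close in a knowledge space are treated as hard negatives and up-weighted. The sketch below is a minimal illustration under that reading, not the paper's implementation; the function name, the `knowledge_sim` matrix, and the `alpha` weighting parameter are all assumptions introduced here.

```python
import numpy as np

def knowledge_weighted_info_nce(video_emb, text_emb, knowledge_sim,
                                temperature=0.07, alpha=1.0):
    """Hypothetical sketch: InfoNCE over a batch of (video, text) pairs
    where each negative's contribution is scaled by its knowledge
    similarity to the anchor, so knowledge-similar (hard) negatives
    are pushed apart more strongly. `alpha` (assumed) controls the
    strength of this knowledge-guided up-weighting."""
    # L2-normalize so dot products are cosine similarities.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature              # (B, B); diagonal = positives

    # Up-weight negatives by knowledge similarity; positives keep weight 1.
    weights = 1.0 + alpha * knowledge_sim
    np.fill_diagonal(weights, 1.0)

    # Numerically stable weighted softmax over each row.
    exp_logits = np.exp(logits - logits.max(axis=1, keepdims=True)) * weights
    log_prob_pos = np.log(np.diag(exp_logits) / exp_logits.sum(axis=1))
    return -log_prob_pos.mean()
```

With `knowledge_sim` all zeros this reduces to the standard InfoNCE loss; as `alpha` grows, knowledge-similar negatives dominate the denominator, which mimics sampling hard negatives more often.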