Despite being (pre)trained on a massive amount of data, state-of-the-art
video-language alignment models are not robust to semantically-plausible
contrastive changes in the video captions. Our work addresses this by
identifying a broad spectrum of contrast misalignments, such as replacing
entities or actions and flipping event order, which alignment models should be
robust against. To this end, we introduce VideoCon, a video-language
alignment dataset constructed by a large language model that generates
plausible contrast video captions and explanations for the differences between
original and contrast video captions. We then finetune a generative
video-language model with VideoCon to assess video-language entailment and generate
explanations. Our VideoCon-based alignment model significantly outperforms
current models. It exhibits a 12-point increase in AUC for the video-language
alignment task on human-generated contrast captions. Finally, our model achieves
new state-of-the-art zero-shot performance on temporally-extensive
video-language tasks such as text-to-video retrieval (SSv2-Temporal) and video
question answering (ATP-Hard). Moreover, our model shows superior performance
on novel videos and human-crafted captions and explanations. Our code and data
are available at https://github.com/Hritikbansal/videocon.