Large-scale multi-modal contrastive learning frameworks like CLIP typically
require vast numbers of image-text pairs for training, yet in real-world
scenarios such samples are often collected continuously. This paper
discusses the feasibility of continual CLIP training using streaming data.
Unlike continual learning with self-supervised methods on pure images, which
is empirically robust to catastrophic forgetting, CLIP suffers significant and
non-negligible performance degradation in the continual setting. By analyzing
the changes in the model's representation space during continual CLIP training
from a spatial-geometry perspective, we summarize these spatial variations as
Spatial Disorder (SD), which can be divided into Intra-modal Rotation and
Inter-modal Deviation. Moreover, we
empirically and theoretically demonstrate how SD leads to a performance decline
for CLIP on cross-modal retrieval tasks. To alleviate SD, we propose a new
continual vision-language representation learning framework, Mod-X: Maintain
off-diagonal information-matriX. By selectively aligning the off-diagonal
information distributions of contrastive matrices, Mod-X preserves the
alignment of the multi-modal representation space on the old data domain while
the model continually fits the new training data domain. Experiments on
commonly used datasets of different scales and scopes demonstrate the
effectiveness of our method.
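To make the off-diagonal alignment concrete, below is a minimal PyTorch sketch
of a distillation-style loss in the spirit of Mod-X. The names
(`mod_x_alignment_loss`, `contrastive_matrix`), the temperature value, and the
selection rule that trusts only rows the old model still retrieves correctly
are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_matrix(img_emb, txt_emb, temperature=0.07):
    """Cosine-similarity logits between image and text embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    return img_emb @ txt_emb.t() / temperature

def mod_x_alignment_loss(cur_img, cur_txt, old_img, old_txt, temperature=0.07):
    # Contrastive matrices of the same batch under the current model and
    # the frozen model from the previous training phase.
    logits_cur = contrastive_matrix(cur_img, cur_txt, temperature)
    logits_old = contrastive_matrix(old_img, old_txt, temperature)

    # Selective alignment (assumed criterion): only distill rows where the
    # old model still ranks the matching pair highest, i.e. rows on which
    # its knowledge of the old data domain is presumably reliable.
    batch = logits_old.size(0)
    keep = logits_old.argmax(dim=1) == torch.arange(batch, device=logits_old.device)
    if not keep.any():
        return logits_cur.new_zeros(())

    # KL divergence between row-wise softmax distributions: the off-diagonal
    # entries carry the inter-sample similarity structure we want to maintain.
    p_old = F.softmax(logits_old[keep], dim=1)
    log_p_cur = F.log_softmax(logits_cur[keep], dim=1)
    return F.kl_div(log_p_cur, p_old, reduction="batchmean")
```

In training, a term like this would be added to the standard CLIP contrastive
loss on the current data, so the model fits the new domain while the
distribution of off-diagonal similarities on each batch stays anchored to the
previous phase.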