Online Lifelong Learning (OLL) addresses the challenge of learning from
continuous, non-stationary data streams. Existing online lifelong learning
methods built on image classification models often require preset conditions,
such as the total number of classes or a maximum memory capacity, which
prevents truly never-ending learning and makes them impractical for
real-world scenarios. In this work, we propose that vision-language models,
such as Contrastive Language-Image Pretraining (CLIP), are more suitable
candidates for online lifelong learning. We discover that maintaining symmetry
between the image and text modalities is crucial during Parameter-Efficient
Tuning (PET) of the CLIP model in online lifelong learning. To this end, we
introduce the Symmetric
Image-Text (SIT) tuning strategy. We conduct extensive experiments on multiple
lifelong learning benchmark datasets and elucidate the effectiveness of SIT
through gradient analysis. Additionally, we assess the impact of lifelong
learning on the generalizability of CLIP and find that tuning the image
encoder is beneficial for lifelong learning, while tuning the text encoder
aids zero-shot learning.
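
The abstract does not specify how SIT parameterizes the two encoders, but the core idea of symmetric parameter-efficient tuning can be illustrated with a minimal sketch. The snippet below assumes a LoRA-style adapter (one common PET choice, not necessarily the paper's) attached identically, with the same rank, to both the image and text towers of a CLIP-like model, so gradients from the contrastive loss flow symmetrically into both modalities. The toy encoders and all names here are hypothetical stand-ins, not the paper's implementation.

```python
# Minimal sketch of symmetric parameter-efficient tuning for a CLIP-like
# model. NOT the paper's SIT method; a LoRA-style adapter is assumed for
# concreteness, and toy encoders replace a real pretrained CLIP checkpoint.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank residual update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)     # keep the pretrained weights frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# Hypothetical stand-ins for CLIP's image and text encoders.
image_encoder = nn.Sequential(nn.Linear(512, 512), nn.GELU(), nn.Linear(512, 256))
text_encoder  = nn.Sequential(nn.Linear(512, 512), nn.GELU(), nn.Linear(512, 256))

# Symmetric tuning: freeze both towers, then wrap the SAME layer in each
# with an adapter of the SAME rank.
for encoder in (image_encoder, text_encoder):
    for p in encoder.parameters():
        p.requires_grad_(False)
    encoder[0] = LoRALinear(encoder[0], rank=4)

params = [p for p in list(image_encoder.parameters())
                   + list(text_encoder.parameters()) if p.requires_grad]
optimizer = torch.optim.AdamW(params, lr=1e-3)

# One CLIP-style contrastive step on a dummy batch of paired features.
img, txt = torch.randn(8, 512), torch.randn(8, 512)
zi = F.normalize(image_encoder(img), dim=-1)
zt = F.normalize(text_encoder(txt), dim=-1)
logits = zi @ zt.t() / 0.07             # fixed temperature for the sketch
labels = torch.arange(8)
loss = (F.cross_entropy(logits, labels)
        + F.cross_entropy(logits.t(), labels)) / 2
loss.backward()                          # gradients reach only the adapters
optimizer.step()
```

Because both towers carry identical adapters, each contrastive update perturbs the image and text branches by comparably structured low-rank terms; an asymmetric setup (adapters in only one tower) would instead shift one embedding space relative to the other as the stream progresses.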