Continual learning (CL) aims to constantly learn new knowledge over time
while avoiding catastrophic forgetting on old tasks. We focus on continual text
classification under the class-incremental setting. Recent CL studies have
identified the severe performance degradation on analogous classes as a key factor
for catastrophic forgetting. In this paper, through an in-depth exploration of
the representation learning process in CL, we discover that the compression
effect of the information bottleneck leads to confusion on analogous classes.
To enable the model to learn more sufficient representations, we propose a novel
replay-based continual text classification method, InfoCL. Our approach
utilizes fast-slow and current-past contrastive learning to perform mutual
information maximization and better recover the previously learned
representations. In addition, InfoCL incorporates an adversarial memory
augmentation strategy to alleviate the overfitting problem of replay.
Experimental results demonstrate that InfoCL effectively mitigates forgetting
and achieves state-of-the-art performance on three text classification tasks.
The code is publicly available at https://github.com/Yifan-Song793/InfoCL.
Comment: Findings of EMNLP 2023. An improved version of arXiv:2305.0728
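For readers unfamiliar with contrastive mutual information maximization, the sketch below shows a standard InfoNCE-style loss of the kind such methods build on. It is an illustrative example only, not the paper's exact fast-slow or current-past formulation; the function name, temperature value, and the pairing of two encoders' outputs are assumptions made for the illustration.

```python
# Illustrative sketch (not InfoCL's exact objective): an InfoNCE-style
# contrastive loss, a common lower bound used for mutual information
# maximization between two views/encodings of the same instances.
import torch
import torch.nn.functional as F


def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: [batch, dim] representations of the same inputs from two
    encoders (e.g., a "fast" and a "slow" model). Pairs at the same index
    are positives; all other pairs in the batch serve as negatives."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                  # [batch, batch] similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)               # pulls positive pairs together
```

Minimizing this loss encourages the two encodings of the same example to agree while pushing apart encodings of different examples, which is how contrastive objectives maximize a lower bound on mutual information.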