Fixed Random Classifier Rearrangement for Continual Learning
With the explosive growth of data, continual learning capability is
increasingly important for neural networks. Due to catastrophic forgetting,
neural networks inevitably forget the knowledge of old tasks after learning new
ones. In visual classification scenarios, a common practice for alleviating
forgetting is to constrain the backbone. However, the impact of the classifier is
underestimated. In this paper, we analyze the variation of model predictions in
sequential binary classification tasks and find that the norm of the equivalent
one-class classifiers significantly affects the forgetting level. Based on this
conclusion, we propose a two-stage continual learning algorithm named Fixed
Random Classifier Rearrangement (FRCR). In the first stage, FRCR replaces the
learnable classifiers with fixed random classifiers, constraining the norm of
the equivalent one-class classifiers without affecting the performance of the
network. In the second stage, FRCR rearranges the entries of the new classifiers to
implicitly reduce the drift of old latent representations. The experimental
results on multiple datasets show that FRCR significantly mitigates model
forgetting, and subsequent experimental analyses further validate the
effectiveness of the algorithm.
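To make the two stages concrete, here is a minimal PyTorch sketch. The abstract does not specify the random initialization scale, the rearrangement criterion, or whether individual entries or whole rows are permuted, so `FixedRandomClassifier`, `rearrange_new_classifier`, and the cosine-similarity ordering below are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

class FixedRandomClassifier(nn.Module):
    """Stage one (sketch): a linear classifier whose weights are drawn
    once at random and then frozen, so only the backbone is trained.
    Fixing the weights bounds the norm of the equivalent one-class
    classifiers, which the abstract links to the forgetting level."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        # Hypothetical 1/sqrt(d) scale; the paper's scale is not given here.
        w = torch.randn(num_classes, feat_dim) / feat_dim ** 0.5
        # A buffer is saved with the model but never passed to the optimizer.
        self.register_buffer("weight", w)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return features @ self.weight.t()


def rearrange_new_classifier(new_w: torch.Tensor,
                             old_w: torch.Tensor) -> torch.Tensor:
    """Stage two (sketch): reorder the rows of the new task's fixed
    classifier so that rows least similar to the old classifiers come
    first. This is a hypothetical stand-in for the paper's rearrangement
    rule, which aims to reduce drift of old latent representations."""
    old = nn.functional.normalize(old_w, dim=1)
    new = nn.functional.normalize(new_w, dim=1)
    # Score each candidate row by its worst-case cosine similarity
    # to any old classifier row.
    sim = (new @ old.t()).abs().max(dim=1).values
    order = torch.argsort(sim)  # least-interfering rows first
    return new_w[order]
```

Freezing via a buffer (rather than setting `requires_grad=False` on an `nn.Parameter`) also keeps the fixed weights out of optimizer state, which matches the intent of stage one as described in the abstract.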