Learning from changing tasks and sequential experience without forgetting previously acquired knowledge is a challenging problem for artificial neural networks. In this work, we focus on two problems in the paradigm of Continual Learning (CL) without access to any old data: (i) the accumulation of catastrophic forgetting caused by the gradually fading knowledge space from which the model learns previous knowledge; (ii) the uncontrolled tug-of-war dynamics between stability and plasticity during the learning of new tasks. In order to tackle these problems, we present Progressive Learning
without Forgetting (PLwF) and a credit assignment regime in the optimizer. PLwF
densely introduces model functions from previous tasks to construct a knowledge space that contains both the most reliable knowledge of each task and the distributional information across different tasks, while credit assignment controls the tug-of-war dynamics by removing gradient conflicts through projection.
Extensive ablation experiments demonstrate the effectiveness of PLwF and credit assignment. In comparison with other CL methods, we report notably better results even without relying on any raw data.
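The credit assignment described above resolves conflicts between the gradients of old and new objectives by projection. Below is a minimal, hypothetical sketch of that general idea (projecting the new-task gradient onto the orthogonal complement of a conflicting old-task gradient, in the spirit of PCGrad-style methods); the exact rule used by PLwF is defined in the paper, and all function and variable names here are illustrative.

```python
# Illustrative sketch only, not the paper's exact credit-assignment rule.
# Assumed names: project_away_conflict, g_new, g_old are hypothetical.
import torch

def project_away_conflict(g_new: torch.Tensor, g_old: torch.Tensor,
                          eps: float = 1e-12) -> torch.Tensor:
    """Return g_new with its component that conflicts with g_old removed.

    If the two flattened gradients oppose each other (negative inner
    product), project g_new onto the plane orthogonal to g_old;
    otherwise leave it unchanged.
    """
    dot = torch.dot(g_new, g_old)
    if dot < 0:
        g_new = g_new - dot / (g_old.norm() ** 2 + eps) * g_old
    return g_new

# Toy usage with flattened gradients of a shared parameter vector.
g_task_new = torch.tensor([1.0, -2.0, 0.5])   # gradient of the new-task loss
g_task_old = torch.tensor([-0.5, 1.0, 1.0])   # gradient of the old-knowledge loss
g_update = project_away_conflict(g_task_new, g_task_old)
```

Projecting only when the inner product is negative leaves the update untouched whenever the two objectives already agree, so plasticity is sacrificed only where stability actually requires it.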