Using task-specific components within a neural network in continual learning
(CL) is a compelling strategy to address the stability-plasticity dilemma in
fixed-capacity models without access to past data. Current methods focus only
on selecting a sub-network for a new task that reduces forgetting of past
tasks. However, this selection could limit the forward transfer of relevant
past knowledge that helps in future learning. Our study reveals that satisfying
both objectives jointly is more challenging when a unified classifier is used
for all classes of seen tasks, i.e., class-incremental learning (class-IL), as it is
prone to ambiguities between classes across tasks. Moreover, the challenge
increases when the semantic similarity of classes across tasks increases. To
address this challenge, we propose a new CL method, named AFAF, that aims to
Avoid Forgetting and Allow Forward transfer in class-IL using fixed-capacity
models. AFAF allocates a sub-network that enables selective transfer of
relevant knowledge to a new task while preserving past knowledge, reusing some
of the previously allocated components to utilize the fixed capacity, and
addressing class ambiguities when such similarities exist. The experiments show the
effectiveness of AFAF in providing models with multiple desirable CL
properties, while outperforming state-of-the-art methods on various challenging
benchmarks with different semantic similarities.

Comment: Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2022).
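For illustration only, the PyTorch sketch below shows the general idea of task-specific sub-networks within a fixed-capacity model: each task receives a binary mask over hidden units that reuses some previously allocated units (selective transfer) and claims some free ones, while all tasks share a single class-IL head. The class and method names (`TaskMaskedMLP`, `allocate_task`) are hypothetical and this is not the authors' AFAF implementation; in particular, the gradient masking needed to actually freeze past-task weights during training is omitted.

```python
# Minimal sketch of per-task sub-network masks in a fixed-capacity network.
# Hypothetical names; not the AFAF paper's actual code.
import torch
import torch.nn as nn


class TaskMaskedMLP(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)   # unified class-IL head
        self.task_masks = {}                             # task id -> mask over hidden units
        self.used = torch.zeros(hidden_dim)              # units owned by past tasks

    def allocate_task(self, task_id, n_new, n_reuse):
        """Allocate a sub-network: reuse some old units, claim some free ones."""
        used_idx = self.used.nonzero(as_tuple=True)[0]
        free_idx = (self.used == 0).nonzero(as_tuple=True)[0]
        mask = torch.zeros_like(self.used)
        mask[used_idx[:n_reuse]] = 1.0                   # selective transfer of past knowledge
        mask[free_idx[:n_new]] = 1.0                     # new units within the fixed capacity
        self.used[free_idx[:n_new]] = 1.0                # mark newly claimed units as owned
        self.task_masks[task_id] = mask

    def forward(self, x, task_id):
        h = torch.relu(self.fc1(x)) * self.task_masks[task_id]  # apply the task's sub-network
        return self.fc2(h)                               # all tasks share one classifier


# Usage example (shapes are arbitrary):
model = TaskMaskedMLP(in_dim=784, hidden_dim=256, num_classes=10)
model.allocate_task(task_id=0, n_new=64, n_reuse=0)      # first task: only fresh units
model.allocate_task(task_id=1, n_new=32, n_reuse=16)     # second task: reuse 16 old units
logits = model(torch.randn(8, 784), task_id=1)
```

Because the shared head scores all seen classes jointly, any overlap or semantic similarity between reused and new units directly feeds the cross-task class ambiguity the abstract describes, which is why the allocation step, not just the forgetting penalty, matters in class-IL.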