Class-incremental learning for semantic segmentation (CiSS) is an actively
researched field that aims to update a semantic segmentation model by
sequentially learning new semantic classes. A major challenge in CiSS is
overcoming the effects of catastrophic forgetting, i.e., the sudden drop in
accuracy on previously learned classes after the model is trained on a new set
of classes. Despite recent advances in mitigating catastrophic
forgetting, the underlying causes of forgetting specifically in CiSS are not
well understood. Therefore, in a set of experiments and representational
analyses, we demonstrate that the semantic shift of the background class and a
bias towards new classes are the major causes of forgetting in CiSS.
Furthermore, we show that both causes manifest themselves mostly in the deeper
classification layers of the network, while the early layers of the model are
not affected. Finally, we demonstrate how both causes are effectively mitigated
by exploiting the information contained in the background, with the help of
knowledge distillation and an unbiased cross-entropy loss.
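
As an illustration of the second ingredient, the sketch below shows one common formulation of an unbiased cross-entropy loss in PyTorch: pixels labelled as background in the current step may in fact belong to previously learned classes, so the probability mass of the background and the old classes is pooled before computing the negative log-likelihood. The function name, the num_old argument, and the label convention (index 0 = background, followed by old and then new classes) are illustrative assumptions, not necessarily the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def unbiased_cross_entropy(logits: torch.Tensor,
                           labels: torch.Tensor,
                           num_old: int) -> torch.Tensor:
    """Cross-entropy that pools background and old-class probabilities.

    logits: (B, C, H, W) scores over background + old + new classes.
    labels: (B, H, W) ground-truth indices; 0 = background,
            1..num_old = old classes, num_old+1.. = new classes.
    """
    log_p = F.log_softmax(logits, dim=1)
    # Background "absorbs" the old classes: log(p_bkg + sum over old classes of p_c).
    log_bkg = torch.logsumexp(log_p[:, :num_old + 1], dim=1, keepdim=True)
    # New classes keep their ordinary log-probabilities.
    log_q = torch.cat([log_bkg, log_p[:, num_old + 1:]], dim=1)
    # Remap labels to the pooled space: background/old -> 0, new classes shift down.
    remapped = torch.where(labels <= num_old,
                           torch.zeros_like(labels),
                           labels - num_old)
    return F.nll_loss(log_q, remapped)
```

A knowledge-distillation term (e.g., a divergence between the predictions of the old and the new model on the current images) would typically be added on top of such a loss; it is omitted here for brevity.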