    Does Continual Learning = Catastrophic Forgetting?

    Continual learning is known for suffering from catastrophic forgetting, a phenomenon in which earlier learned concepts are forgotten in favor of more recent samples. In this work, we challenge the assumption that continual learning is inevitably associated with catastrophic forgetting by presenting a set of tasks that, surprisingly, do not suffer from catastrophic forgetting when learned continually. We provide evidence that these reconstruction-type tasks exhibit positive forward transfer and that single-view 3D shape reconstruction improves performance on both learned and novel categories over time. We provide a novel analysis of knowledge transfer ability by examining the output distribution shift across sequential learning tasks. Finally, we show that the robustness of these tasks suggests the potential of using reconstruction as a proxy representation-learning task for continual classification. The codebase, dataset, and pre-trained models released with this article can be found at https://github.com/rehg-lab/CLRec
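
    The sequential-learning protocol implied by the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' released code at https://github.com/rehg-lab/CLRec): a model is trained on a series of tasks one after another, and after each stage the loss on every previously seen task is re-measured, so rising losses on earlier tasks would signal catastrophic forgetting while stable or improving losses would indicate forward transfer. All names, dimensions, and the synthetic data below are illustrative assumptions.

    # Hypothetical sketch of continual training across a sequence of tasks,
    # re-evaluating all previously seen tasks after each stage.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    def make_task(n=256, dim=16, seed=0):
        # Synthetic stand-in for one task's data (e.g. one object category).
        g = torch.Generator().manual_seed(seed)
        x = torch.randn(n, dim, generator=g)
        y = torch.randn(n, dim, generator=g)  # reconstruction-style target
        return TensorDataset(x, y)

    def evaluate(model, dataset):
        # Reconstruction loss on a whole task; higher values after later
        # stages would indicate forgetting of that task.
        model.eval()
        with torch.no_grad():
            x, y = dataset.tensors
            return nn.functional.mse_loss(model(x), y).item()

    def train_continually(model, tasks, epochs=5, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        history = []  # per-stage losses on every task seen so far
        for t, task in enumerate(tasks):
            loader = DataLoader(task, batch_size=32, shuffle=True)
            model.train()
            for _ in range(epochs):
                for x, y in loader:
                    opt.zero_grad()
                    loss = nn.functional.mse_loss(model(x), y)
                    loss.backward()
                    opt.step()
            history.append([evaluate(model, tasks[i]) for i in range(t + 1)])
        return history

    if __name__ == "__main__":
        tasks = [make_task(seed=s) for s in range(3)]
        model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
        for stage, losses in enumerate(train_continually(model, tasks)):
            print(f"after task {stage}: losses on seen tasks = {losses}")

    In this toy setup the tasks are unrelated random mappings, so forgetting is expected; the paper's point is that for real reconstruction tasks such as single-view 3D shape reconstruction, the analogous per-task losses remain stable or improve as new categories are learned.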