Continual learning is the ability to learn sequentially over time,
accommodating new knowledge while retaining previously learned experiences. Neural
networks can learn multiple tasks when trained on them jointly, but cannot
maintain performance on previously learned tasks when tasks are presented one
at a time. This problem is called catastrophic forgetting. In this work, we
propose a classification model that learns continually from sequentially
observed tasks while preventing catastrophic forgetting. We build on the
lifelong generative model of [10] and extend it to the classification
setting by deriving a new variational bound on the joint log-likelihood, log p(x, y).
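For concreteness, a variational bound of this form can be obtained by assuming a latent-variable model that factorizes as p(x, y, z) = p(x|z) p(y|z) p(z) with an approximate posterior q(z|x); the following is a minimal sketch of such a bound under that assumption, not necessarily the exact bound derived in the paper:

\log p(x, y) \;\ge\; \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z) + \log p(y \mid z)\big] \;-\; \mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)

The inequality follows from Jensen's inequality applied to \log \int p(x \mid z)\, p(y \mid z)\, p(z)\, dz after multiplying and dividing the integrand by q(z \mid x).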