Task Agnostic Continual Learning Using Online Variational Bayes with Fixed-Point Updates
Background: Catastrophic forgetting is the notorious vulnerability of neural
networks to changes in the data distribution during learning. This
phenomenon has long been considered a major obstacle to deploying learning
agents in realistic continual learning settings. A large body of continual
learning research assumes that task boundaries are known during training.
However, only a few works consider scenarios in which task boundaries are
unknown or not well defined -- task agnostic scenarios. The optimal Bayesian
solution in this setting requires an intractable online Bayes update of the
weight posterior.
Contributions: We aim to approximate the online Bayes update as accurately as
possible. To do so, we derive novel fixed-point equations for the online
variational Bayes optimization problem with multivariate Gaussian parametric
distributions. By iterating the posterior through these fixed-point equations,
we obtain an algorithm (FOO-VB) for continual learning that can handle
non-stationary data distributions using a fixed architecture and without
external memory (i.e., without access to previous data). We demonstrate that
our method (FOO-VB) outperforms existing methods in task agnostic scenarios. A FOO-VB
PyTorch implementation will be available online.

Comment: The arXiv paper "Task Agnostic Continual Learning Using Online
Variational Bayes" is a preliminary pre-print of this paper. The main
differences between the versions are: 1. We develop a new algorithmic
framework (FOO-VB). 2. We add multivariate Gaussian and matrix variate
Gaussian versions of the algorithm. 3. We demonstrate the new algorithm's
performance in task agnostic scenarios.
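To make the core idea concrete, the following is a minimal sketch of an online variational Bayes update obtained by iterating fixed-point equations, for a toy scalar logistic model with a Gaussian posterior. It is an illustration under simplifying assumptions (one-dimensional parameter, a crude delta approximation of expectations under the posterior), not the paper's FOO-VB equations, which handle full multivariate and matrix variate Gaussians; the function name and setup are hypothetical.

```python
import math


def online_vb_step(mu0, var0, y, n_iter=50):
    """One online variational Bayes update for a scalar logistic model.

    The previous posterior N(mu0, var0) acts as the prior for the new
    observation y in {0, 1}, with likelihood p(y=1 | theta) = sigmoid(theta).
    The new Gaussian posterior N(mu, var) is found by iterating fixed-point
    equations for the stationary point of the ELBO, using the delta
    approximation E_q[f(theta)] ~ f(mu). (Hypothetical illustration, not
    the FOO-VB update itself.)
    """

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    mu = mu0
    for _ in range(n_iter):
        s = sigmoid(mu)
        # Fixed-point equation for the mean:
        # mu = mu0 + var0 * E_q[d/dtheta log p(y | theta)],
        # with the gradient of the Bernoulli log-likelihood being y - sigmoid.
        mu = mu0 + var0 * (y - s)
    # Fixed-point equation for the variance: new precision equals prior
    # precision plus the expected negative Hessian of the log-likelihood.
    s = sigmoid(mu)
    var = 1.0 / (1.0 / var0 + s * (1.0 - s))
    return mu, var
```

In a streaming setting, each incoming example would trigger one such update, with the resulting posterior becoming the prior for the next example; no task boundaries or stored past data are required, which is the task agnostic regime the abstract describes.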