Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors
Naively trained neural networks tend to experience catastrophic forgetting in
sequential task settings, where data from previous tasks are unavailable. A
number of methods, using various model expansion strategies, have been proposed
recently as possible solutions. However, determining how much to expand the
model is left to the practitioner, and often a constant schedule is chosen for
simplicity, regardless of how complex the incoming task is. Instead, we propose
a principled Bayesian nonparametric approach based on the Indian Buffet Process
(IBP) prior, letting the data determine how much to expand the model's
complexity. We pair this with a factorization of the neural network's weight
matrices. Such an approach allows the number of factors of each weight matrix
to scale with the complexity of the task, while the IBP prior encourages sparse
weight factor selection and factor reuse, promoting positive knowledge transfer
between tasks. We demonstrate the effectiveness of our method on a number of
continual learning benchmarks and analyze how weight factors are allocated and
reused throughout training.

Comment: Proceedings of the 24th International Conference on Artificial
Intelligence and Statistics (AISTATS) 2021
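
For intuition, here is a minimal NumPy sketch of the expansion mechanism the abstract describes: an Indian Buffet Process draw decides, task by task, which shared weight factors are reused and how many new ones are opened, and a task's weight matrix is then composed from its selected factors. Everything here (the sampler, the rank-1 composition, and the names ibp_sample, U, V, alpha) is an illustrative assumption for exposition, not the paper's model or inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def ibp_sample(n_tasks, alpha=2.0):
    """Draw a binary task-by-factor matrix Z from the Indian Buffet Process.
    Task t reuses factor k with probability proportional to how many earlier
    tasks used it, and opens Poisson(alpha / t) brand-new factors, so the
    number of factors grows with the data rather than on a fixed schedule."""
    Z = np.zeros((n_tasks, 0), dtype=int)
    for t in range(1, n_tasks + 1):
        counts = Z.sum(axis=0)
        reuse = (rng.random(Z.shape[1]) < counts / t).astype(int)
        n_new = rng.poisson(alpha / t)
        Z = np.pad(Z, ((0, 0), (0, n_new)))  # append all-zero columns
        Z[t - 1] = np.concatenate([reuse, np.ones(n_new, dtype=int)])
    return Z

Z = ibp_sample(n_tasks=5)
print(Z)        # rows = tasks, columns = factors, 1 = factor in use
K = Z.shape[1]  # total factors instantiated across all tasks

# A task's weight matrix could then be built from its selected factors,
# e.g. as a sum of rank-1 terms (the rank-1 form is an assumption here):
d_in, d_out = 8, 4
U = rng.normal(size=(K, d_in))   # shared left factors
V = rng.normal(size=(K, d_out))  # shared right factors
W_task0 = np.einsum("k,ki,kj->ij", Z[0].astype(float), U, V)
print(W_task0.shape)  # (8, 4)
```

Because the Poisson rate alpha / t decays with t, later tasks mostly reuse popular factors and only occasionally open new ones, which illustrates the data-driven alternative to a constant expansion schedule that the abstract argues for.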