NPCL: Neural Processes for Uncertainty-Aware Continual Learning
Continual learning (CL) aims to train deep neural networks efficiently on
streaming data while limiting the forgetting caused by new tasks. However,
learning transferable knowledge with less interference between tasks is
difficult, and real-world deployment of CL models is limited by their inability
to measure predictive uncertainties. To address these issues, we propose
handling CL tasks with neural processes (NPs), a class of meta-learners that
encode different tasks into probabilistic distributions over functions while
providing reliable uncertainty estimates. Specifically, we propose an
NP-based CL approach (NPCL) with task-specific modules arranged in a
hierarchical latent variable model. We tailor regularizers on the learned
latent distributions to alleviate forgetting. The uncertainty estimation
capabilities of the NPCL can also be used to handle the task head/module
inference challenge in CL. Our experiments show that the NPCL outperforms
previous CL approaches. We validate the effectiveness of uncertainty estimation
in the NPCL for identifying novel data and evaluating instance-level model
confidence. Code is available at \url{https://github.com/srvCodes/NPCL}.
Comment: Accepted as a poster at NeurIPS 202
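As a rough illustration of the ideas in this abstract, the sketch below shows how a KL divergence between old and new per-task Gaussian latents can act as a forgetting regularizer, and how predictive uncertainty can drive task-head selection. All function names are assumptions for illustration, not the authors' implementation:

```python
import math

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ) for scalar Gaussians."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

def forgetting_regularizer(old_latents, new_latents):
    """Sum of per-task KL terms penalizing drift of the learned latent
    distributions away from their values before the new task was seen.
    Each latent is a (mu, sigma) pair."""
    return sum(gaussian_kl(mq, sq, mp, sp)
               for (mq, sq), (mp, sp) in zip(new_latents, old_latents))

def pick_task_head(per_head_variances):
    """Task-head inference at test time: choose the head whose
    prediction is least uncertain (lowest predictive variance)."""
    return min(range(len(per_head_variances)),
               key=lambda i: per_head_variances[i])
```

In this toy form, an unchanged latent contributes zero penalty, and the head selector simply returns the index of the most confident module.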
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
Catastrophic forgetting is a problem faced by many machine learning models
and algorithms. When trained on one task, then trained on a second task, many
machine learning models "forget" how to perform the first task. This is widely
believed to be a serious problem for neural networks. Here, we investigate the
extent to which the catastrophic forgetting problem occurs for modern neural
networks, comparing both established and recent gradient-based training
algorithms and activation functions. We also examine the effect of the
relationship between the first task and the second task on catastrophic
forgetting. We find that it is always best to train using dropout: the
dropout algorithm consistently adapts best to the new task, best remembers
the old task, and has the best tradeoff curve between these
two extremes. We find that different tasks and relationships between tasks
result in very different rankings of activation function performance. This
suggests the choice of activation function should always be cross-validated
- …
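The sequential protocol this abstract describes (train on a first task, then a second, then re-evaluate the first) reduces to a simple forgetting metric over an accuracy matrix. The helper below is an illustrative sketch under assumed names, not the paper's code:

```python
def forgetting(acc_matrix):
    """Per-task forgetting from a sequential-training accuracy matrix.

    acc_matrix[t][i] is the accuracy on task i measured after training
    on tasks 0..t. For each non-final task, forgetting is the drop from
    its best accuracy at any earlier stage to its accuracy after the
    final stage.
    """
    final = acc_matrix[-1]
    n_stages = len(acc_matrix)
    n_tasks = len(final)
    return [max(acc_matrix[t][i] for t in range(n_stages - 1)) - final[i]
            for i in range(n_tasks - 1)]
```

For two tasks, `forgetting([[0.95, 0.10], [0.60, 0.92]])` reports a 0.35 drop on the first task after training on the second, which is the quantity the comparisons of training algorithms and activation functions above would rank models by.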