14 research outputs found
Less-forgetful Learning for Domain Expansion in Deep Neural Networks
Expanding the domain that a deep neural network has already learned without
accessing old-domain data is a challenging task, because deep neural networks
forget previously learned information when learning new data from a new domain.
In this paper, we propose a less-forgetful learning method for the domain
expansion scenario. While existing domain adaptation techniques focus solely
on adapting to the new domain, the proposed technique aims to work well on
both the old and new domains without needing to know whether an input comes
from the old or the new domain. We first present two naive approaches and show
why they are problematic, and then introduce a new method based on two proposed
properties for less-forgetful learning. Finally, we demonstrate the
effectiveness of our method through experiments on image classification tasks.
All datasets used in the paper will be released on our website for follow-up studies.
Comment: 8 pages, accepted to AAAI 201
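The abstract does not spell out the two properties, but less-forgetful training of this kind is typically implemented as a combined objective: a cross-entropy loss on the new-domain data plus a term that keeps the new network's features close to those of the frozen old network. The sketch below illustrates that general idea; the model attributes (`features`, `classifier`) and the weighting `lam` are hypothetical and the paper's actual formulation may differ.

```python
import torch
import torch.nn.functional as F

def less_forgetful_loss(new_model, old_model, x, y, lam=1.0):
    """Hypothetical combined loss for training on the new domain."""
    with torch.no_grad():                        # old network stays frozen
        old_feats = old_model.features(x)
    new_feats = new_model.features(x)            # trainable feature extractor
    logits = new_model.classifier(new_feats)
    ce = F.cross_entropy(logits, y)              # fit the new domain
    preserve = F.mse_loss(new_feats, old_feats)  # stay close to old features
    return ce + lam * preserve
```

At test time the same network is used for inputs from either domain, which is why the feature-preservation term matters: it discourages drifting away from representations the old domain relied on.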
RTRA: Rapid Training of Regularization-based Approaches in Continual Learning
Catastrophic forgetting (CF) is a significant challenge in continual learning
(CL). Regularization-based approaches mitigate CF by penalizing modifications
to important training parameters in subsequent tasks through an appropriate
loss term. We propose RTRA, a modification of the widely used Elastic Weight
Consolidation (EWC) regularization scheme that uses the natural gradient for
loss-function optimization. Our approach improves the training efficiency of
regularization-based methods without sacrificing test-data performance. We
compare the proposed RTRA approach against EWC on the iFood251 dataset and
show that RTRA has a clear edge over state-of-the-art approaches.
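For context, the EWC-style penalty that such regularization-based methods build on adds a quadratic term pulling each parameter toward its value after the previous task, weighted by an importance estimate (the diagonal Fisher information). The sketch below shows only that standard penalty, not RTRA's natural-gradient optimization; `old_params`, `fisher`, and `lam` are assumed to be supplied by the caller.

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=100.0):
    """Hypothetical EWC regularizer added to the current task's loss."""
    penalty = 0.0
    for name, p in model.named_parameters():
        # fisher[name]: per-parameter importance estimated on the previous task
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# total_loss = task_loss + ewc_penalty(model, old_params, fisher)
```

RTRA's contribution, per the abstract, lies in optimizing the resulting loss with the natural gradient rather than plain stochastic gradient descent, which is what yields the faster training.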