RTRA: Rapid Training of Regularization-based Approaches in Continual Learning
Catastrophic forgetting (CF) is a significant challenge in continual learning
(CL). Regularization-based approaches mitigate CF by penalizing, in subsequent
tasks, modifications to parameters that were important in earlier training,
using an appropriate loss function. We propose RTRA, a modification of the
widely used Elastic Weight Consolidation (EWC) regularization scheme that uses
the natural gradient for loss-function optimization. Our approach improves the
training of regularization-based methods without sacrificing test-data
performance. We compare the proposed RTRA approach against EWC on the iFood251
dataset and show that RTRA has a clear edge over state-of-the-art approaches.
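To make the two ingredients concrete: EWC adds a quadratic penalty, weighted
by the Fisher information, that anchors parameters near their values from the
previous task, and the natural gradient preconditions each update by the
inverse Fisher. Below is a minimal PyTorch-style sketch of both pieces under
the common diagonal-Fisher approximation; the function names (ewc_penalty,
natural_gradient_step) and the eps guard are illustrative assumptions, not
taken from the paper.

    import torch

    def ewc_penalty(params, star_params, fisher, lam):
        # Quadratic EWC anchor: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2,
        # where F_i is the diagonal Fisher information estimated after the
        # previous task and theta*_i are the parameters learned there.
        penalty = 0.0
        for name, p in params.items():
            penalty = penalty + (fisher[name] * (p - star_params[name]) ** 2).sum()
        return 0.5 * lam * penalty

    def natural_gradient_step(params, grads, fisher, lr, eps=1e-8):
        # Natural-gradient update with a diagonal Fisher:
        # theta <- theta - lr * F^{-1} * grad (eps avoids division by zero).
        with torch.no_grad():
            for name, p in params.items():
                p -= lr * grads[name] / (fisher[name] + eps)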
Analysis of stochastic gradient descent in continuous time
Stochastic gradient descent is an optimisation method that combines classical
gradient descent with random subsampling within the target functional. In this
work, we introduce the stochastic gradient process as a continuous-time
representation of stochastic gradient descent. The stochastic gradient process
is a dynamical system that is coupled with a continuous-time Markov process
living on a finite state space. The dynamical system -- a gradient flow --
represents the gradient descent part, the process on the finite state space
represents the random subsampling. Processes of this type are, for instance,
used to model clonal populations in fluctuating environments. After introducing
it, we study theoretical properties of the stochastic gradient process: we show
that it converges weakly to the gradient flow with respect to the full target
function, as the learning rate approaches zero. We give conditions under which
the stochastic gradient process with constant learning rate is exponentially
ergodic in the Wasserstein sense. Then we study the case where the learning
rate goes to zero sufficiently slowly and the individual target functions are
strongly convex. In this case, the process converges weakly to the point mass
concentrated at the global minimum of the full target function, indicating
consistency of the method. We conclude with a discussion of discretisation
strategies for the stochastic gradient process and numerical experiments.
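To illustrate the coupled system: the parameter follows the gradient flow of
the currently selected single target function, while the index process jumps
to a new, uniformly drawn index after exponential waiting times; faster
switching plays the role of a smaller learning rate. Below is a minimal Python
sketch of such a process for toy quadratic targets f_i(x) = (x - a_i)^2 / 2,
whose full target (their mean) is minimised at the mean of the a_i. The jump
rate, step size, and function names are illustrative assumptions, not the
paper's notation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy single targets f_i(x) = 0.5 * (x - a_i)^2; the full target is their mean.
    anchors = np.array([-1.0, 0.5, 2.0])

    def grad(x, i):
        # Gradient of the currently selected single target function.
        return x - anchors[i]

    def stochastic_gradient_process(x0, t_end, rate, dt=1e-3):
        # Index process: continuous-time Markov chain on {0, ..., n-1} that
        # jumps to a uniformly drawn index after Exp(rate) waiting times.
        x, t = x0, 0.0
        i = rng.integers(len(anchors))
        next_jump = rng.exponential(1.0 / rate)
        while t < t_end:
            if t >= next_jump:
                i = rng.integers(len(anchors))
                next_jump += rng.exponential(1.0 / rate)
            x -= dt * grad(x, i)  # Euler step of the flow dx/dt = -grad f_i(x)
            t += dt
        return x

    # With fast switching the path tracks the gradient flow of the full
    # target, whose minimiser is the mean of the anchors (here 0.5).
    print(stochastic_gradient_process(x0=0.0, t_end=20.0, rate=100.0))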