Continual Learning with Gated Incremental Memories for sequential data processing

Abstract

The ability to learn in dynamic, nonstationary environments without forgetting previous knowledge, also known as Continual Learning (CL), is a key enabler for scalable and trustworthy deployments of adaptive solutions. While the importance of continual learning is largely acknowledged in machine vision and reinforcement learning problems, it remains mostly under-investigated for sequence processing tasks. This work proposes a Recurrent Neural Network (RNN) model for CL that is able to deal with concept drift in the input distribution without forgetting previously acquired knowledge. We also implement and test a popular CL approach, Elastic Weight Consolidation (EWC), on top of two different types of RNNs. Finally, we compare the performance of our enhanced architecture against EWC and plain RNNs on a set of standard CL benchmarks, adapted to the sequential data processing scenario. Results show the superior performance of our architecture and highlight the need for solutions specifically designed to address CL in RNNs.

Comment: Accepted as a conference paper at the 2020 International Joint Conference on Neural Networks (IJCNN 2020), part of the 2020 IEEE World Congress on Computational Intelligence (IEEE WCCI 2020).
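Since the abstract mentions applying EWC on top of RNNs, the following is a minimal sketch (not the authors' code) of how the EWC quadratic penalty can be attached to an LSTM classifier in PyTorch. The class and function names (LSTMClassifier, estimate_fisher, ewc_penalty), the hyperparameters, and the ewc_lambda weighting are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMClassifier(nn.Module):
    """Simple recurrent classifier used as a stand-in for the RNNs in the paper."""
    def __init__(self, input_size=28, hidden_size=128, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out, _ = self.lstm(x)           # x: (batch, seq_len, input_size)
        return self.fc(out[:, -1, :])   # classify from the last time step

def estimate_fisher(model, loader, device="cpu"):
    """Diagonal Fisher estimate: mean squared gradient over a task's data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        loss = F.cross_entropy(model(x.to(device)), y.to(device))
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 / len(loader)
    return fisher

def ewc_penalty(model, fisher, old_params):
    """Quadratic penalty anchoring parameters to their values after the previous task."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return penalty

# When training on a new task, the total loss combines the task loss with
# the EWC regularizer (ewc_lambda is an assumed hyperparameter):
# loss = F.cross_entropy(model(x), y) + (ewc_lambda / 2) * ewc_penalty(model, fisher, old_params)
```

After each task, one would store a copy of the parameters (old_params) and the Fisher estimate, then reuse them as the anchor for subsequent tasks; this is the standard EWC recipe, not a reconstruction of the paper's gated incremental memory architecture.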
