Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization
Artificial autonomous agents and robots interacting in complex environments
are required to continually acquire and fine-tune knowledge over sustained
periods of time. The ability to learn from continuous streams of information is
referred to as lifelong learning and represents a long-standing challenge for
neural network models due to catastrophic forgetting. Computational models of
lifelong learning typically alleviate catastrophic forgetting in experimental
scenarios with given datasets of static images and limited complexity, thereby
differing significantly from the conditions artificial agents are exposed to.
In more natural settings, sequential information may become progressively
available over time and access to previous experience may be restricted. In
this paper, we propose a dual-memory self-organizing architecture for lifelong
learning scenarios. The architecture comprises two growing recurrent networks
with the complementary tasks of learning object instances (episodic memory) and
categories (semantic memory). Both growing networks can expand in response to
novel sensory experience: the episodic memory learns fine-grained
spatiotemporal representations of object instances in an unsupervised fashion
while the semantic memory uses task-relevant signals to regulate structural
plasticity levels and develop more compact representations from episodic
experience. For the consolidation of knowledge in the absence of external
sensory input, the episodic memory periodically replays trajectories of neural
reactivations. We evaluate the proposed model on the CORe50 benchmark dataset
for continuous object recognition, showing that we significantly outperform
current methods of lifelong learning in three different incremental learning
scenarios.
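As a reading aid for the dual-memory idea described above, the following is a minimal, illustrative Python sketch of two Growing-When-Required-style maps with trajectory replay. The class, thresholds, and the `observe`/`consolidate` helpers are assumptions made for exposition, not the paper's implementation.

```python
# Illustrative sketch only: a tiny Growing-When-Required-style dual memory
# with trajectory replay. Names and thresholds are assumptions for exposition.
import numpy as np

class GrowingNetwork:
    """Grows a new node whenever no existing node matches the input well
    enough (activation below the insertion threshold a_T)."""
    def __init__(self, dim, a_T=0.8, eps_b=0.1):
        self.W = np.random.rand(2, dim)   # start with two random prototype nodes
        self.labels = [None, None]        # optional class label per node
        self.a_T, self.eps_b = a_T, eps_b

    def bmu(self, x):
        d = np.linalg.norm(self.W - x, axis=1)
        b = int(np.argmin(d))
        return b, float(d[b])

    def learn(self, x, label=None, a_T=None):
        a_T = self.a_T if a_T is None else a_T
        b, dist = self.bmu(x)
        if np.exp(-dist) < a_T:           # input is novel: grow a new node
            self.W = np.vstack([self.W, (self.W[b] + x) / 2.0])
            self.labels.append(label)
            b = len(self.labels) - 1
        else:                             # input is familiar: adapt the best match
            self.W[b] += self.eps_b * (x - self.W[b])
            if label is not None:
                self.labels[b] = label
        return b

episodic = GrowingNetwork(dim=64, a_T=0.9)   # permissive growth, fine-grained
semantic = GrowingNetwork(dim=64, a_T=0.3)   # compact by default
replay_buffer = []                           # stored episodic prototype trajectories

def observe(sequence, label):
    """One pass over a temporal stream of feature vectors for one object."""
    trajectory = []
    for x in sequence:
        b = episodic.learn(x)                        # unsupervised, instance level
        trajectory.append(episodic.W[b].copy())
        sb, _ = semantic.bmu(x)
        mistaken = semantic.labels[sb] != label      # task signal regulates growth
        semantic.learn(x, label, a_T=0.9 if mistaken else 0.3)
    replay_buffer.append((trajectory, label))

def consolidate():
    """Periodic replay of stored trajectories in the absence of sensory input."""
    for trajectory, label in replay_buffer:
        for w in trajectory:
            episodic.learn(w)
            semantic.learn(w, label)
```

In this sketch the semantic map only relaxes its insertion threshold when its current best-matching node carries the wrong label, which is one simple way to read "task-relevant signals regulate structural plasticity levels"; calling `observe` on a feature sequence and `consolidate` afterwards mimics learning followed by replay-driven consolidation.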
Online Continual Learning on Sequences
Online continual learning (OCL) refers to the ability of a system to learn
over time from a continuous stream of data without having to revisit previously
encountered training samples. Learning continually in a single data pass is
crucial for agents and robots operating in changing environments and required
to acquire, fine-tune, and transfer increasingly complex representations from
non-i.i.d. input distributions. Machine learning models that address OCL must
alleviate catastrophic forgetting, in which hidden representations are
disrupted or completely overwritten when learning from streams of novel input.
In this chapter, we summarize and discuss recent deep learning models that
address OCL on sequential input through the use (and combination) of synaptic
regularization, structural plasticity, and experience replay. Different
implementations of replay have been proposed that alleviate catastrophic
forgetting in connectionist architectures via the re-occurrence of (latent
representations of) input sequences and that functionally resemble mechanisms
of hippocampal replay in the mammalian brain. Empirical evidence shows that
architectures endowed with experience replay typically outperform architectures
without replay in (online) incremental learning tasks.
Comment: L. Oneto et al. (eds.), Recent Trends in Learning From Data, Studies in Computational Intelligence 89
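To make the replay mechanism discussed above concrete, here is a minimal single-pass rehearsal loop in Python/PyTorch. The reservoir buffer, model, input shape, and hyperparameters are illustrative assumptions rather than any specific method surveyed in the chapter.

```python
# Illustrative sketch: online continual learning with experience replay.
# Each stream sample is seen once; a small reservoir-sampled buffer is
# rehearsed alongside the stream to reduce catastrophic forgetting.
import random
import torch
import torch.nn as nn

class ReservoirBuffer:
    def __init__(self, capacity):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:                                   # keep each sample with prob. capacity/seen
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.tensor(ys)

# Assumed setup: 28x28 inputs, 10 classes, plain SGD.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
buffer = ReservoirBuffer(capacity=500)

def ocl_step(x, y):
    """Single-pass update: x is a (28, 28) float tensor, y an int class label.
    The new sample is trained on once, interleaved with rehearsed samples."""
    xs, ys = x.unsqueeze(0), torch.tensor([y])
    if buffer.data:                             # mix in replayed samples
        rx, ry = buffer.sample(16)
        xs, ys = torch.cat([xs, rx]), torch.cat([ys, ry])
    opt.zero_grad()
    loss_fn(model(xs), ys).backward()
    opt.step()
    buffer.add(x, y)                            # store the raw sample after the update
```

Raw-sample rehearsal is only one variant; replaying latent representations or generated pseudo-samples, or combining replay with synaptic regularization and structural plasticity, follows the same single-pass pattern.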