Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization
Artificial autonomous agents and robots interacting in complex environments
are required to continually acquire and fine-tune knowledge over sustained
periods of time. The ability to learn from continuous streams of information is
referred to as lifelong learning and represents a long-standing challenge for
neural network models due to catastrophic forgetting. Computational models of
lifelong learning typically alleviate catastrophic forgetting in experimental
scenarios with given datasets of static images and limited complexity, thereby
differing significantly from the conditions artificial agents are exposed to.
In more natural settings, sequential information may become progressively
available over time and access to previous experience may be restricted. In
this paper, we propose a dual-memory self-organizing architecture for lifelong
learning scenarios. The architecture comprises two growing recurrent networks
with the complementary tasks of learning object instances (episodic memory) and
categories (semantic memory). Both growing networks can expand in response to
novel sensory experience: the episodic memory learns fine-grained
spatiotemporal representations of object instances in an unsupervised fashion
while the semantic memory uses task-relevant signals to regulate structural
plasticity levels and develop more compact representations from episodic
experience. For the consolidation of knowledge in the absence of external
sensory input, the episodic memory periodically replays trajectories of neural
reactivations. We evaluate the proposed model on the CORe50 benchmark dataset
for continuous object recognition, showing that we significantly outperform
current methods of lifelong learning in three different incremental learning
scenarios.
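
As a reading aid, here is a minimal Python toy of the dual-memory idea. It is
not the authors' model (which uses growing recurrent self-organizing networks):
EpisodicMemory, SemanticMemory, and novelty_threshold are illustrative
stand-ins, consolidation is reduced to prototype averaging over replayed
trajectories, and the task-driven regulation of structural plasticity is
omitted.

import numpy as np

rng = np.random.default_rng(0)

class EpisodicMemory:
    """Toy stand-in for the growing episodic network: it grows a
    prototype for each sufficiently novel input and stores the
    trajectory for later replay."""
    def __init__(self, novelty_threshold=1.5):
        self.prototypes = []     # fine-grained instance prototypes
        self.trajectories = []   # (sequence, label) pairs kept for replay
        self.threshold = novelty_threshold

    def observe(self, sequence, label):
        # Grow only when the input is far from every stored prototype.
        center = sequence.mean(axis=0)
        if not self.prototypes or min(
                np.linalg.norm(center - p) for p in self.prototypes
        ) > self.threshold:
            self.prototypes.append(center)
            self.trajectories.append((sequence, label))

    def replay(self):
        # Periodic reactivation in the absence of external sensory input.
        yield from self.trajectories

class SemanticMemory:
    """Toy category store consolidated purely from episodic replay."""
    def __init__(self):
        self.class_means = {}

    def consolidate(self, episodic):
        sums, counts = {}, {}
        for seq, label in episodic.replay():
            sums[label] = sums.get(label, 0) + seq.mean(axis=0)
            counts[label] = counts.get(label, 0) + 1
        self.class_means = {c: sums[c] / counts[c] for c in sums}

    def predict(self, sequence):
        x = sequence.mean(axis=0)
        return min(self.class_means,
                   key=lambda c: np.linalg.norm(x - self.class_means[c]))

# Two object categories arriving as short feature trajectories.
episodic, semantic = EpisodicMemory(), SemanticMemory()
for label, offset in [("cup", 0.0), ("phone", 5.0)]:
    for _ in range(3):
        seq = rng.normal(offset, 0.3, size=(10, 4))  # 10 frames, 4-dim features
        episodic.observe(seq, label)
semantic.consolidate(episodic)   # consolidation driven by replay alone
print(semantic.predict(rng.normal(5.0, 0.3, size=(10, 4))))  # -> phone

The division of labor is the point: only the episodic store sees the sensory
stream, so the semantic store can be rebuilt at any time from replayed
trajectories rather than from retained raw data.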
Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep
The brain exhibits capabilities of fast incremental learning from few noisy examples, as well as the ability to associate similar memories in autonomously created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet, little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable to biological measures, we demonstrate the model's capability of fast incremental learning from few examples, its resilience when presented with noisy perceptions and contextual signals, and an improvement in visual classification after sleep, due to induced synaptic homeostasis and the association of similar memories.
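
For intuition about how context and perception combine in such a circuit, here
is a rate-based (non-spiking) soft winner-take-all sketch in Python. The
soft_wta function, the global inhibition term, and all constants are
assumptions for illustration, not the paper's calibrated thalamo-cortical
model.

import numpy as np

rng = np.random.default_rng(1)

def soft_wta(perception, context, inhibition=2.0, steps=100, lr=0.2):
    """Rate-based soft winner-take-all: excitatory units receive
    perceptual plus contextual drive and compete through a shared
    global inhibitory signal, so the best-supported unit comes to
    dominate without instantly silencing the others."""
    drive = perception + context
    r = np.zeros_like(drive)                   # unit firing rates
    for _ in range(steps):
        inhib = inhibition * r.sum()           # pooled inhibition
        r = (1 - lr) * r + lr * np.maximum(drive - inhib, 0.0)
    return r / r.sum()

# Three memory categories; perception is noisy and ambiguous between
# categories 0 and 1, but the contextual hint favors category 1.
perception = np.array([0.9, 1.0, 0.2]) + rng.normal(0.0, 0.05, 3)
context = np.array([0.0, 0.4, 0.0])
print(soft_wta(perception, context))  # most of the mass lands on unit 1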
SHARP: Sparsity and Hidden Activation RePlay for Neuro-Inspired Continual Learning
Deep neural networks (DNNs) struggle to learn in dynamic environments since
they rely on fixed datasets or stationary environments. Continual learning (CL)
aims to address this limitation and enable DNNs to accumulate knowledge
incrementally, similar to human learning. Inspired by how our brain
consolidates memories, a powerful strategy in CL is replay, which involves
training the DNN on a mixture of new and all seen classes. However, existing
replay methods overlook two crucial aspects of biological replay: 1) the brain
replays processed neural patterns instead of raw input, and 2) it prioritizes
the replay of recently learned information rather than revisiting all past
experiences. To address these differences, we propose SHARP, an efficient
neuro-inspired CL method that leverages sparse dynamic connectivity and
activation replay. Unlike other activation replay methods, which assume layers
not subjected to replay have been pretrained and fixed, SHARP can continually
update all layers. Also, SHARP is unique in that it only needs to replay a few
recently seen classes instead of all past classes. Our experiments on five
datasets demonstrate that SHARP outperforms state-of-the-art replay methods in
class incremental learning. Furthermore, we showcase SHARP's flexibility in a
novel CL scenario where the boundaries between learning episodes are blurry.
The SHARP code is available at
\url{https://github.com/BurakGurbuz97/SHARP-Continual-Learning}
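
As an illustration of the two replay properties, here is a schematic numpy
sketch (the authors' actual implementation is at the URL above). The fixed
hidden layer, buffer size, and RECENT window are simplifications of this toy;
SHARP's sparse dynamic connectivity and its ability to update all layers are
not modeled here.

import numpy as np

rng = np.random.default_rng(2)

N_CLASSES, DIM, HID = 6, 20, 16
W1 = rng.normal(0.0, 0.3, (DIM, HID))  # hidden layer (fixed in this toy only)
W2 = np.zeros((HID, N_CLASSES))        # softmax head, trained continually

def hidden(x):
    return np.maximum(x @ W1, 0.0)     # the processed pattern that is replayed

def sgd_step(H, y, lr=0.1):
    global W2
    logits = H @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W2 -= lr * H.T @ (p - np.eye(N_CLASSES)[y]) / len(y)

buffer = {}   # class id -> stored hidden activations (never raw inputs)
RECENT = 2    # replay only the two most recently learned classes

for cls in range(N_CLASSES):                       # one class per episode
    H_new = hidden(rng.normal(cls, 0.5, (50, DIM)))
    recent = sorted(buffer)[-RECENT:]              # newest stored classes only
    H = np.vstack([H_new] + [buffer[c] for c in recent])
    y = np.concatenate([[cls] * len(H_new)]
                       + [[c] * len(buffer[c]) for c in recent]).astype(int)
    for _ in range(200):
        sgd_step(H, y)
    buffer[cls] = H_new[:10]                       # keep a few activations

# Evaluate on all classes after strictly sequential training.
X = np.vstack([rng.normal(c, 0.5, (20, DIM)) for c in range(N_CLASSES)])
true = np.repeat(np.arange(N_CLASSES), 20)
pred = (hidden(X) @ W2).argmax(axis=1)
print("accuracy over all seen classes:", (pred == true).mean())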
Self-Synchronization in Duty-cycled Internet of Things (IoT) Applications
In recent years, networks of low-power devices have gained popularity.
Typically, these devices are wireless and interact to form large networks such
as Machine-to-Machine (M2M) networks, the Internet of Things (IoT), Wearable
Computing, and Wireless Sensor Networks. Collaboration among these devices
is a key to achieving the full potential of these networks. A major problem in
this field is to guarantee robust communication between elements while keeping
the whole network energy efficient. In this paper, we introduce an extended and
improved emergent broadcast slot (EBS) scheme, which facilitates collaboration
for robust communication and is energy efficient. In the EBS, each node's
communication unit remains in sleep mode and wakes up only to communicate.
The EBS scheme is fully decentralized: nodes coordinate their wake-up
windows in a partially overlapped manner within each duty cycle to avoid message
collisions. We show the theoretical convergence behavior of the scheme, which
is confirmed through real test-bed experimentation.
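
To give the flavor of such self-synchronization, here is a toy phase-coupling
simulation in Python. The averaging update rule, the ALPHA coupling constant,
and the half-cycle initialization are assumptions of this sketch, not the EBS
algorithm itself, which targets partially overlapped (rather than identical)
wake-up windows.

import numpy as np

rng = np.random.default_rng(3)

N_NODES, CYCLE, ROUNDS = 8, 1.0, 60
ALPHA = 0.3  # coupling strength: an assumption of this toy, not from the paper

# Wake-up offsets within the duty cycle; starting them inside half a
# cycle keeps this simplified averaging update provably contracting.
phase = rng.uniform(0.0, CYCLE / 2, N_NODES)

for _ in range(ROUNDS):
    for i in range(N_NODES):
        # Node i hears the others' beacons during its awake window and
        # nudges its own wake-up offset toward theirs, firefly-style.
        diff = (phase - phase[i] + CYCLE / 2) % CYCLE - CYCLE / 2  # wrapped
        phase[i] = (phase[i] + ALPHA * diff.mean()) % CYCLE

print("wake-up offset spread after convergence:", round(np.ptp(phase), 6))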