Adaptive Reorganization of Neural Pathways for Continual Learning with Spiking Neural Networks
The human brain can self-organize rich and diverse sparse neural pathways to
incrementally master hundreds of cognitive tasks. However, most existing
continual learning algorithms for deep artificial and spiking neural networks
are unable to adequately auto-regulate the limited resources of the network,
which leads to a drop in performance and a rise in energy consumption as the
number of tasks increases. In this paper, we propose a brain-inspired continual
learning algorithm with adaptive reorganization of neural pathways, which
employs Self-Organizing Regulation networks to reorganize a single,
resource-limited Spiking Neural Network (SOR-SNN) into rich sparse neural pathways to
efficiently cope with incremental tasks. The proposed model demonstrates
consistent superiority in performance, energy consumption, and memory capacity
on diverse continual learning tasks ranging from child-like simple to complex
tasks, as well as on generalized CIFAR100 and ImageNet datasets. In particular,
the SOR-SNN model excels at learning more complex tasks as well as more tasks,
and is able to integrate the past learned knowledge with the information from
the current task, showing the backward transfer ability to facilitate the old
tasks. Meanwhile, the proposed model exhibits a self-repairing ability after
irreversible damage and pruning: it can automatically allocate new pathways
from the retained network to recover the memory of forgotten knowledge.
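The pathway-reorganization idea can be illustrated with a toy sketch. Note that everything below is an illustrative assumption, not the paper's actual Self-Organizing Regulation mechanism: each new task simply claims a random sparse subset of the still-free weight positions of one fixed-size network, so successive tasks receive disjoint sparse pathways and unused capacity shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)

def allocate_task_pathway(free_mask, frac=0.25):
    """Claim a sparse pathway for a new task from the still-unused weights.

    Illustrative only: the real SOR-SNN learns its pathways; here each task
    grabs a random fraction of the remaining free weight positions, which
    are then frozen (removed from the free pool) for later tasks.
    """
    free_idx = np.flatnonzero(free_mask)
    n_take = int(frac * free_idx.size)
    chosen = rng.choice(free_idx, size=n_take, replace=False)
    task_mask = np.zeros_like(free_mask)
    task_mask[chosen] = True
    remaining = free_mask.copy()
    remaining[chosen] = False
    return task_mask, remaining

# Two successive tasks carve disjoint sparse pathways out of 100 weights.
free = np.ones(100, dtype=bool)
mask_a, free = allocate_task_pathway(free)   # task A: 25 of 100 weights
mask_b, free = allocate_task_pathway(free)   # task B: 18 of the remaining 75
```

Because claimed positions leave the free pool, the pathways never overlap, which mirrors (in the crudest possible way) how a limited network is split across incremental tasks.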
Spiking neurons with short-term synaptic plasticity form superior generative networks
Spiking networks that perform probabilistic inference have been proposed both
as models of cortical computation and as candidates for solving problems in
machine learning. However, the evidence for spike-based computation being in
any way superior to non-spiking alternatives remains scarce. We propose that
short-term plasticity can provide spiking networks with distinct computational
advantages compared to their classical counterparts. In this work, we use
networks of leaky integrate-and-fire neurons that are trained to perform both
discriminative and generative tasks in their forward and backward information
processing paths, respectively. During training, the energy landscape
associated with their dynamics becomes highly diverse, with deep attractor
basins separated by high barriers. Classical algorithms solve this problem by
employing various tempering techniques, which are both computationally
demanding and require global state updates. We demonstrate how similar results
can be achieved in spiking networks endowed with local short-term synaptic
plasticity. Additionally, we discuss how these networks can even outperform
tempering-based approaches when the training data is imbalanced. We thereby
show how biologically inspired, local, spike-triggered synaptic dynamics based
simply on a limited pool of synaptic resources can allow spiking networks to
outperform their non-spiking relatives.
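The "limited pool of synaptic resources" mechanism can be sketched as a leaky integrate-and-fire neuron driven through one depressing synapse, following a simplified Tsodyks-Markram scheme. All parameter values below are illustrative assumptions; the point is only that successive input spikes release progressively less current as the resource pool depletes.

```python
def simulate_lif_std(in_spikes, T=200, dt=1.0, tau_m=20.0, v_th=1.0,
                     w=0.8, U=0.2, tau_rec=100.0):
    """LIF neuron with a short-term-depressing synapse (simplified sketch).

    Each presynaptic spike releases a fraction U of the available resource
    x, which then recovers toward 1 with time constant tau_rec.
    """
    in_set = set(in_spikes)
    v, x = 0.0, 1.0            # membrane potential, synaptic resources
    out_spikes, psc = [], []   # output spike times, per-spike currents
    for t in range(int(T / dt)):
        i_syn = 0.0
        if t in in_set:
            i_syn = w * U * x          # released resources scale the PSC
            psc.append(i_syn)
            x -= U * x                 # deplete the resource pool
        x += dt * (1.0 - x) / tau_rec  # recover toward the full pool
        v += dt * (-v / tau_m) + i_syn # leaky integration
        if v >= v_th:
            out_spikes.append(t)
            v = 0.0                    # reset after an output spike
    return out_spikes, psc, x

# Regular presynaptic drive: depression makes later PSCs much weaker.
out, psc, x = simulate_lif_std(range(0, 200, 5))
```

The depleted pool acts as the local, spike-triggered state variable the abstract refers to: no global updates are needed, yet the effective synaptic weight changes on every spike.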
Role of homeostasis in learning sparse representations
Neurons in the input layer of primary visual cortex in primates develop
edge-like receptive fields. One approach to understanding the emergence of this
response is to state that neural activity has to efficiently represent sensory
data with respect to the statistics of natural scenes. Furthermore, it is
believed that such an efficient coding is achieved using a competition across
neurons so as to generate a sparse representation, that is, where a relatively
small number of neurons are simultaneously active. Indeed, different models of
sparse coding, coupled with Hebbian learning and homeostasis, have been
proposed that successfully match the observed emergent response. However, the
specific role of homeostasis in learning such sparse representations is still
largely unknown. By quantitatively assessing the efficiency of the neural
representation during learning, we derive a cooperative homeostasis mechanism
that optimally tunes the competition between neurons within the sparse coding
algorithm. We apply this homeostasis while learning small patches taken from
natural images and compare its efficiency with state-of-the-art algorithms.
Results show that while different sparse coding algorithms give similar coding
results, the homeostasis provides an optimal balance for the representation of
natural images within the population of neurons. Competition in sparse coding
is optimized when it is fair. By contributing to optimizing statistical
competition across neurons, homeostasis is crucial in providing a more
efficient solution to the emergence of independent components.
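The "fair competition" idea admits a compact sketch: a matching-pursuit-style sparse coder in which a homeostatic gain per dictionary atom nudges all atoms toward equal selection rates. The update rule and all constants below are illustrative assumptions, not the paper's derived mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code_homeo(X, D, n_active=3, eta=0.05, n_iter=200):
    """Matching-pursuit sparse coding with a homeostatic gain per atom.

    Atoms compete on gain-modulated correlations; the gain of over-used
    atoms is lowered and that of under-used atoms raised, keeping the
    competition 'fair' across the population.
    """
    n_atoms = D.shape[1]
    gain = np.ones(n_atoms)
    counts = np.zeros(n_atoms)
    for _ in range(n_iter):
        x = X[rng.integers(len(X))].astype(float)
        for _ in range(n_active):
            c = gain * np.abs(D.T @ x)   # homeostatically scaled match
            k = int(np.argmax(c))        # winner-take-all competition
            a = D[:, k] @ x
            x = x - a * D[:, k]          # explain away the chosen atom
            counts[k] += 1
        freq = counts / counts.sum()
        gain *= np.exp(eta * (1.0 / n_atoms - freq))  # homeostatic nudge
    return gain, counts

# Toy data: 50 random 16-d patches, 8 unit-norm dictionary atoms.
X = rng.standard_normal((50, 16))
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)
gain, counts = sparse_code_homeo(X, D)
```

The gain update only equalizes how often atoms win the competition; it does not change what each atom encodes, which is the division of labor between homeostasis and Hebbian learning described above.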