Spiking Neural Networks and Sparse Deep Learning
This document proposes new methods for training multi-layer and deep spiking neural networks (SNNs), specifically spiking convolutional neural networks (CNNs). Training a multi-layer spiking network is difficult because the output spikes have no derivatives, so the backpropagation method commonly used for non-spiking networks is not easily applied. Our methods use novel versions of the brain-like, local learning rule known as spike-timing-dependent plasticity (STDP) that incorporate supervised and unsupervised components. We start from conventional learning methods and convert them to spatio-temporally local rules suited to SNNs. Training combines two components: unsupervised feature extraction and supervised classification. The first component comprises new STDP rules for spike-based representation learning that train convolutional filters and initial representations. The second introduces new STDP-based supervised learning rules for spike pattern classification that approximate gradient descent by combining STDP and anti-STDP rules; specifically, the supervised model approximates gradient descent using temporally local STDP updates. Stacking these components yields a novel sparse, spiking deep learning model. Our model is a variant of spiking CNNs built from integrate-and-fire (IF) neurons, with performance comparable to state-of-the-art deep SNNs. The experimental results show the success of the proposed model for image classification. Our network architecture is the only spiking CNN that provides bio-inspired STDP rules in a hierarchy of feature extraction and classification in an entirely spike-based framework.
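The combination of STDP and anti-STDP for supervised classification can be sketched roughly as follows. This is an illustrative simplification, not the paper's exact formulation: function names, the binary spike encoding, and the learning-rate constant are all assumptions. The idea it captures is that when a neuron that should fire stays silent, weights of recently active inputs are potentiated (STDP), and when a neuron that should stay silent fires, the same weights are depressed (anti-STDP), which together approximate a gradient-descent step on the classification error.

```python
import numpy as np

def supervised_stdp_update(w, pre_spikes, post_spike, target, lr=0.01):
    """Hypothetical sketch of a supervised STDP/anti-STDP update.

    pre_spikes: binary vector marking recently active presynaptic inputs.
    post_spike: 1 if the output neuron fired, else 0.
    target:     desired output (1 = should fire, 0 = should stay silent).
    """
    if target == 1 and post_spike == 0:
        # STDP: potentiate weights of recently active inputs
        return w + lr * pre_spikes
    if target == 0 and post_spike == 1:
        # anti-STDP: depress the same weights (sign-flipped rule)
        return w - lr * pre_spikes
    return w  # output already matches the target: no change

w = np.array([0.2, 0.5, 0.1])
pre = np.array([1.0, 0.0, 1.0])
w_new = supervised_stdp_update(w, pre, post_spike=0, target=1)
```

Because each update depends only on the local pre/post spike activity and a per-neuron target, the rule stays spatio-temporally local, which is what makes it compatible with a spike-based framework.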
Representation learning using event-based STDP
Although representation learning methods developed within the framework of
traditional neural networks are relatively mature, developing a spiking
representation model remains a challenging problem. This paper proposes an
event-based method to train a feedforward spiking neural network (SNN) layer
for extracting visual features. The method introduces a novel
spike-timing-dependent plasticity (STDP) learning rule and a threshold
adjustment rule both derived from a vector quantization-like objective function
subject to a sparsity constraint. The STDP rule is obtained by the gradient of
a vector quantization criterion that is converted to spike-based,
spatio-temporally local update rules in a spiking network of leaky,
integrate-and-fire (LIF) neurons. Independence and sparsity of the model are
achieved by the threshold adjustment rule and by a softmax function
implementing inhibition in the representation layer consisting of
WTA-thresholded spiking neurons. Together, these mechanisms implement a form of
spike-based, competitive learning. Two sets of experiments are performed on the
MNIST and natural image datasets. The results demonstrate a sparse spiking
visual representation model with low reconstruction loss comparable with
state-of-the-art visual coding approaches, yet our rule is local in both time
and space, and thus biologically plausible and hardware-friendly.
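The competitive-learning mechanism described above can be sketched as a single event-driven update step. This is an assumption-laden illustration, not the paper's derivation: it uses a hard winner-take-all in place of the softmax inhibition, and all names and constants are invented. It shows the two interacting pieces: a vector-quantization-style pull of the winning neuron's weights toward the input spike pattern, and a threshold adjustment that makes frequent winners harder to activate, promoting sparsity and independence.

```python
import numpy as np

def wta_stdp_step(x, W, theta, lr=0.05, d_theta=0.01):
    """One illustrative WTA competitive-learning step.

    x:     binary input spike vector for one event frame.
    W:     (neurons, inputs) feedforward weight matrix, updated in place.
    theta: per-neuron adaptive thresholds, updated in place.
    Returns the index of the winning neuron.
    """
    drive = W @ x                            # membrane-like input drive
    winner = int(np.argmax(drive - theta))   # hard-WTA stand-in for softmax inhibition
    W[winner] += lr * (x - W[winner])        # pull winner toward the input pattern
    theta[winner] += d_theta                 # winner becomes harder to activate
    return winner

rng = np.random.default_rng(0)
W = rng.random((4, 16))                      # 4 representation neurons, 16 inputs
theta = np.ones(4)
x = (rng.random(16) > 0.5).astype(float)
winner = wta_stdp_step(x, W, theta)
```

Repeated over many input events, this kind of update drives different neurons to specialize on different input patterns, which is the competitive, sparse coding behavior the abstract describes.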
Deep learning in spiking neural networks
In recent years, deep learning has revolutionized the field of machine learning, for computer vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is trained, most often in a supervised manner using backpropagation. Vast amounts of labeled training examples are required, but the resulting classification accuracy is truly impressive, sometimes outperforming humans.

Neurons in an ANN are characterized by a single, static, continuous-valued activation. Yet biological neurons use discrete spikes to compute and transmit information, and the spike times, in addition to the spike rates, matter. Spiking neural networks (SNNs) are thus more biologically realistic than ANNs, and are arguably the only viable option if one wants to understand how the brain computes at the neuronal description level. The spikes of biological neurons are sparse in time and space, and event-driven. Combined with bio-plausible local learning rules, this makes it easier to build low-power, neuromorphic hardware for SNNs. However, training deep SNNs remains a challenge: spiking neurons' transfer function is usually non-differentiable, which prevents the use of backpropagation.

Here we review recent supervised and unsupervised methods to train deep SNNs and compare them in terms of accuracy and computational cost. The emerging picture is that SNNs still lag behind ANNs in accuracy, but the gap is decreasing and can even vanish on some tasks, while SNNs typically require many fewer operations and are better candidates for processing spatio-temporal data.
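The contrast the review draws between a static continuous activation and discrete, temporally coded spikes can be made concrete with a minimal leaky integrate-and-fire (LIF) simulation. The time constant, threshold, and reset behavior below are illustrative choices, not taken from any specific model in the review: the point is only that the neuron's output is a set of spike times rather than a single number, which is also why its transfer function is non-differentiable.

```python
def lif_spikes(inputs, tau=10.0, v_th=1.0):
    """Simulate a leaky integrate-and-fire neuron over an input current
    trace (one value per unit time step). Returns the time steps at
    which the membrane potential crossed threshold and the neuron spiked.
    """
    v, spikes = 0.0, []
    for t, i_t in enumerate(inputs):
        v += -v / tau + i_t   # leaky integration (dt = 1)
        if v >= v_th:         # threshold crossing -> emit a spike
            spikes.append(t)
            v = 0.0           # reset after the spike
    return spikes

# A constant input current yields a regular spike train: the information
# is carried by *when* the discrete events occur, not by one static value.
times = lif_spikes([0.3] * 20)  # -> [3, 7, 11, 15, 19]
```

Because the output is a sequence of all-or-nothing events, there is no smooth derivative of output with respect to input, which is exactly the obstacle to applying backpropagation that the review discusses.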