Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks
Spiking neural networks (SNNs) are well suited for spatio-temporal learning
and implementations on energy-efficient event-driven neuromorphic processors.
However, existing SNN error backpropagation (BP) methods lack proper handling
of spiking discontinuities and suffer from low performance compared with the BP
methods for traditional artificial neural networks. In addition, a large number
of time steps are typically required to achieve decent performance, leading to
high latency and rendering spike-based computation unscalable to deep
architectures. We present a novel Temporal Spike Sequence Learning
Backpropagation (TSSL-BP) method for training deep SNNs, which breaks down
error backpropagation across two types of inter-neuron and intra-neuron
dependencies and leads to improved temporal learning precision. It captures
inter-neuron dependencies through presynaptic firing times by considering the
all-or-none characteristics of firing activities and captures intra-neuron
dependencies by handling the internal evolution of each neuronal state in time.
TSSL-BP efficiently trains deep SNNs within a much shortened temporal window of
a few steps while improving accuracy on various image classification datasets,
including CIFAR10.
Comment: Accepted for spotlight presentation at NeurIPS (Neural Information
Processing Systems) 2020:
https://proceedings.neurips.cc/paper/2020/hash/8bdb5058376143fa358981954e7626b8-Abstract.htm
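The two dependency types the abstract distinguishes can be seen in a plain forward simulation of a leaky integrate-and-fire (LIF) neuron. The sketch below is purely illustrative (the function name, time constant, and reset scheme are assumptions, not the paper's formulation): the weighted presynaptic spikes are the inter-neuron dependency, and the membrane potential carried across steps is the intra-neuron dependency.

```python
# Minimal LIF simulation illustrating the two dependency types TSSL-BP
# distinguishes: inter-neuron (spikes arriving from presynaptic cells) and
# intra-neuron (the membrane potential's own evolution in time).
# Names and constants here are illustrative, not from the paper.

def lif_forward(input_spikes, weights, tau=0.9, threshold=1.0):
    """Simulate one LIF neuron over T steps.

    input_spikes: list of T lists, one 0/1 entry per presynaptic neuron.
    weights: one synaptic weight per presynaptic neuron.
    Returns the output spike train as a list of 0/1 values.
    """
    v = 0.0
    out = []
    for spikes_t in input_spikes:
        # inter-neuron dependency: weighted presynaptic spikes
        current = sum(w * s for w, s in zip(weights, spikes_t))
        # intra-neuron dependency: leaky integration of the previous state
        v = tau * v + current
        if v >= threshold:   # all-or-none firing
            out.append(1)
            v = 0.0          # hard reset after a spike
        else:
            out.append(0)
    return out

spike_train = lif_forward(
    input_spikes=[[1, 0], [0, 1], [1, 1], [0, 0]],
    weights=[0.6, 0.5],
)
print(spike_train)  # → [0, 1, 1, 0]
```

Because each output spike depends both on which presynaptic spikes arrived and on the membrane state left over from earlier steps, a gradient method must propagate error along both paths, which is exactly the decomposition TSSL-BP formalizes.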
Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation
Spiking neural networks (SNNs) have garnered a great amount of interest for
supervised and unsupervised learning applications. This paper deals with the
problem of training multi-layer feedforward SNNs. The non-linear
integrate-and-fire dynamics employed by spiking neurons make it difficult to
train SNNs to generate desired spike trains in response to a given input. To
tackle this, first the problem of training a multi-layer SNN is formulated as
an optimization problem such that its objective function is based on the
deviation in membrane potential rather than the spike arrival instants. Then,
an optimization method named Normalized Approximate Descent (NormAD),
hand-crafted for such non-convex optimization problems, is employed to derive
the iterative synaptic weight update rule. Next, it is reformulated to
efficiently train multi-layer SNNs, and is shown to be effectively performing
spatio-temporal error backpropagation. The learning rule is validated by
training -layer SNNs to solve a spike based formulation of the XOR problem
as well as training -layer SNNs for generic spike based training problems.
Thus, the new algorithm is a key step towards building deep spiking neural
networks capable of efficient event-triggered learning.Comment: 19 pages, 10 figure
Synthesis of neural networks for spatio-temporal spike pattern recognition and processing
The advent of large scale neural computational platforms has highlighted the
lack of algorithms for synthesis of neural structures to perform predefined
cognitive tasks. The Neural Engineering Framework offers one such synthesis,
but it is most effective for a spike rate representation of neural information,
and it requires a large number of neurons to implement simple functions. We
describe a neural network synthesis method that generates synaptic connectivity
for neurons which process time-encoded neural signals, and which makes very
sparse use of neurons. The method allows the user to specify, arbitrarily,
neuronal characteristics such as axonal and dendritic delays, and synaptic
transfer functions, and then solves for the optimal input-output relationship
using computed dendritic weights. The method may be used for batch or online
learning and has an extremely fast optimization process. We demonstrate its use
in generating a network that recognizes speech sparsely encoded as spike times.
Comment: In submission to Frontiers in Neuromorphic Engineering
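At its core, solving for an optimal input-output relationship with computed weights is a linear least-squares problem. The toy below solves the normal equations for two inputs in pure Python; the framework's actual formulation (axonal/dendritic delays, synaptic transfer functions) is far richer, so treat this as a minimal sketch under that simplification:

```python
# Toy version of weight synthesis as least squares: find weights w such
# that the desired output y is best approximated by X @ w, where X holds
# the (filtered) input signals over T steps. Solves the 2x2 normal
# equations directly. Purely illustrative, not the framework's method.

def solve_weights(X, y):
    """Least-squares weights for y ≈ X @ w, with X a T x 2 matrix."""
    # accumulate A = X^T X and b = X^T y
    a11 = sum(r[0] * r[0] for r in X)
    a12 = sum(r[0] * r[1] for r in X)
    a22 = sum(r[1] * r[1] for r in X)
    b1 = sum(r[0] * t for r, t in zip(X, y))
    b2 = sum(r[1] * t for r, t in zip(X, y))
    det = a11 * a22 - a12 * a12
    w1 = (a22 * b1 - a12 * b2) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return [w1, w2]

# If y was generated exactly by weights [2, -1], the solve recovers them.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
y = [2.0, -1.0, 1.0, 3.0]
print(solve_weights(X, y))  # → [2.0, -1.0]
```

A closed-form solve like this is also why such synthesis can be "extremely fast": there is no iterative training loop, only one linear-algebra problem per neuron.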
SLSSNN: High energy efficiency spike-train level spiking neural networks with spatio-temporal conversion
Brain-inspired spiking neural networks (SNNs) have attracted widespread
research interest due to their low-power characteristics, high biological
plausibility, and strong spatio-temporal information processing capability.
Although adopting a surrogate gradient (SG) makes the non-differentiable SNN
trainable, simultaneously achieving accuracy comparable to ANNs and retaining
low-power operation remains difficult. In this paper, we propose an
energy-efficient spike-train level spiking neural network (SLSSNN) with low
computational cost and high accuracy. In the SLSSNN, spatio-temporal conversion
blocks (STCBs) replace the convolutional and ReLU layers to preserve the
low-power character of SNNs while improving accuracy. However, the SLSSNN
cannot adopt backpropagation directly due to the non-differentiable nature of
spike trains. We derive a suitable learning rule for SLSSNNs by deducing the
equivalent gradient of the STCB. We evaluate the proposed SLSSNN on static and
neuromorphic datasets, including Fashion-MNIST, CIFAR10, CIFAR100,
TinyImageNet, and DVS-CIFAR10. The experimental results show that the proposed
SLSSNN outperforms state-of-the-art accuracy on nearly all datasets, while
using fewer time steps and being highly energy efficient.
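The surrogate-gradient idea this abstract contrasts itself with is simple to state in code: the spike function's true derivative is zero almost everywhere, so training substitutes a smooth stand-in on the backward pass. The hand-rolled sketch below (no autograd framework; the fast-sigmoid surrogate and the steepness constant k are common choices, not taken from this paper) shows both halves:

```python
# Surrogate gradient (SG) in miniature: the forward pass uses the true,
# non-differentiable spike function; the backward pass replaces its
# derivative with a smooth surrogate so error can flow through the
# threshold. The fast-sigmoid surrogate and k=10 are illustrative choices.

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside step (all-or-none spike)."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, k=10.0):
    """Backward pass: derivative of a fast sigmoid, peaked at threshold."""
    x = k * (v - threshold)
    return k / (1.0 + abs(x)) ** 2
```

The surrogate is largest when the membrane potential sits near threshold and decays away from it, so weight updates concentrate on neurons that were close to firing; the equivalent-gradient approach described above replaces this per-step approximation with a gradient deduced at the spike-train level.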