Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation
Spiking neural networks (SNNs) have attracted considerable interest for
supervised and unsupervised learning applications. This paper addresses the
problem of training multi-layer feedforward SNNs. The non-linear
integrate-and-fire dynamics employed by spiking neurons make it difficult to
train SNNs to generate desired spike trains in response to a given input. To
tackle this, first the problem of training a multi-layer SNN is formulated as
an optimization problem such that its objective function is based on the
deviation in membrane potential rather than the spike arrival instants. Then,
an optimization method named Normalized Approximate Descent (NormAD),
hand-crafted for such non-convex optimization problems, is employed to derive
the iterative synaptic weight update rule. The rule is then reformulated to
efficiently train multi-layer SNNs and is shown to effectively perform
spatio-temporal error backpropagation. The learning rule is validated by
training 2-layer SNNs to solve a spike-based formulation of the XOR problem
as well as training 3-layer SNNs for generic spike-based training problems.
Thus, the new algorithm is a key step towards building deep spiking neural
networks capable of efficient event-triggered learning.

Comment: 19 pages, 10 figures
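The membrane-potential-based objective described in the abstract can be illustrated with a toy leaky integrate-and-fire simulation. This is a minimal sketch under assumed dynamics and constants (names like `lif_membrane`, `tau`, `v_th` are illustrative, not from the paper):

```python
import numpy as np

def lif_membrane(spikes_in, w, tau=20.0, dt=1.0, v_th=1.0):
    """Simulate a leaky integrate-and-fire (LIF) neuron driven by weighted
    input spike trains. Returns the membrane-potential trace and the output
    spike train. Constants are illustrative placeholders."""
    n_steps = spikes_in.shape[1]
    v = 0.0
    v_trace = np.zeros(n_steps)
    out = np.zeros(n_steps)
    for t in range(n_steps):
        i_syn = float(w @ spikes_in[:, t])   # total synaptic drive at step t
        v += dt * (-v / tau + i_syn)         # leaky integration
        if v >= v_th:                        # threshold crossing: emit spike
            out[t] = 1.0
            v = 0.0                          # reset after firing
        v_trace[t] = v
    return v_trace, out

def membrane_cost(v_obs, v_des):
    """Squared deviation between observed and desired membrane-potential
    traces: the kind of potential-based objective the abstract describes,
    as opposed to a cost on spike arrival instants."""
    return 0.5 * float(np.sum((v_obs - v_des) ** 2))
```

Because the cost is a smooth function of the membrane potential, it sidesteps the non-differentiability of spike times, which is the motivation the abstract gives for this formulation.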
Heterogeneous Recurrent Spiking Neural Network for Spatio-Temporal Classification
Spiking Neural Networks are often touted as brain-inspired learning models
for the third wave of Artificial Intelligence. Although recent SNNs trained
with supervised backpropagation show classification accuracy comparable to deep
networks, the performance of unsupervised learning-based SNNs remains much
lower. This paper presents a heterogeneous recurrent spiking neural network
(HRSNN) with unsupervised learning for spatio-temporal classification of video
activity recognition tasks on RGB (KTH, UCF11, UCF101) and event-based datasets
(DVS128 Gesture). The key novelty of HRSNN is that its recurrent layer
consists of heterogeneous neurons with varying firing/relaxation dynamics,
trained via heterogeneous spike-timing-dependent plasticity (STDP) with
learning dynamics that vary from synapse to synapse. We show that this novel
combination of heterogeneity in architecture
and learning method outperforms current homogeneous spiking neural networks. We
further show that HRSNN can achieve performance similar to that of
state-of-the-art backpropagation-trained supervised SNNs, but with less
computation (fewer neurons and sparser connectivity) and less training data.

Comment: 32 pages, 11 figures, 4 tables. arXiv admin note: text overlap with
arXiv:1511.03198 by other authors
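The idea of STDP with per-synapse learning dynamics can be sketched as a pair-based trace rule where every synapse carries its own amplitudes and time constants. This is a minimal illustration of heterogeneous STDP in general, not the paper's specific rule; all parameter names are assumptions:

```python
import numpy as np

def hetero_stdp(pre, post, w, a_plus, a_minus, tau_plus, tau_minus, dt=1.0):
    """Pair-based STDP in which each synapse has its own potentiation and
    depression amplitudes (a_plus, a_minus) and trace time constants
    (tau_plus, tau_minus), all arrays of shape (n_syn,).

    pre  : (n_syn, T) binary presynaptic spike trains
    post : (T,) binary postsynaptic spike train
    """
    n_syn, n_steps = pre.shape
    x_pre = np.zeros(n_syn)    # presynaptic eligibility traces
    x_post = np.zeros(n_syn)   # postsynaptic traces, one per synapse
    w = w.astype(float).copy()
    for t in range(n_steps):
        x_pre = x_pre * np.exp(-dt / tau_plus) + pre[:, t]
        x_post = x_post * np.exp(-dt / tau_minus) + post[t]
        w += a_plus * x_pre * post[t]       # potentiate on a post spike
        w -= a_minus * x_post * pre[:, t]   # depress on a pre spike
    return w
```

With homogeneous parameters this reduces to standard pair-based STDP; making `tau_plus`, `tau_minus`, `a_plus`, and `a_minus` differ across synapses captures the "varying learning dynamics for each synapse" that the abstract highlights.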
Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses
Spiking neural networks (SNNs) are artificial computational models inspired
by the brain's ability to naturally encode and process information in the
time domain. The added temporal dimension is believed to render them more
computationally efficient than conventional artificial neural networks,
though their full computational capabilities are yet to be
explored. Recently, computational memory architectures based on non-volatile
memory crossbar arrays have shown great promise for implementing parallel
computations in artificial and spiking neural networks. In this work, we
experimentally demonstrate, for the first time, the feasibility of realizing
high-performance event-driven in-situ supervised learning systems using
nanoscale and stochastic phase-change synapses. Our SNN is trained to recognize
audio signals of alphabets encoded using spikes in the time domain and to
generate spike trains at precise time instances to represent the pixel
intensities of their corresponding images. Moreover, with a statistical model
capturing the experimental behavior of the devices, we investigate
architectural and systems-level solutions for improving the training and
inference performance of our computational memory-based system. Combining the
computational potential of supervised SNNs with the parallel compute power of
computational memory, this work paves the way for the next generation of
efficient brain-inspired systems.
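The stochastic, saturating programming behavior of phase-change synapses can be mimicked with a toy statistical model: each pulse adds a noisy conductance increment that shrinks as the device approaches saturation. This is an illustrative sketch, not the fitted device model from the paper; all constants (e.g. `g_max`) are assumptions:

```python
import numpy as np

def pcm_program(g, n_pulses, rng, g_max=25.0, dg_mean=1.0, dg_std=0.3):
    """Apply crystallizing programming pulses to an array of phase-change
    synaptic conductances. Each pulse adds a stochastic increment scaled by
    (1 - g/g_max), so updates saturate as g approaches g_max. Units and
    constants are illustrative placeholders."""
    g = g.astype(float).copy()
    for _ in range(n_pulses):
        dg = rng.normal(dg_mean, dg_std, size=g.shape)   # cycle-to-cycle noise
        g = np.clip(g + dg * (1.0 - g / g_max), 0.0, g_max)
    return g
```

A model of this kind is what makes the architectural and system-level studies mentioned in the abstract possible: once the pulse statistics are captured, training and inference can be simulated without re-running the hardware for every configuration.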