Smooth Exact Gradient Descent Learning in Spiking Neural Networks
Artificial neural networks are trained with great success by backpropagation.
For spiking neural networks, however, a similar gradient descent scheme seems
prohibitive due to the sudden, disruptive (dis-)appearance
of spikes. Here, we demonstrate exact gradient descent learning based on
spiking dynamics that change only continuously. These are generated by neuron
models whose spikes vanish and appear at the end of a trial, where they do not
influence other neurons anymore. This also enables gradient-based spike
addition and removal. We apply our learning scheme to induce and continuously
move spikes to desired times, in single neurons and recurrent networks.
Further, it achieves competitive performance in a benchmark task using deep,
initially silent networks. Our results show how non-disruptive learning is
possible despite discrete spikes.
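A sketch of why spike times can vary smoothly with parameters, using a leaky integrate-and-fire neuron driven by a constant current, whose first spike time has a closed form (an illustration of the general idea, not the paper's specific neuron model):

    import numpy as np

    # Leaky integrate-and-fire with constant current I and V(0) = 0:
    # V(t) = R*I*(1 - exp(-t/tau)); the neuron spikes when V(t) = theta.
    tau, R, theta = 10.0, 1.0, 1.0  # illustrative constants

    def spike_time(I):
        """First spike time; diverges smoothly as R*I approaches theta."""
        if R * I <= theta:
            return np.inf  # no spike within any finite trial
        return tau * np.log(R * I / (R * I - theta))

    def dspike_time_dI(I):
        """Exact derivative of the spike time w.r.t. the input current."""
        return tau * (1.0 / I - R / (R * I - theta))

    for I in [2.0, 1.5, 1.1, 1.01]:
        print(f"I={I:.2f}  t*={spike_time(I):6.2f}  dt*/dI={dspike_time_dI(I):8.2f}")

As the current weakens towards theta/R, the spike time grows without bound, so the spike slides smoothly past the end of any finite trial rather than vanishing abruptly; this is the flavor of non-disruptive (dis-)appearance the abstract describes.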
Feature Extraction using Spiking Convolutional Neural Networks
Spiking neural networks are biologically plausible counterparts of artificial neural networks. Artificial neural networks are usually trained with stochastic gradient descent, whereas spiking neural networks are trained with spike-timing-dependent plasticity. Training deep convolutional neural networks is a memory- and power-intensive job, and spiking networks could potentially help in reducing the power usage. There is a large pool of tools from which to choose for training artificial neural networks of any size; the available tools for simulating spiking neural networks, on the other hand, are geared towards computational neuroscience applications and are not suitable for real-life applications. In this work we implement a spiking CNN using TensorFlow to examine the behaviour of the network, and we study catastrophic forgetting in the spiking CNN and the weight initialization problem in R-STDP using the MNIST data set. We also report the classification accuracies achieved on the N-MNIST and MNIST data sets.
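For reference, the pair-based STDP rule mentioned above has a standard textbook form; a minimal sketch (illustrative constants, not the paper's exact rule or parameters):

    import numpy as np

    A_plus, A_minus = 0.01, 0.012     # learning-rate amplitudes (assumed)
    tau_plus, tau_minus = 20.0, 20.0  # plasticity time constants, ms (assumed)

    def stdp_dw(t_pre, t_post):
        """Weight change for one pre/post spike pair."""
        dt = t_post - t_pre
        if dt >= 0:  # pre fires before post: potentiation
            return A_plus * np.exp(-dt / tau_plus)
        return -A_minus * np.exp(dt / tau_minus)  # otherwise: depression

    print(stdp_dw(10.0, 15.0))  # causal pair, positive update
    print(stdp_dw(15.0, 10.0))  # anti-causal pair, negative update

R-STDP additionally modulates such updates with a reward signal.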
Error-triggered Three-Factor Learning Dynamics for Crossbar Arrays
Recent breakthroughs suggest that local, approximate gradient descent
learning is compatible with Spiking Neural Networks (SNNs). Although SNNs can
be scalably implemented using neuromorphic VLSI, an architecture that can learn
in-situ as accurately as conventional processors is still missing. Here, we
propose a subthreshold circuit architecture designed through insights obtained
from machine learning and computational neuroscience that could achieve such
accuracy. Using a surrogate gradient learning framework, we derive local,
error-triggered learning dynamics compatible with crossbar arrays and the
temporal dynamics of SNNs. The derivation reveals that circuits used for
inference and training dynamics can be shared, which simplifies the circuit and
suppresses the effects of fabrication mismatch. We present SPICE simulations
in the XFAB 180 nm process, as well as large-scale simulations of spiking
neural networks on event-based benchmarks, including a gesture recognition
task. Our results show that the number of updates can be reduced a hundredfold
compared to the standard rule while achieving performance on par with the
state of the art.
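A sketch of what an error-triggered three-factor update looks like in software (a hedged illustration of the general scheme; the paper derives circuit-level dynamics, and all names and constants below are assumptions):

    import numpy as np

    def surrogate_deriv(v, v_th=1.0, beta=5.0):
        """Fast-sigmoid-style surrogate for the spike function's derivative."""
        return 1.0 / (beta * np.abs(v - v_th) + 1.0) ** 2

    def error_triggered_update(w, pre_trace, v_post, error, lr=1e-3, err_th=0.1):
        """Three factors: error (global), surrogate derivative (post), trace (pre).
        The weight write is skipped unless the error crosses a threshold."""
        if np.abs(error) < err_th:
            return w, False  # error too small: no update this step
        dw = -lr * error * surrogate_deriv(v_post) * pre_trace
        return w + dw, True

    w = np.zeros(4)
    w, updated = error_triggered_update(w, np.ones(4), v_post=0.9, error=0.5)
    print(updated, w)

Skipping the write whenever the error signal is below threshold is what makes a hundredfold reduction in the number of weight updates plausible.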
A Deep Unsupervised Feature Learning Spiking Neural Network with Binarized Classification Layers for the EMNIST Classification
End-user AI is trained on large server farms with data collected from the users. With the ever-increasing demand for IoT devices, there is a need for deep learning approaches that can be implemented at the edge in an energy-efficient manner. In this work we approach this using spiking neural networks. The unsupervised learning technique of spike-timing-dependent plasticity (STDP) and binary activations are used to extract features from spiking input data, while gradient descent (backpropagation) is used only on the output layer to perform training for classification. The accuracies obtained for the balanced EMNIST data set compare favorably with other approaches. The effect of stochastic gradient descent (SGD) approximations on the learning capabilities of our network is also explored.
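The overall pipeline is a hybrid: frozen, locally learned features feed a readout trained by gradient descent. A minimal sketch (the layer sizes, the random stand-in for STDP-learned weights, and all names are hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)
    W_stdp = rng.standard_normal((784, 256))  # stand-in for frozen STDP features
    W_out = np.zeros((256, 47))               # trainable readout, 47 EMNIST classes

    def features(x):
        return (x @ W_stdp > 0).astype(np.float64)  # binary activations

    def train_step(W_out, x, y_onehot, lr=0.1):
        h = features(x)                            # frozen feature extraction
        logits = h @ W_out
        p = np.exp(logits - logits.max())
        p /= p.sum()                               # softmax probabilities
        W_out -= lr * np.outer(h, p - y_onehot)    # cross-entropy gradient step
        return W_out

    x = rng.random(784)
    y = np.zeros(47); y[3] = 1.0
    W_out = train_step(W_out, x, y)

Only W_out receives gradients, so backpropagation never has to pass through the spiking, binary feature layer.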
Training Spiking Neural Networks Using Lessons From Deep Learning
The brain is the perfect place to look for inspiration to develop more
efficient neural networks. The inner workings of our synapses and neurons
provide a glimpse at what the future of deep learning might look like. This
paper serves as a tutorial and perspective showing how to apply the lessons
learnt from several decades of research in deep learning, gradient descent,
backpropagation and neuroscience to biologically plausible spiking neural
networks. We also explore the delicate interplay between encoding data
as spikes and the learning process; the challenges and solutions of applying
gradient-based learning to spiking neural networks; the subtle link between
temporal backpropagation and spike-timing-dependent plasticity; and how deep
learning might move towards biologically plausible online learning. Some ideas
are well accepted and commonly used amongst the neuromorphic engineering
community, while others are presented or justified for the first time here. A
series of companion interactive tutorials complementary to this paper using our
Python package, snnTorch, are also made available:
https://snntorch.readthedocs.io/en/latest/tutorials/index.htm
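A minimal taste of the snnTorch workflow the tutorials walk through (a sketch under assumed layer sizes; the complete versions live at the link above):

    import torch
    import torch.nn as nn
    import snntorch as snn
    from snntorch import surrogate

    # Leaky integrate-and-fire layer with a surrogate gradient for the spike.
    fc = nn.Linear(784, 10)
    lif = snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid())

    x = torch.rand(100, 1, 784)  # 100 time steps of (batch=1, 784) input
    mem = lif.init_leaky()
    spikes = []
    for t in range(x.size(0)):         # unroll the network over time
        spk, mem = lif(fc(x[t]), mem)  # gradients flow through the surrogate
        spikes.append(spk)
    out = torch.stack(spikes).sum(0)   # rate-coded readout over the trial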
Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation
Spiking neural networks (SNNs) have garnered considerable interest for
supervised and unsupervised learning applications. This paper deals with the
problem of training multi-layer feedforward SNNs. The non-linear
integrate-and-fire dynamics employed by spiking neurons make it difficult to
train SNNs to generate desired spike trains in response to a given input. To
tackle this, first the problem of training a multi-layer SNN is formulated as
an optimization problem such that its objective function is based on the
deviation in membrane potential rather than the spike arrival instants. Then,
an optimization method named Normalized Approximate Descent (NormAD),
hand-crafted for such non-convex optimization problems, is employed to derive
the iterative synaptic weight update rule. Next, it is reformulated to
efficiently train multi-layer SNNs and shown to effectively perform
spatio-temporal error backpropagation. The learning rule is validated by
training -layer SNNs to solve a spike-based formulation of the XOR problem,
as well as by training -layer SNNs on generic spike-based training problems.
Thus, the new algorithm is a key step towards building deep spiking neural
networks capable of efficient event-triggered learning.
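The core idea of optimizing a deviation in membrane potential, with a normalized step, can be sketched in a few lines (a schematic of the flavor of NormAD, not the paper's exact update; all shapes and constants are assumptions):

    import numpy as np

    def normad_style_update(w, d, v, v_desired, lr=0.1):
        """w: weights; d[t, i]: filtered input of synapse i at time t;
        v, v_desired: actual and desired membrane traces."""
        e = v_desired - v                         # membrane-potential deviation
        grad = e @ d                              # error-weighted input correlation
        norms = np.linalg.norm(d, axis=0) + 1e-12
        return w + lr * grad / norms              # normalized approximate descent

    T, n = 200, 8
    rng = np.random.default_rng(1)
    d = rng.random((T, n)) - 0.5                  # zero-mean filtered inputs
    v_desired = np.sin(np.linspace(0.0, 3.0, T))  # target membrane trace
    w = np.zeros(n)
    for _ in range(50):
        w = normad_style_update(w, d, d @ w, v_desired)
    print(np.mean((v_desired - d @ w) ** 2))      # deviation shrinks over training

Because the objective is a deviation in membrane potential rather than in spike arrival instants, it stays well defined even when the actual and desired spike counts differ.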