Fractionally Predictive Spiking Neurons
Recent experimental work has suggested that the neural firing rate can be
interpreted as a fractional derivative, at least when signal variation induces
neural adaptation. Here, we show that the actual neural spike-train itself can
be considered the fractional derivative, provided that the neural signal is
approximated by a sum of power-law kernels. A simple standard thresholding
spiking neuron suffices to carry out such an approximation, given a suitable
refractory response. Empirically, we find that the online approximation of
signals with a sum of power-law kernels is beneficial for encoding signals with
slowly varying components, like long-memory self-similar signals. For such
signals, the online power-law kernel approximation typically requires less than half the number of spikes to reach a similar SNR, compared with sums of similar but exponentially decaying kernels. As power-law kernels can be accurately
approximated using sums or cascades of weighted exponentials, we demonstrate
that the corresponding decoding of spike-trains by a receiving neuron allows
for natural and transparent temporal signal filtering by tuning the weights of
the decoding kernel.
Comment: 13 pages, 5 figures, in Advances in Neural Information Processing 201
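To make the encoding and decoding idea concrete: a power-law kernel can be fitted by a weighted sum of exponentials, and a receiving neuron can then decode a spike-train by convolving it with that kernel. The sketch below is illustrative only; the exponent, time grid, number of exponentials, and least-squares fit are assumptions, not details taken from the paper.

```python
# Illustrative sketch: approximate a power-law kernel t^(-beta) by a
# weighted sum of exponentials, then decode a toy spike train with it.
# All constants (beta, time grid, number of exponentials) are assumed.
import numpy as np

beta = 0.5                                   # power-law exponent (assumed)
t = np.linspace(0.01, 10.0, 1000)            # time grid; avoids the t=0 singularity
target = t ** (-beta)                        # power-law kernel to approximate

# Basis of exponentials with log-spaced time constants.
taus = np.logspace(-2, 1, 8)
basis = np.exp(-t[:, None] / taus[None, :])  # shape: (len(t), len(taus))

# Least-squares fit of the mixture weights.
weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
kernel = basis @ weights

# Decoding: a receiving neuron reconstructs the signal by convolving the
# incoming spike train with the (approximated) power-law kernel.
spikes = np.zeros_like(t)
spikes[[100, 400, 700]] = 1.0                # toy spike train
decoded = np.convolve(spikes, kernel)[: len(t)]
```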
SuperSpike: Supervised learning in multi-layer spiking neural networks
A vast majority of computation in the brain is performed by spiking neural
networks. Despite the ubiquity of such spiking, we currently lack an
understanding of how biological spiking neural circuits learn and compute
in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning
in temporally coding multi-layer spiking neural networks. First, by using a
surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based
three-factor learning rule capable of training multi-layer networks of
deterministic integrate-and-fire neurons to perform nonlinear computations on
spatiotemporal spike patterns. Second, inspired by recent results on feedback
alignment, we compare the performance of our learning rule under different
credit assignment strategies for propagating output errors to hidden units.
Specifically, we test uniform, symmetric and random feedback, finding that
simpler tasks can be solved with any type of feedback, while more complex tasks
require symmetric feedback. In summary, our results open the door to obtaining
a better scientific understanding of learning and computation in spiking neural
networks by advancing our ability to train them to solve nonlinear problems
involving transformations between different spatiotemporal spike-time patterns.
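As a concrete illustration of the surrogate gradient approach: the non-differentiable spike threshold is kept in the forward pass, while the backward pass substitutes a smooth surrogate. SuperSpike uses the derivative of a fast sigmoid for this; the PyTorch sketch below follows that recipe, though the scale constant and the zero-centered threshold are illustrative choices.

```python
# Minimal PyTorch sketch of a surrogate-gradient spike function in the
# spirit of SuperSpike: a hard threshold in the forward pass, and the
# derivative of a fast sigmoid in the backward pass.
import torch

class SurrGradSpike(torch.autograd.Function):
    scale = 100.0  # steepness of the surrogate gradient (assumed value)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()      # spike when membrane potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid derivative: 1 / (1 + scale * |v|)^2
        surrogate = 1.0 / (1.0 + SurrGradSpike.scale * v.abs()) ** 2
        return grad_output * surrogate

spike_fn = SurrGradSpike.apply  # usable wherever a spiking nonlinearity is needed
```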
Spiking PointNet: Spiking Neural Networks for Point Clouds
Recently, Spiking Neural Networks (SNNs), enjoying extreme energy efficiency,
have drawn much research attention on 2D visual recognition and shown gradually
increasing application potential. However, whether SNNs can be generalized to 3D recognition remains underexplored. To this end, we present Spiking PointNet in this paper, the first spiking neural model for efficient deep learning on point clouds. We identify two major obstacles limiting the application of SNNs to point clouds: the intrinsic optimization difficulty of SNNs, which impedes training a large spiking model with many time steps, and the expensive memory and computation cost of PointNet, which makes training a large spiking point model impractical. To solve both problems simultaneously, we present a trained-less but learning-more paradigm for Spiking PointNet, with theoretical justifications and in-depth experimental analysis. Specifically, our Spiking PointNet is trained with only a single time step, yet obtains better performance under multiple-time-step inference than the same model trained directly with multiple time steps. We conduct various experiments on ModelNet10 and ModelNet40 to demonstrate the effectiveness of Spiking PointNet. Notably, our Spiking PointNet can even outperform its ANN counterpart, which is rare in the SNN field and suggests a promising direction for future work. Moreover, Spiking PointNet shows impressive speedup and storage savings in the training phase.
Comment: Accepted by NeurIPS
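The abstract does not spell out the training loop, but the core of the trained-less but learning-more idea can be sketched schematically: optimize the network for a single time step, then unroll the same weights over several time steps at inference and average the outputs. Everything below (the LIF dynamics, the averaging readout, all names) is a hypothetical illustration, not the paper's implementation.

```python
# Schematic sketch of single-time-step training with multi-time-step
# inference. In practice a surrogate gradient (as in the SuperSpike
# sketch above) is needed to train through the spike threshold.
import torch
import torch.nn as nn

class LIF(nn.Module):
    """Minimal leaky integrate-and-fire unit with a hard reset."""
    def __init__(self, tau=2.0, v_th=1.0):
        super().__init__()
        self.tau, self.v_th, self.v = tau, v_th, None

    def reset(self):
        self.v = None

    def forward(self, x):
        self.v = x if self.v is None else self.v + (x - self.v) / self.tau
        spikes = (self.v >= self.v_th).float()
        self.v = self.v * (1.0 - spikes)      # reset membrane after a spike
        return spikes

def run_snn(model, lif_layers, x, T):
    """Unroll a stateful spiking model for T time steps; average the logits."""
    for lif in lif_layers:
        lif.reset()
    return sum(model(x) for _ in range(T)) / T

# Training uses T=1 (cheap); inference reuses the same weights with T>1:
#   logits = run_snn(model, lif_layers, batch, T=1)   # during training
#   logits = run_snn(model, lif_layers, batch, T=4)   # at inference
```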
Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks
The emergence of brain-inspired neuromorphic computing as a paradigm for edge
AI is motivating the search for high-performance and efficient spiking neural
networks to run on this hardware. However, compared to classical neural
networks in deep learning, current spiking neural networks lack competitive
performance in compelling areas. Here, for sequential and streaming tasks, we
demonstrate how a novel type of adaptive spiking recurrent neural network
(SRNN) is able to achieve state-of-the-art performance compared to other
spiking neural networks, and to approach or even exceed the performance of classical recurrent neural networks (RNNs), while exhibiting sparse activity. From this,
we calculate a 100x energy improvement for our SRNNs over classical RNNs on
the harder tasks. To achieve this, we model standard and adaptive
multiple-timescale spiking neurons as self-recurrent neural units, and leverage
surrogate gradients and auto-differentiation in the PyTorch Deep Learning
framework to efficiently implement backpropagation-through-time, including
learning of the important spiking neuron parameters to adapt our spiking neurons to the tasks.
Comment: 11 pages, 5 figures
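To illustrate what modeling adaptive multiple-timescale spiking neurons as self-recurrent units with learnable parameters can look like: the sketch below implements one plausible adaptive LIF update in PyTorch, with per-neuron membrane and adaptation time constants registered as trainable parameters so that backpropagation-through-time can tune them. The specific update equations and the straight-through surrogate are assumptions, not the paper's exact formulation.

```python
# Sketch of an adaptive spiking neuron with learnable time constants,
# trainable end-to-end with BPTT via PyTorch autograd. The update
# equations and the surrogate gradient are illustrative assumptions.
import torch
import torch.nn as nn

def spike_fn(x):
    """Hard threshold forward; smooth sigmoid gradient via the detach trick."""
    soft = torch.sigmoid(5.0 * x)
    return soft + ((x > 0.0).float() - soft).detach()

class AdaptiveLIFCell(nn.Module):
    def __init__(self, n, tau_m=20.0, tau_adp=200.0, beta=1.8):
        super().__init__()
        # Per-neuron time constants are trained alongside the synaptic weights.
        self.tau_m = nn.Parameter(torch.full((n,), tau_m))
        self.tau_adp = nn.Parameter(torch.full((n,), tau_adp))
        self.beta = beta

    def forward(self, x, v, b, spk):
        alpha = torch.exp(-1.0 / self.tau_m)    # fast membrane decay
        rho = torch.exp(-1.0 / self.tau_adp)    # slow adaptation decay
        b = rho * b + (1.0 - rho) * spk         # recent spiking raises the threshold
        theta = 1.0 + self.beta * b             # adaptive firing threshold
        v = alpha * v + (1.0 - alpha) * x - spk * theta  # integrate, soft reset
        spk = spike_fn(v - theta)
        return spk, v, b

# One step: spk, v, b = cell(x, v, b, spk), starting from zero-valued states.
```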
Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks
Biological neurons communicate with a sparing exchange of pulses - spikes. It
is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks while using so few spikes to communicate. Building on recent insights in
neuroscience, we present an Adapting Spiking Neural Network (ASNN) based on
adaptive spiking neurons. These spiking neurons efficiently encode information
in spike-trains using a form of Asynchronous Pulsed Sigma-Delta coding while
homeostatically optimizing their firing rate. In the proposed paradigm of
spiking neuron computation, neural adaptation is tightly coupled to synaptic
plasticity, to ensure that downstream neurons can correctly decode upstream
spiking neurons. We show that this type of network is inherently able to carry
out asynchronous and event-driven neural computation, while performing
identically to corresponding artificial neural networks (ANNs). In particular, we show that these adaptive spiking neurons can serve as drop-in replacements for ReLU neurons in standard feedforward ANNs. We demonstrate
that this can also be successfully applied to a ReLU based deep convolutional
neural network for classifying the MNIST dataset. The ASNN thus outperforms current spiking neural network (SNN) implementations, while responding (up
to) an order of magnitude faster and using an order of magnitude fewer spikes.
Additionally, in a streaming setting where frames are continuously classified,
we show that the ASNN requires substantially fewer network updates as compared
to the corresponding ANN.
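To illustrate the flavor of the Asynchronous Pulsed Sigma-Delta coding described above: a neuron emits a spike only when its analog (ReLU) activation drifts away from the receiver's running reconstruction by more than a threshold, so unchanging inputs cost no spikes. The sketch below is a simplified, fixed-threshold toy; in the ASNN the effective threshold adapts homeostatically, which this version omits.

```python
# Toy sketch of sigma-delta-style spike coding of a ReLU activation:
# spike only when the signal exceeds the running reconstruction by a
# threshold. Fixed threshold and decay are simplifying assumptions.
import numpy as np

def sigma_delta_encode(signal, threshold=0.1, decay=0.99):
    reconstruction, spikes = 0.0, []
    for x in signal:
        x = max(x, 0.0)                       # ReLU activation being encoded
        reconstruction *= decay               # decoded estimate leaks over time
        spike = x - reconstruction > threshold
        if spike:
            reconstruction += threshold       # receiver adds one decoding kernel
        spikes.append(int(spike))
    return np.array(spikes)

def sigma_delta_decode(spikes, threshold=0.1, decay=0.99):
    """Receiving neuron: leaky sum of fixed-size decoding kernels."""
    out, reconstruction = [], 0.0
    for s in spikes:
        reconstruction = reconstruction * decay + s * threshold
        out.append(reconstruction)
    return np.array(out)

signal = np.clip(np.sin(np.linspace(0.0, 3.0, 100)), 0.0, None)
decoded = sigma_delta_decode(sigma_delta_encode(signal))
```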