Optimizing the energy consumption of spiking neural networks for neuromorphic applications
In the last few years, spiking neural networks have been demonstrated to
perform on par with regular convolutional neural networks. Several works have
proposed methods to convert a pre-trained CNN to a Spiking CNN without a
significant sacrifice of performance. We demonstrate first that
quantization-aware training of CNNs leads to better accuracy in SNNs. One of
the benefits of converting CNNs to spiking CNNs is to leverage the sparse
computation of SNNs and consequently perform equivalent computation at a lower
energy consumption. Here we propose an efficient optimization strategy to train
spiking networks at lower energy consumption, while maintaining similar
accuracy levels. We demonstrate results on the MNIST-DVS and CIFAR-10 datasets.
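The abstract above describes training spiking networks with an energy objective alongside accuracy. As a minimal illustrative sketch (not the paper's actual method), one common proxy is to simulate leaky integrate-and-fire (LIF) neurons and penalize the total spike count in the loss; the function names and the regularization weight `lam` here are assumptions for illustration:

```python
import numpy as np

def lif_forward(inp, v, threshold=1.0, leak=0.9):
    """One timestep of a LIF neuron layer.

    inp: input current per neuron; v: membrane potential (updated in place).
    Returns the binary spike vector for this timestep.
    """
    v[:] = leak * v + inp
    spikes = (v >= threshold).astype(np.float32)
    v[spikes == 1] = 0.0          # reset neurons that fired
    return spikes

def energy_regularized_loss(task_loss, spike_counts, lam=1e-3):
    """Total loss = task loss + penalty on emitted spikes.

    Spike count is a simple proxy for energy on event-driven hardware;
    lam trades classification accuracy against network activity.
    """
    return task_loss + lam * float(np.sum(spike_counts))
```

In practice the spike-count term would be accumulated over all layers and timesteps of the simulated network before being added to the task loss.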
Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection
Over the past decade, deep neural networks (DNNs) have demonstrated
remarkable performance in a variety of applications. As we try to solve more
advanced problems, increasing demands for computing and power resources have
become inevitable. Spiking neural networks (SNNs) have attracted widespread
interest as the third-generation of neural networks due to their event-driven
and low-powered nature. SNNs, however, are difficult to train, mainly owing to
their complex dynamics of neurons and non-differentiable spike operations.
Furthermore, their applications have been limited to relatively simple tasks
such as image classification. In this study, we investigate the performance
degradation of SNNs in a more challenging regression problem (i.e., object
detection). Through our in-depth analysis, we introduce two novel methods:
channel-wise normalization and signed neuron with imbalanced threshold, both of
which provide fast and accurate information transmission for deep SNNs.
Consequently, we present the first spike-based object detection model, called
Spiking-YOLO. Our experiments show that Spiking-YOLO achieves remarkable
results that are comparable (up to 98%) to those of Tiny YOLO on non-trivial
datasets, PASCAL VOC and MS COCO. Furthermore, Spiking-YOLO on a neuromorphic
chip consumes approximately 280 times less energy than Tiny YOLO and converges
2.3 to 4 times faster than previous SNN conversion methods. Comment: Accepted to AAAI 2020.
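The channel-wise normalization mentioned above scales weights by per-channel maximum activations rather than a single layer-wise maximum, so channels with small activations are not starved of spikes. The following is a simplified sketch of that idea on a hypothetical dense layer (the array shapes and function name are assumptions, not the paper's implementation):

```python
import numpy as np

def channel_wise_normalize(weights, channel_max_acts, eps=1e-8):
    """Scale each output channel's weights by that channel's maximum
    activation observed on training data.

    Compared with dividing every weight by one layer-wide maximum, this
    keeps low-activation channels from being crushed by a single outlier
    channel, which speeds up information transfer in the converted SNN.

    weights: (out_channels, in_channels) array
    channel_max_acts: (out_channels,) maximum activation per channel
    """
    scale = np.maximum(channel_max_acts, eps)[:, None]
    return weights / scale
```

The per-channel maxima would typically be collected by running the trained source network over a sample of the training data before conversion.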
Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals
Spiking neural networks (SNNs) enable power-efficient implementations due to
their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN
that uses unsupervised learning to extract discriminative features from speech
signals, which can subsequently be used in a classifier. The architecture
consists of a spiking convolutional/pooling layer followed by a fully connected
spiking layer for feature discovery. The convolutional layer of leaky,
integrate-and-fire (LIF) neurons represents primary acoustic features. The
fully connected layer is equipped with a probabilistic spike-timing-dependent
plasticity learning rule. This layer represents the discriminative features
through probabilistic LIF neurons. To assess the discriminative power of the
learned features, they are used in a hidden Markov model (HMM) for spoken digit
recognition. The experimental results show performance above 96% that compares
favorably with popular statistical feature extraction methods. Our results
provide a novel demonstration of unsupervised feature acquisition in an SNN.
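The abstract above relies on a probabilistic spike-timing-dependent plasticity (STDP) rule for unsupervised feature discovery. As a generic, simplified sketch of STDP with soft weight bounds (not the paper's exact rule; the learning rates `a_plus`/`a_minus` are illustrative assumptions):

```python
import numpy as np

def stdp_update(w, pre_spiked_before, a_plus=0.01, a_minus=0.01):
    """One simplified STDP step with soft weight bounds.

    pre_spiked_before: boolean array, True where the presynaptic neuron
    fired before the postsynaptic one (causal pair -> potentiate),
    False otherwise (acausal pair -> depress). The w * (1 - w) factor
    keeps weights inside [0, 1] and slows learning near the bounds.
    """
    dw = np.where(pre_spiked_before, a_plus, -a_minus) * w * (1.0 - w)
    return np.clip(w + dw, 0.0, 1.0)
```

Repeated over many input presentations, causal input-output spike pairings are reinforced, so each postsynaptic neuron becomes selective for a recurring input pattern without any labels.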
Efficient Computation in Adaptive Artificial Spiking Neural Networks
Artificial Neural Networks (ANNs) are bio-inspired models of neural
computation that have proven highly effective. Still, ANNs lack a natural
notion of time, and neural units in ANNs exchange analog values in a
frame-based manner, a computationally and energetically inefficient form of
communication. This contrasts sharply with biological neurons that communicate
sparingly and efficiently using binary spikes. While artificial Spiking Neural
Networks (SNNs) can be constructed by replacing the units of an ANN with
spiking neurons, the current performance is far from that of deep ANNs on hard
benchmarks and these SNNs use much higher firing rates compared to their
biological counterparts, limiting their efficiency. Here we show how spiking
neurons that employ an efficient form of neural coding can be used to construct
SNNs that match high-performance ANNs and exceed state-of-the-art in SNNs on
important benchmarks, while requiring much lower average firing rates. For
this, we use spike-time coding based on the firing-rate-limiting adaptation
phenomenon observed in biological spiking neurons. This phenomenon can be
captured in adapting spiking neuron models, for which we derive the effective
transfer function. Neural units in ANNs trained with this transfer function can
be substituted directly with adaptive spiking neurons, and the resulting
Adaptive SNNs (AdSNNs) can carry out inference in deep neural networks using up
to an order of magnitude fewer spikes compared to previous SNNs. Adaptive
spike-time coding additionally allows for the dynamic control of neural coding
precision: we show how a simple model of arousal in AdSNNs further halves the
average required firing rate and this notion naturally extends to other forms
of attention. AdSNNs thus hold promise as a novel and efficient model for
neural computation that naturally fits to temporally continuous and
asynchronous applications.
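The adaptation phenomenon described above can be caricatured with a threshold that grows after each spike and decays back toward its baseline, so strong inputs are coded with fewer, better-spaced spikes. This is a minimal sketch of such an adapting neuron, not the paper's derived transfer function; the parameter names and values are illustrative assumptions:

```python
def adaptive_spikes(signal, theta0=1.0, mf=0.5, tau=0.9):
    """Encode a signal with an adaptive-threshold spiking neuron.

    After each spike the threshold grows by mf * theta (spike-frequency
    adaptation) and then relaxes back toward the baseline theta0 with
    factor tau, which limits the firing rate for sustained inputs.
    Returns a binary spike train, one entry per input sample.
    """
    v, theta, spikes = 0.0, theta0, []
    for x in signal:
        v += x
        if v >= theta:
            spikes.append(1)
            v -= theta            # subtract-and-reset
            theta += mf * theta   # adapt the threshold upward
        else:
            spikes.append(0)
        theta = theta0 + tau * (theta - theta0)  # decay toward baseline
    return spikes
```

Raising `mf` (or slowing the decay) trades coding precision for a lower average firing rate, which is the precision/efficiency knob the abstract alludes to with its arousal model.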
Algorithm and Hardware Design of Discrete-Time Spiking Neural Networks Based on Back Propagation with Binary Activations
We present a new back propagation based training algorithm for discrete-time
spiking neural networks (SNN). Inspired by recent deep learning algorithms on
binarized neural networks, binary activation with a straight-through gradient
estimator is used to model the leaky integrate-fire spiking neuron, overcoming
the difficulty in training SNNs using back propagation. Two SNN training
algorithms are proposed: (1) SNN with discontinuous integration, which is
suitable for rate-coded input spikes, and (2) SNN with continuous integration,
which is more general and can handle input spikes with temporal information.
Neuromorphic hardware designed in 40nm CMOS exploits the spike sparsity and
demonstrates high classification accuracy (>98% on MNIST) and low energy
(48.4-773 nJ/image). Comment: 2017 IEEE Biomedical Circuits and Systems (BioCAS).
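The straight-through gradient estimator mentioned above lets backpropagation pass through the non-differentiable spike function. A common sketch of this idea (the gating `width` and function names are assumptions for illustration, not the paper's exact estimator):

```python
import numpy as np

def binary_activation_forward(v, threshold=1.0):
    """Spike = hard threshold on the membrane potential.
    This step function has zero gradient almost everywhere."""
    return (v >= threshold).astype(np.float32)

def binary_activation_backward(grad_out, v, threshold=1.0, width=1.0):
    """Straight-through estimator: pass the upstream gradient through
    unchanged wherever the potential lies within `width` of the
    threshold, and block it elsewhere. This surrogate gradient makes
    end-to-end backpropagation through spiking layers possible."""
    gate = (np.abs(v - threshold) <= width).astype(np.float32)
    return grad_out * gate
```

In a full training loop the forward pass uses the hard threshold (so the network sees real binary spikes) while the backward pass substitutes the gated gradient.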
Neuro-memristive Circuits for Edge Computing: A review
The volume, veracity, variability, and velocity of data produced from the
ever-increasing network of sensors connected to the Internet pose challenges for
power management, scalability, and sustainability of cloud computing
infrastructure. Increasing the data processing capability of edge computing
devices at lower power requirements can reduce several overheads for cloud
computing solutions. This paper provides a review of neuromorphic
CMOS-memristive architectures that can be integrated into edge computing
devices. We discuss why the neuromorphic architectures are useful for edge
devices and show the advantages, drawbacks and open problems in the field of
neuro-memristive circuits for edge computing.
Multi-layered Spiking Neural Network with Target Timestamp Threshold Adaptation and STDP
Spiking neural networks (SNNs) are good candidates to produce
ultra-energy-efficient hardware. However, the performance of these models is
currently behind traditional methods. Introducing multi-layered SNNs is a
promising way to reduce this gap. We propose in this paper a new threshold
adaptation system which uses a timestamp objective at which neurons should
fire. We show that our method leads to state-of-the-art classification rates on
the MNIST dataset (98.60%) and the Faces/Motorbikes dataset (99.46%) with an
unsupervised SNN followed by a linear SVM. We also investigate the sparsity
level of the network by testing different inhibition policies and STDP rules.
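The threshold adaptation described above steers each neuron toward firing at a target timestamp. As a minimal sketch of that feedback (the update rule, sign convention, and rate `eta` are illustrative assumptions, not the paper's exact system):

```python
def adapt_threshold(threshold, fire_time, target_time, eta=0.05):
    """Nudge a neuron's firing threshold toward a target firing time.

    Firing earlier than the target timestamp means the threshold is too
    low, so raise it; firing later than the target means lower it.
    eta controls the adaptation speed.
    """
    return threshold + eta * (target_time - fire_time)
```

Applied after each input presentation, this drives the neuron population toward a common firing-time objective, which also regulates overall spike activity.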
Learning to Recognize Actions from Limited Training Examples Using a Recurrent Spiking Neural Model
A fundamental challenge in machine learning today is to build a model that
can learn from few examples. Here, we describe a reservoir based spiking neural
model for learning to recognize actions with a limited number of labeled
videos. First, we propose a novel encoding, inspired by how microsaccades
influence visual perception, to extract spike information from raw video data
while preserving the temporal correlation across different frames. Using this
encoding, we show that the reservoir generalizes its rich dynamical activity
toward signature action/movements enabling it to learn from few training
examples. We evaluate our approach on the UCF-101 dataset. Our experiments
demonstrate that our proposed reservoir achieves 81.3%/87% Top-1/Top-5
accuracy, respectively, on the 101-class data while requiring just 8 video
examples per class for training. Our results establish a new benchmark for
action recognition from limited video examples for spiking neural models while
yielding competitive accuracy with respect to state-of-the-art non-spiking
neural models. Comment: 13 figures (includes supplementary information).