RetinaNet Object Detector based on Analog-to-Spiking Neural Network Conversion
The paper proposes a method to convert a deep learning object detector into
an equivalent spiking neural network. The aim is to provide a conversion
framework that is not constrained to shallow network structures and
classification problems as in state-of-the-art conversion libraries. The
results show that models of higher complexity, such as the RetinaNet object
detector, can be converted with limited loss in performance. Comment: 5 pages, submitted to the ISCMI 2021 conference.
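A standard ingredient in such conversions is replacing each ReLU unit with an integrate-and-fire (IF) neuron whose firing rate over a simulation window approximates the ReLU output. A minimal sketch of that correspondence, illustrative only and not the paper's RetinaNet pipeline:

```python
def if_neuron_rate(input_current, threshold=1.0, T=100):
    """Simulate an integrate-and-fire neuron for T steps and return its
    firing rate, which approximates ReLU(input_current)/threshold."""
    v = 0.0
    spikes = 0
    for _ in range(T):
        v += input_current          # integrate a constant input current
        if v >= threshold:          # fire, then reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / T

print(if_neuron_rate(-0.5))  # 0.0 -- negative inputs never fire, like ReLU
print(if_neuron_rate(0.3))   # ~0.3 -- rate tracks the positive input
```

The rate saturates at one spike per step, which is one reason conversion frameworks rescale (normalize) weights layer by layer so that activations stay within the representable range.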
Spiking Neural Networks -- Part I: Detecting Spatial Patterns
Spiking Neural Networks (SNNs) are biologically inspired machine learning
models that build on dynamic neuronal models processing binary and sparse
spiking signals in an event-driven, online fashion. SNNs can be implemented on
neuromorphic computing platforms that are emerging as energy-efficient
co-processors for learning and inference. This is the first of a series of
three papers that introduce SNNs to an audience of engineers by focusing on
models, algorithms, and applications. In this first paper, we first cover
neural models used for conventional Artificial Neural Networks (ANNs) and SNNs.
Then, we review learning algorithms and applications for SNNs that aim at
mimicking the functionality of ANNs by detecting or generating spatial patterns
in rate-encoded spiking signals. We specifically discuss ANN-to-SNN conversion
and neural sampling. Finally, we validate the capabilities of SNNs for
detecting and generating spatial patterns through experiments. Comment: Submitted.
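Rate-encoded spiking signals of the kind discussed in this paper can be produced by treating each analog value as a per-step firing probability; averaging the resulting binary train recovers the value. A minimal sketch (the `rate_encode` helper is an assumption for illustration, not an interface from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(values, T=200):
    """Encode analog values in [0, 1] as binary spike trains of length T,
    using each value as the per-step firing probability (rate code)."""
    values = np.asarray(values)
    return (rng.random((T,) + values.shape) < values).astype(np.uint8)

x = np.array([0.1, 0.5, 0.9])   # e.g. pixel intensities
spikes = rate_encode(x)         # shape (T, 3): binary and sparse
rates = spikes.mean(axis=0)     # decoding: empirical firing rate
print(rates)                    # approximately recovers x
```

The binary, sparse output is what makes event-driven neuromorphic hardware efficient: most entries are zero, so most synaptic operations are skipped.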
BiSNN: Training Spiking Neural Networks with Binary Weights via Bayesian Learning
Artificial Neural Network (ANN)-based inference on battery-powered devices
can be made more energy-efficient by restricting the synaptic weights to be
binary, hence eliminating the need to perform multiplications. An alternative,
emerging approach relies on the use of Spiking Neural Networks (SNNs),
biologically inspired, dynamic, event-driven models that enhance energy
efficiency via the use of binary, sparse activations. In this paper, an SNN
model is introduced that combines the benefits of temporally sparse binary
activations and of binary weights. Two learning rules are derived, the first
based on the combination of straight-through and surrogate gradient techniques,
and the second based on a Bayesian paradigm. Experiments quantify the
performance loss with respect to full-precision implementations, and
demonstrate the advantage of the Bayesian paradigm in terms of accuracy and
calibration. Comment: Submitted.
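The straight-through component of the first learning rule can be illustrated compactly: keep real-valued latent weights, binarize them with sign() in the forward pass, and treat sign() as the identity in the backward pass. The NumPy sketch below shows only that idea; the values, names, and learning rate are illustrative assumptions, not the paper's BiSNN rule:

```python
import numpy as np

def binarize(w):
    """Forward pass: sign of the real-valued latent weights (+1/-1)."""
    return np.where(w >= 0, 1.0, -1.0)

def ste_update(w, grad_wrt_binary, lr=0.1, clip=1.0):
    """Straight-through estimator: pass the gradient through the
    non-differentiable sign() as if it were the identity, then clip
    the latent weights so they stay near the binarization point."""
    w = w - lr * grad_wrt_binary
    return np.clip(w, -clip, clip)

w = np.array([0.2, -0.7, 0.05])                    # latent (full-precision) weights
wb = binarize(w)                                   # [ 1., -1.,  1.] used at inference
w = ste_update(w, grad_wrt_binary=np.array([3.0, -0.5, -1.0]))
print(binarize(w))                                 # a latent weight has crossed zero
```

The latent weights accumulate small gradient steps, so a binary weight flips only once enough evidence has built up; the paper's Bayesian alternative instead maintains a distribution over the binary weights.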
Asynchronous spiking neurons, the natural key to exploit temporal sparsity
Inference of Deep Neural Networks for stream-signal (video/audio) processing on edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous stateful processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with stateful neurons allows exploitation of the sparsity present in natural signals. This paper explains three different types of sparsity and proposes an inference algorithm that exploits all of them in the execution of already-trained networks. Our experiments in three different applications (handwritten digit recognition, autonomous steering, and hand-gesture recognition) show that this model of inference reduces the number of required operations for sparse input data by one to two orders of magnitude. Additionally, because the processing is fully asynchronous, this type of inference can run on fully distributed and scalable neuromorphic hardware platforms.
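The temporal-sparsity exploitation described above can be sketched as delta-style, event-driven computation: each layer keeps state and updates its output only from the inputs that changed since the previous frame, yielding the same result as a dense recompute with far fewer operations. The `DeltaLayer` class below is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

class DeltaLayer:
    """Stateful linear layer that, instead of recomputing y = W @ x for
    every frame, accumulates only the contributions of inputs that changed
    (event-driven delta inference). The result is identical to a dense
    matrix product, but the work scales with the number of changed inputs."""
    def __init__(self, W):
        self.W = W
        self.x_prev = np.zeros(W.shape[1])
        self.y = np.zeros(W.shape[0])

    def forward(self, x):
        delta = x - self.x_prev
        changed = np.nonzero(delta)[0]              # events: inputs that moved
        self.y = self.y + self.W[:, changed] @ delta[changed]
        self.x_prev = x.copy()
        return self.y, len(changed)

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8))
layer = DeltaLayer(W)
x1 = rng.standard_normal(8)
y1, ops1 = layer.forward(x1)        # first frame: all 8 inputs are new
x2 = x1.copy(); x2[3] += 0.5        # next frame: a single input changed
y2, ops2 = layer.forward(x2)        # only 1 weight column is touched
print(ops1, ops2)                   # 8 1
assert np.allclose(y2, W @ x2)      # matches the dense computation exactly
```

On asynchronous neuromorphic hardware, each changed input would trigger its column update independently, with no global frame clock; the NumPy version above only models the arithmetic savings.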