Spiking Neural Networks -- Part III: Neuromorphic Communications
Synergies between wireless communications and artificial intelligence are
increasingly motivating research at the intersection of the two fields. On the
one hand, the presence of more and more wirelessly connected devices, each with
its own data, is driving efforts to export advances in machine learning (ML)
from high-performance computing facilities, where information is stored and
processed in a single location, to distributed, privacy-minded processing at
the end user. On the other hand, ML can address algorithm and model deficits in
the optimization of communication protocols. However, implementing ML models
for learning and inference on battery-powered devices that are connected via
bandwidth-constrained channels remains challenging. This paper explores two
ways in which Spiking Neural Networks (SNNs) can help address these open
problems. First, we discuss federated learning for the distributed training of
SNNs, and then describe the integration of neuromorphic sensing, SNNs, and
impulse radio technologies for low-power remote inference.
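
To make the distributed-training setting above more concrete, here is a minimal Python sketch of a FedAvg-style round in which each device updates a shared synaptic weight matrix locally and a server averages the results. The function names, the fake local gradient, and the plain averaging rule are illustrative assumptions, not the protocol proposed in the paper:

    # Minimal FedAvg-style sketch for distributed SNN training (illustrative only).
    import numpy as np

    def local_update(weights, local_data, lr=0.01):
        """Placeholder for one device's local training step.
        A real client would run surrogate-gradient training on its own
        spiking data; here the gradient is faked with random noise."""
        fake_grad = np.random.randn(*weights.shape) * 0.1
        return weights - lr * fake_grad

    def federated_round(global_weights, clients_data):
        """One communication round: each device trains locally, the server averages."""
        client_weights = [local_update(global_weights, d) for d in clients_data]
        return np.mean(client_weights, axis=0)

    if __name__ == "__main__":
        w = np.zeros((4, 4))          # shared synaptic weight matrix
        clients = [None] * 8          # stand-ins for per-device datasets
        for _ in range(5):            # five communication rounds
            w = federated_round(w, clients)
        print(w.shape)
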
Spiking Neural Networks -- Part I: Detecting Spatial Patterns
Spiking Neural Networks (SNNs) are biologically inspired machine learning
models that build on dynamic neuronal models processing binary and sparse
spiking signals in an event-driven, online fashion. SNNs can be implemented on
neuromorphic computing platforms that are emerging as energy-efficient
co-processors for learning and inference. This is the first of a series of
three papers that introduce SNNs to an audience of engineers by focusing on
models, algorithms, and applications. In this first paper, we cover
neural models used for conventional Artificial Neural Networks (ANNs) and SNNs.
Then, we review learning algorithms and applications for SNNs that aim at
mimicking the functionality of ANNs by detecting or generating spatial patterns
in rate-encoded spiking signals. We specifically discuss ANN-to-SNN conversion
and neural sampling. Finally, we validate the capabilities of SNNs for
detecting and generating spatial patterns through experiments.
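
As a concrete illustration of rate-encoded spiking signals and spatial-pattern readout, the following minimal sketch encodes a real-valued input as Bernoulli spike trains and classifies it by output spike counts with an integrate-and-fire readout. The encoding, the single readout layer, and all parameter values are illustrative assumptions rather than the conversion or neural-sampling methods reviewed in the paper:

    # Rate encoding and spike-count readout (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)

    def rate_encode(x, num_steps=100):
        """Encode values in [0, 1] as Bernoulli (approximately Poisson) spike trains."""
        return (rng.random((num_steps, x.size)) < x).astype(np.int8)

    def spiking_readout(spikes, weights, threshold=1.0):
        """Integrate-and-fire readout: accumulate weighted input spikes, emit
        output spikes on threshold crossings, and return per-class spike counts."""
        potential = np.zeros(weights.shape[1])
        counts = np.zeros(weights.shape[1])
        for s_t in spikes:
            potential += s_t @ weights
            fired = potential >= threshold
            counts += fired
            potential[fired] = 0.0     # reset after firing
        return counts

    x = np.array([0.9, 0.1, 0.8, 0.2])         # toy input intensities
    W = rng.normal(size=(4, 2))                # toy weights for two output classes
    print(spiking_readout(rate_encode(x), W))  # class with most spikes "wins"
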
BiSNN: Training Spiking Neural Networks with Binary Weights via Bayesian Learning
Artificial Neural Network (ANN)-based inference on battery-powered devices
can be made more energy-efficient by restricting the synaptic weights to be
binary, hence eliminating the need to perform multiplications. An alternative,
emerging approach relies on the use of Spiking Neural Networks (SNNs),
biologically inspired, dynamic, event-driven models that enhance energy
efficiency via the use of binary, sparse activations. In this paper, an SNN
model is introduced that combines the benefits of temporally sparse binary
activations and of binary weights. Two learning rules are derived, the first
based on the combination of straight-through and surrogate gradient techniques,
and the second based on a Bayesian paradigm. Experiments quantify the
performance loss with respect to full-precision implementations, and
demonstrate the advantage of the Bayesian paradigm in terms of accuracy and
calibration.
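
The first rule mentioned above combines straight-through and surrogate gradient estimators; the sketch below shows the generic shape of such a combination for a single binary-weight spiking neuron. The sigmoid surrogate, the toy loss, and the plain SGD loop are illustrative assumptions, not the exact rule derived in the paper:

    # Straight-through binary weights + surrogate spike gradient (illustrative).
    import numpy as np

    def binarize(w_real):
        """Forward pass uses sign(w); the straight-through estimator passes the
        gradient to w_real unchanged."""
        return np.where(w_real >= 0, 1.0, -1.0)

    def surrogate_grad(v, threshold=1.0, beta=5.0):
        """Sigmoid-based surrogate for the derivative of the spike nonlinearity."""
        s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
        return beta * s * (1.0 - s)

    rng = np.random.default_rng(1)
    w_real = rng.normal(size=4)        # latent real-valued weights
    x = rng.random(4)                  # one input spike pattern
    target, lr = 1.0, 0.1

    for _ in range(20):
        v = x @ binarize(w_real)       # membrane potential with binary weights
        spike = float(v >= 1.0)
        err = spike - target
        # Chain rule: dL/dw_real ~ err * surrogate'(v) * x (straight-through on binarize)
        w_real -= lr * err * surrogate_grad(v) * x
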
Bayesian Continual Learning via Spiking Neural Networks
Among the main features of biological intelligence are energy efficiency,
capacity for continual adaptation, and risk management via uncertainty
quantification. Neuromorphic engineering has thus far been driven mostly by the
goal of implementing energy-efficient machines that take inspiration from the
time-based computing paradigm of biological brains. In this paper, we take
steps towards the design of neuromorphic systems that are capable of adaptation
to changing learning tasks, while producing well-calibrated uncertainty
quantification estimates. To this end, we derive online learning rules for
spiking neural networks (SNNs) within a Bayesian continual learning framework.
In it, each synaptic weight is represented by parameters that quantify the
current epistemic uncertainty resulting from prior knowledge and observed data.
The proposed online rules update the distribution parameters in a streaming
fashion as data are observed. We instantiate the proposed approach for both
real-valued and binary synaptic weights. Experimental results using Intel's
Lava platform show the merits of Bayesian over frequentist learning in terms of
capacity for adaptation and uncertainty quantification. (Accepted for publication in Frontiers in Computational Neuroscience.)
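
To illustrate what a streaming update of distribution parameters can look like, here is a minimal mean-field Gaussian sketch in which each synaptic weight keeps a mean and a variance, a weight value is sampled at each step, and both parameters are nudged by a reparameterized gradient. The Gaussian family, the toy supervision signal, and the learning rate are illustrative assumptions and do not reproduce the rules derived in the paper or the Lava implementation:

    # Streaming mean-field Gaussian update for synaptic weights (illustrative).
    import numpy as np

    rng = np.random.default_rng(2)
    mu = np.zeros(4)                   # per-weight posterior means
    rho = np.full(4, -2.0)             # softplus(rho) = posterior std deviation
    lr = 0.05

    def softplus(z):
        return np.log1p(np.exp(z))

    for step in range(200):            # data arrives one example at a time
        x = rng.random(4)
        y = float(x.sum() > 2.0)       # toy streaming supervision signal
        eps = rng.normal(size=4)
        w = mu + softplus(rho) * eps   # reparameterized weight sample
        pred = 1.0 / (1.0 + np.exp(-(x @ w)))
        err = pred - y                 # gradient of log-loss w.r.t. pre-activation
        grad_w = err * x
        mu -= lr * grad_w                                          # update the mean
        rho -= lr * grad_w * eps * (1.0 / (1.0 + np.exp(-rho)))    # chain rule through softplus
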
Bayesian Inference on Binary Spiking Networks Leveraging Nanoscale Device Stochasticity
Bayesian Neural Networks (BNNs) can overcome the problem of overconfidence
that plagues traditional frequentist deep neural networks, and are hence
considered to be a key enabler for reliable AI systems. However, conventional
hardware realizations of BNNs are resource intensive, requiring the
implementation of random number generators for synaptic sampling. Owing to
their inherent stochasticity during programming and read operations, nanoscale
memristive devices can be directly leveraged for sampling, without the need for
additional hardware resources. In this paper, we introduce a novel Phase Change
Memory (PCM)-based hardware implementation for BNNs with binary synapses. The
proposed architecture consists of separate weight and noise planes, in which
PCM cells are configured and operated to represent the nominal values of
weights and to generate the required noise for sampling, respectively. Using
experimentally observed PCM noise characteristics, for the exemplary Breast
Cancer Dataset classification problem, we obtain hardware accuracy and expected
calibration error matching that of an 8-bit fixed-point (FxP8) implementation,
with projected savings of over 9x in terms of core area transistor
count. (Submitted to and accepted at ISCAS 202.)
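
A minimal software analogue of the weight-plane/noise-plane idea is sketched below: nominal levels are stored in one array, per-read stochastic noise plays the role of the second plane, and each forward pass samples binary synapses by thresholding the noisy read. The Gaussian read-noise model, the threshold, and the averaging over repeated reads are illustrative assumptions, not the measured PCM characteristics used in the paper:

    # Software analogue of weight-plane / noise-plane Bayesian sampling (illustrative).
    import numpy as np

    rng = np.random.default_rng(3)

    p_weight = rng.uniform(0.2, 0.8, size=(4, 4))   # weight plane: nominal levels in [0, 1]
    noise_sigma = 0.1                                # stand-in for read-noise spread

    def sample_binary_weights(p):
        """Each read adds device-like noise to the stored level and thresholds it,
        so synaptic sampling comes 'for free' from stochastic reads."""
        read = p + rng.normal(scale=noise_sigma, size=p.shape)   # noise plane
        return np.where(read > 0.5, 1.0, -1.0)

    x = rng.random(4)
    # Bayesian model averaging: repeat the noisy read to approximate the posterior predictive.
    outs = [np.tanh(x @ sample_binary_weights(p_weight)) for _ in range(32)]
    mean_out = np.mean(outs, axis=0)                 # averaged prediction
    std_out = np.std(outs, axis=0)                   # crude uncertainty estimate
    print(mean_out, std_out)
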