Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding
Precise spike timing as a means of encoding information in neural networks has biological support, and is advantageous over frequency-based codes in that it processes input features on a much shorter time-scale. For these reasons, much recent attention has focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here
we examine the general conditions under which synaptic plasticity most
effectively takes place to support the supervised learning of a precise
temporal code. As part of our analysis we examine two spike-based learning methods: one that relies on an instantaneous error signal to modify synaptic weights in a network (the INST rule), and another that relies on a filtered error signal for smoother synaptic weight modifications (the FILT rule). We test
the accuracy of the solutions provided by each rule with respect to their
temporal encoding precision, and then measure the maximum number of input
patterns they can learn to memorise using the precise timings of individual
spikes as an indication of their storage capacity. Our results demonstrate the
high performance of FILT in most cases, underpinned by the rule's
error-filtering mechanism, which is predicted to provide smooth convergence
towards a desired solution during learning. We also find FILT to be the most efficient rule at memorising input patterns, most notably when patterns are identified using spikes with sub-millisecond temporal precision.
In comparison with existing work, we determine the performance of FILT to be
consistent with that of the highly efficient E-learning Chronotron, but with
the distinct advantage that FILT is also implementable as an online method for
increased biological realism.
Comment: 26 pages, 10 figures; this version is published in PLoS ONE and incorporates reviewer comments.
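To make the distinction between the two rules concrete, the following sketch contrasts an instantaneous error signal with an exponentially filtered one driving spike-based weight updates. The discrete-time traces, time constants, and stand-in output spike train are illustrative assumptions, not the paper's exact INST/FILT formulation.

```python
import numpy as np

# Minimal sketch: an instantaneous vs. a filtered error signal driving
# spike-based weight updates, in discrete time with binary spike trains.
# All models and parameters below are illustrative assumptions.

dt = 0.1        # simulation time step (ms)
T = 5000        # number of time steps (500 ms)
eta = 0.01      # learning rate
tau_psp = 10.0  # presynaptic trace time constant (ms)
tau_err = 10.0  # FILT error-filter time constant (ms)

rng = np.random.default_rng(0)
pre = (rng.random(T) < 0.002).astype(float)     # presynaptic spike train
target = np.zeros(T)
target[[1000, 3000]] = 1.0                      # desired output spike times
actual = (rng.random(T) < 0.001).astype(float)  # stand-in for the neuron's output

psp_trace, err_trace = 0.0, 0.0
dw_inst, dw_filt = 0.0, 0.0

for t in range(T):
    psp_trace += -dt / tau_psp * psp_trace + pre[t]  # low-pass filtered presynaptic activity
    err = target[t] - actual[t]                      # instantaneous spike-train error
    dw_inst += eta * err * psp_trace                 # INST: raw error drives the update
    err_trace += -dt / tau_err * err_trace + err     # exponentially filtered error
    dw_filt += eta * err_trace * psp_trace           # FILT: smoother weight modification
```

Because the FILT update integrates the error over a short window, isolated timing mismatches perturb the weights less, which is consistent with the smoother convergence described above.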
Neuro-memristive Circuits for Edge Computing: A review
The volume, veracity, variability, and velocity of data produced by the ever-increasing network of sensors connected to the Internet pose challenges for
power management, scalability, and sustainability of cloud computing
infrastructure. Increasing the data processing capability of edge computing
devices at lower power requirements can reduce several overheads for cloud
computing solutions. This paper provides a review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why these neuromorphic architectures are useful for edge devices, and present the advantages, drawbacks, and open problems in the field of neuro-memristive circuits for edge computing.
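As a concrete illustration of the in-memory computing primitive that makes such architectures attractive for low-power edge devices, the sketch below models a single analogue vector-matrix multiply on a memristive crossbar; the array size and conductance values are illustrative assumptions.

```python
import numpy as np

# A crossbar computes a vector-matrix multiply in place: weights are stored
# as device conductances G, inputs arrive as voltages v on the input lines,
# and each output line sums its device currents I = G * V by Kirchhoff's
# current law. All values here are illustrative assumptions.

G = np.array([[0.8, 0.2, 0.5],
              [0.1, 0.9, 0.3]]) * 1e-4  # conductances (siemens), one device per cross-point
v = np.array([0.2, 0.0, 0.1])           # input voltages (volts) on the three input lines

i_out = G @ v  # currents (amperes) on the two output lines: the multiply-accumulate
               # happens in the array itself, with no data movement to a separate ALU
```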
Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing
Machine learning, particularly in the form of deep learning, has driven most
of the recent fundamental developments in artificial intelligence. Deep
learning is based on computational models that are, to a certain extent,
bio-inspired, as they rely on networks of connected simple computing units
operating in parallel. Deep learning has been successfully applied in areas
such as object/pattern recognition, speech and natural language processing,
self-driving vehicles, intelligent self-diagnostics tools, autonomous robots,
knowledgeable personal assistants, and monitoring. These successes have been
mostly supported by three factors: availability of vast amounts of data,
continuous growth in computing power, and algorithmic innovations. The
approaching demise of Moore's law, and the consequent expected modest
improvements in computing power that can be achieved by scaling, raise the
question of whether the described progress will be slowed or halted due to
hardware limitations. This paper reviews the case for a novel beyond-CMOS hardware technology, memristors, as a potential solution for the implementation
of power-efficient in-memory computing, deep learning accelerators, and spiking
neural networks. Central themes are the reliance on non-von-Neumann computing
architectures and the need for developing tailored learning and inference
algorithms. To argue that lessons from biology can be useful in providing
directions for further progress in artificial intelligence, we briefly discuss
an example based on reservoir computing. We conclude the review by speculating on
the big picture view of future neuromorphic and brain-inspired computing
systems.
Comment: Keywords: memristor, neuromorphic, AI, deep learning, spiking neural networks, in-memory computing.
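To illustrate the reservoir computing example mentioned above, the sketch below implements a small echo state network: a fixed random recurrent reservoir expands the input, and only a linear readout is trained. Network sizes, scalings, and the toy task are assumptions for illustration.

```python
import numpy as np

# Echo state network: the recurrent weights W and input weights W_in are
# random and fixed; learning touches only the linear readout W_out.

rng = np.random.default_rng(1)
n_in, n_res, T = 1, 100, 1000
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo-state property)

u = np.sin(0.1 * np.arange(T))[:, None]  # toy input signal
y = np.sin(0.1 * np.arange(T) + 0.5)     # toy target: a phase-shifted copy

x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])  # reservoir update; W and W_in stay fixed
    states[t] = x

W_out = np.linalg.lstsq(states, y, rcond=None)[0]  # train only the linear readout
prediction = states @ W_out
```

Because only the readout is trained, the scheme tolerates imprecise, fixed reservoir weights, which is one reason it maps naturally onto variable memristive devices.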
Learning without feedback: Fixed random learning signals allow for feedforward training of deep neural networks
While the backpropagation of error algorithm enables deep neural network
training, it implies (i) bidirectional synaptic weight transport and (ii)
update locking until the forward and backward passes are completed. Not only do
these constraints preclude biological plausibility, but they also hinder the
development of low-cost adaptive smart sensors at the edge, as they severely
constrain memory accesses and entail buffering overhead. In this work, we show
that the one-hot-encoded labels provided in supervised classification problems,
denoted as targets, can be viewed as a proxy for the error sign. Therefore,
their fixed random projections enable layerwise feedforward training of the
hidden layers, thus solving the weight transport and update locking problems
while relaxing the computational and memory requirements. Based on these
observations, we propose the direct random target projection (DRTP) algorithm
and demonstrate that it provides a tradeoff between accuracy and computational
cost that is suitable for adaptive edge computing devices.
Comment: This document is the paper as accepted for publication in the Frontiers in Neuroscience journal; the fully-edited paper is available at https://www.frontiersin.org/articles/10.3389/fnins.2021.62989
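A minimal sketch of the DRTP idea follows: each hidden layer receives a fixed random projection of the one-hot target as its modulatory signal, so it can be updated as soon as its own forward pass completes. Layer sizes, the activation function, and sign conventions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, eta = 784, 256, 10, 0.01

W1 = 0.01 * rng.standard_normal((n_hid, n_in))   # trained hidden layer
W2 = 0.01 * rng.standard_normal((n_out, n_hid))  # trained output layer
B1 = rng.standard_normal((n_hid, n_out))         # fixed random target projection, never updated

def drtp_step(x, y_onehot):
    # forward pass
    h1 = np.tanh(W1 @ x)
    y_hat = W2 @ h1

    # hidden layer: the modulatory signal is a fixed random projection of the
    # target, available right after this layer's forward pass, so the update
    # needs neither weight transport nor waiting for a backward pass
    delta1 = (B1 @ y_onehot) * (1.0 - h1 ** 2)
    dW1 = -eta * np.outer(delta1, x)

    # output layer: trained with its usual local error
    dW2 = -eta * np.outer(y_hat - y_onehot, h1)
    return dW1, dW2

x = rng.random(n_in)
y = np.zeros(n_out); y[3] = 1.0
dW1, dW2 = drtp_step(x, y)
W1 += dW1
W2 += dW2
```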
Exact Gradient Computation for Spiking Neural Networks Through Forward Propagation
Spiking neural networks (SNNs) have recently emerged as alternatives to traditional neural networks, owing to their energy-efficiency benefits and their capacity to better capture biological neuronal mechanisms. However, the classic
backpropagation algorithm for training traditional networks has been
notoriously difficult to apply to SNNs, due to the hard thresholding and discontinuities at spike times. Consequently, the large majority of prior work has assumed that exact gradients of SNNs w.r.t. their weights do not exist, and has focused on approximation methods that produce surrogate gradients. In this paper,
(1) by applying the implicit function theorem to SNNs at the discrete spike times, we prove that, despite being non-differentiable in time, SNNs have well-defined gradients w.r.t. their weights, and (2) we propose a novel training algorithm, called forward propagation (FP), that computes exact gradients for SNNs. FP exploits the causal structure between spikes and allows computation to be parallelized forward in time. It can be used with other algorithms that simulate the forward pass, and it also provides insights into why related algorithms, such as Hebbian learning and recently proposed surrogate gradient methods, may perform well.
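To make the implicit-function-theorem argument concrete, the sketch below computes the exact gradient of a spike time with respect to a weight for a deliberately simple non-leaky integrate-and-fire neuron, where a closed form is available as a check. The neuron model is an illustrative assumption, not the paper's.

```python
import numpy as np

theta = 1.0  # firing threshold
x = 2.0      # constant presynaptic input
w = 0.5      # synaptic weight

# Membrane potential V(t; w) = w * x * t reaches theta at the spike time t*.
t_star = theta / (w * x)

# Implicit function theorem applied to the threshold condition V(t*; w) = theta:
#   dt*/dw = -(dV/dw) / (dV/dt), both derivatives evaluated at t = t*.
dV_dw = x * t_star
dV_dt = w * x
grad_ift = -dV_dw / dV_dt

# Sanity check against differentiating the closed form t* = theta / (w * x):
grad_closed = -theta / (w ** 2 * x)
assert np.isclose(grad_ift, grad_closed)
```

The same threshold-crossing condition defines spike times implicitly in richer SNN models, where no closed form exists but the ratio of partial derivatives still yields a well-defined, exact gradient.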