Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation
We investigate the performance of sparsely-connected networks of
integrate-and-fire neurons for ultra-short-term information processing. We
exploit the fact that the population activity of networks with balanced
excitation and inhibition can switch from an oscillatory firing regime to a
state of asynchronous irregular firing or quiescence depending on the rate of
external background spikes.
We find that, in terms of information buffering, the network performs best for
a moderate, non-zero amount of noise. Analogous to the phenomenon of
stochastic resonance, the performance decreases for higher and lower noise
levels. The optimal amount of noise corresponds to the transition zone between
a quiescent state and a regime of stochastic dynamics. This provides a
potential explanation of the role of non-oscillatory population activity in a
simplified model of cortical micro-circuits.
Comment: 27 pages, 7 figures, to appear in J. Physiology (Paris) Vol. 9
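The noise dependence described in this abstract can be sketched with a single leaky integrate-and-fire unit driven by Poisson background spikes. All constants below (time constant, input weight, threshold) are illustrative assumptions, not the parameters of the paper's network model:

```python
import numpy as np

def simulate_lif(bg_rate_hz, t_sim=1.0, dt=1e-4, tau=0.02,
                 v_thresh=1.0, w_in=0.1, seed=0):
    """Leaky integrate-and-fire unit driven by Poisson background spikes.

    All constants are illustrative, not the paper's network parameters.
    Returns the number of output spikes in t_sim seconds.
    """
    rng = np.random.default_rng(seed)
    v = 0.0
    spikes = 0
    for _ in range(int(t_sim / dt)):
        n_in = rng.poisson(bg_rate_hz * dt)  # background spikes this step
        v += dt * (-v / tau) + w_in * n_in   # leak plus synaptic drive
        if v >= v_thresh:
            spikes += 1
            v = 0.0                          # reset after a spike
    return spikes

# Weak background noise leaves the unit near quiescence;
# stronger noise drives sustained irregular firing.
quiet = simulate_lif(bg_rate_hz=100.0)
active = simulate_lif(bg_rate_hz=2000.0)
```

With the low rate the mean drive stays well below threshold, so the unit is essentially quiescent; with the high rate it crosses threshold repeatedly, mirroring the regime switch controlled by the external background rate.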
A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines
Information in neural networks is represented as weighted connections, or
synapses, between neurons. This poses a problem as the primary computational
bottleneck for neural networks is the vector-matrix multiply when inputs are
multiplied by the neural network weights. Conventional processing architectures
are not well suited for simulating neural networks, often requiring large
amounts of energy and time. Additionally, synapses in biological neural
networks are not binary connections, but exhibit a nonlinear response function
as neurotransmitters are emitted and diffuse between neurons. Inspired by
neuroscience principles, we present a digital neuromorphic architecture, the
Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex
synaptic response functions without requiring additional hardware components.
We consider the paradigm of spiking neurons with temporally coded information
as opposed to non-spiking rate coded neurons used in most neural networks. In
this paradigm we examine liquid state machines applied to speech recognition
and show how a liquid state machine with temporal dynamics maps onto the
STPU, demonstrating the flexibility and efficiency of the STPU for instantiating
neural algorithms.
Comment: 8 pages, 4 figures, Preprint of 2017 IJCNN
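The liquid state machine paradigm itself can be sketched in a few lines: a fixed random recurrent "liquid" is driven by the input, and only a linear readout is trained on the liquid's states. Thresholded binary units stand in for spiking neurons here; nothing below reflects the STPU hardware itself:

```python
import numpy as np

def liquid_states(inputs, n_liquid=50, seed=0):
    """Drive a fixed random recurrent 'liquid' with a 1-D input series and
    return its state trajectory (shape [T, n_liquid])."""
    rng = np.random.default_rng(seed)
    w_in = rng.normal(0.0, 1.0, size=n_liquid)
    w_rec = rng.normal(0.0, 1.0 / np.sqrt(n_liquid), size=(n_liquid, n_liquid))
    x = np.zeros(n_liquid)
    states = []
    for u in inputs:
        # A unit 'spikes' (emits 1) when its total drive crosses a threshold
        x = (w_rec @ x + w_in * u > 0.5).astype(float)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets):
    """Least-squares linear readout: the only trained component."""
    w, *_ = np.linalg.lstsq(states, targets, rcond=None)
    return w
```

The recurrent weights are never trained; the high-dimensional transient of the liquid does the temporal processing, and learning is reduced to a linear regression on its states.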
Accelerated physical emulation of Bayesian inference in spiking neural networks
The massively parallel nature of biological information processing plays an
important role for its superiority to human-engineered computing devices. In
particular, it may hold the key to overcoming the von Neumann bottleneck that
limits contemporary computer architectures. Physical-model neuromorphic devices
seek to replicate not only this inherent parallelism, but also aspects of its
microscopic dynamics in analog circuits emulating neurons and synapses.
However, these machines require network models that are not only adept at
solving particular tasks, but that can also cope with the inherent
imperfections of analog substrates. We present a spiking network model that
performs Bayesian inference through sampling on the BrainScaleS neuromorphic
platform, where we use it for generative and discriminative computations on
visual data. By illustrating its functionality on this platform, we implicitly
demonstrate its robustness to various substrate-specific distortive effects, as
well as its accelerated capability for computation. These results showcase the
advantages of brain-inspired physical computation and provide important
building blocks for large-scale neuromorphic applications.
Comment: This preprint has been published 2019 November 14. Please cite as:
Kungl A. F. et al. (2019) Accelerated Physical Emulation of Bayesian
Inference in Spiking Neural Networks. Front. Neurosci. 13:1201. doi:
10.3389/fnins.2019.01201
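The "inference through sampling" idea behind such networks can be sketched with stochastic binary units performing Gibbs sampling over a Boltzmann distribution. This is the abstract probabilistic picture, not the analog spiking dynamics of BrainScaleS, and the weights below are made up for illustration:

```python
import numpy as np

def gibbs_sample(W, b, n_samples=5000, burn_in=500, seed=0):
    """Gibbs sampling of a Boltzmann distribution p(z) ~ exp(b.z + z.W.z/2)
    with stochastic binary units (W symmetric, zero diagonal)."""
    rng = np.random.default_rng(seed)
    n = len(b)
    z = rng.integers(0, 2, size=n).astype(float)
    out = []
    for t in range(burn_in + n_samples):
        for i in range(n):
            # Each unit switches on with its conditional 'firing' probability
            p_on = 1.0 / (1.0 + np.exp(-(b[i] + W[i] @ z)))
            z[i] = float(rng.random() < p_on)
        if t >= burn_in:
            out.append(z.copy())
    return np.array(out)

# Two excitatorily coupled units should tend to be active together.
W = np.array([[0.0, 2.0],
              [2.0, 0.0]])
b = np.array([-1.0, -1.0])
samples = gibbs_sample(W, b)
```

In the neuromorphic setting, the refractory dynamics of spiking neurons play the role of these stochastic update steps, so the network's spike patterns become samples from the encoded distribution.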
Synthesis of neural networks for spatio-temporal spike pattern recognition and processing
The advent of large scale neural computational platforms has highlighted the
lack of algorithms for synthesis of neural structures to perform predefined
cognitive tasks. The Neural Engineering Framework offers one such synthesis,
but it is most effective for a spike rate representation of neural information,
and it requires a large number of neurons to implement simple functions. We
describe a neural network synthesis method that generates synaptic connectivity
for neurons which process time-encoded neural signals, and which makes very
sparse use of neurons. The method allows the user to specify, arbitrarily,
neuronal characteristics such as axonal and dendritic delays, and synaptic
transfer functions, and then solves for the optimal input-output relationship
using computed dendritic weights. The method may be used for batch or online
learning and has an extremely fast optimization process. We demonstrate its use
in generating a network to recognize speech which is sparsely encoded as spike
times.
Comment: In submission to Frontiers in Neuromorphic Engineering
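The synthesis step, solving for weights that realize a desired input-output relationship, can be sketched as a least-squares problem over delayed postsynaptic potential kernels. The alpha-shaped kernel and the specific delays below are assumptions for illustration, not the paper's method in detail:

```python
import numpy as np

def psp(t, tau=0.01):
    """Alpha-function postsynaptic potential kernel (an assumed shape)."""
    return np.where(t > 0, (t / tau) * np.exp(1 - t / tau), 0.0)

def solve_dendritic_weights(input_spike_times, delays, t_grid, target_potential):
    """Solve for synaptic weights so that the sum of delayed PSPs best
    reproduces a target membrane trajectory, in the least-squares sense."""
    # Design matrix: one column per (input spike, dendritic delay) pair
    cols = []
    for s in input_spike_times:
        for d in delays:
            cols.append(psp(t_grid - s - d))
    A = np.column_stack(cols)
    w, *_ = np.linalg.lstsq(A, target_potential, rcond=None)
    return w, A @ w
```

Because the optimization is a single linear solve rather than an iterative training loop, this style of synthesis is extremely fast and uses each neuron's delay structure directly, which is consistent with the sparse use of neurons the abstract claims.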
Towards a learning-theoretic analysis of spike-timing dependent plasticity
This paper suggests a learning-theoretic perspective on how synaptic
plasticity benefits global brain functioning. We introduce a model, the
selectron, that (i) arises as the fast time constant limit of leaky
integrate-and-fire neurons equipped with spiking timing dependent plasticity
(STDP) and (ii) is amenable to theoretical analysis. We show that the selectron
encodes reward estimates into spikes and that an error bound on spikes is
controlled by a spiking margin and the sum of synaptic weights. Moreover, the
efficacy of spikes (their usefulness to other reward maximizing selectrons)
also depends on total synaptic strength. Finally, based on our analysis, we
propose a regularized version of STDP, and show the regularization improves the
robustness of neuronal learning when faced with multiple stimuli.
Comment: To appear in Adv. Neural Inf. Proc. Systems
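As a rough illustration of what a "regularized version of STDP" could look like, the pair-based rule below adds a shrinkage term that bounds the total synaptic weight. The penalty form and all constants are assumptions for illustration, not the paper's derived rule:

```python
import numpy as np

def stdp_update(w, pre_times, post_times, a_plus=0.01, a_minus=0.012,
                tau=0.02, reg=0.001):
    """Pair-based STDP with a weight-shrinkage penalty.

    w: weight per synapse; pre_times: list (per synapse) of presynaptic
    spike times; post_times: postsynaptic spike times.
    """
    dw = np.zeros_like(w)
    for i, pres in enumerate(pre_times):
        for tp in pres:
            for tq in post_times:
                dt = tq - tp
                if dt > 0:    # pre before post: potentiate
                    dw[i] += a_plus * np.exp(-dt / tau)
                elif dt < 0:  # post before pre: depress
                    dw[i] -= a_minus * np.exp(dt / tau)
    # Shrinkage keeps the sum of synaptic weights bounded, in the spirit of
    # the error bound controlled by total synaptic strength.
    return np.clip(w + dw - reg * w, 0.0, None)
```

The shrinkage term is the regularizer: without it, repeated causal pairings let the total synaptic strength grow without bound, which is exactly the quantity the selectron analysis ties to the error bound.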
Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation
Spiking neural networks (SNNs) have garnered a great amount of interest for
supervised and unsupervised learning applications. This paper deals with the
problem of training multi-layer feedforward SNNs. The non-linear
integrate-and-fire dynamics employed by spiking neurons make it difficult to
train SNNs to generate desired spike trains in response to a given input. To
tackle this, first the problem of training a multi-layer SNN is formulated as
an optimization problem such that its objective function is based on the
deviation in membrane potential rather than the spike arrival instants. Then,
an optimization method named Normalized Approximate Descent (NormAD),
hand-crafted for such non-convex optimization problems, is employed to derive
the iterative synaptic weight update rule. Next, it is reformulated to
efficiently train multi-layer SNNs and is shown to effectively perform
spatio-temporal error backpropagation. The learning rule is validated by
training -layer SNNs to solve a spike based formulation of the XOR problem
as well as training -layer SNNs for generic spike based training problems.
Thus, the new algorithm is a key step towards building deep spiking neural
networks capable of efficient event-triggered learning.
Comment: 19 pages, 10 figures
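The central idea, an objective on membrane-potential deviation with a normalized descent step, can be sketched for a single neuron as follows. The published NormAD rule differs in its normalization and its handling of spiking nonlinearities, so treat this as a schematic:

```python
import numpy as np

def synaptic_traces(spike_trains, tau=0.02, dt=1e-3):
    """Low-pass filter 0/1 spike trains (shape [T, n_syn]) into traces."""
    traces = np.zeros_like(spike_trains, dtype=float)
    decay = np.exp(-dt / tau)
    traces[0] = spike_trains[0]
    for t in range(1, len(spike_trains)):
        traces[t] = decay * traces[t - 1] + spike_trains[t]
    return traces

def normad_like_step(w, traces, v_desired, lr=1.0, eps=1e-9):
    """One weight update driven by the deviation between actual and desired
    membrane potential, with a normalized step size (a schematic of the
    NormAD idea, not the published rule)."""
    v = traces @ w        # membrane potential (leak-free sketch, no reset)
    err = v_desired - v   # deviation in membrane potential, not spike times
    grad = traces.T @ err
    # Normalizing by the squared trace energy keeps the step size stable
    return w + lr * grad / (np.sum(traces ** 2) + eps)
```

The point of formulating the objective on the membrane potential is visible here: `err` is a smooth, dense signal available at every time step, whereas spike arrival times give only sparse, discontinuous feedback.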
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
uses synaptic stochasticity as a means of Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-time artificial units (as in Hopfield
networks) or continuous-time leaky integrate-and-fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
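The "random mask over the connections" can be sketched directly: each synapse transmits on a given pass only with some probability, so repeated passes sample different effective networks, akin to DropConnect. This is a minimal sketch of the mechanism, not the full event-driven learning rule:

```python
import numpy as np

def stochastic_synapse_pass(x, W, p_transmit=0.5, rng=None):
    """One forward pass in which each synapse transmits independently with
    probability p_transmit. Averaged over many passes this approximates the
    deterministic product scaled by p_transmit."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(W.shape) < p_transmit  # fresh random mask per pass
    return (W * mask) @ x
```

During learning the mask acts as a regularizer (as in DropConnect); at the same time the pass-to-pass variability is what supplies the randomness needed for Monte Carlo sampling, which is the dual role the abstract describes.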
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally show much higher performance than that of classical Spike Time
Dependent Plasticity (STDP) rules. This chapter reviews recent developments
in learning with spiking neural network models and their possible
implementation with memristor-based hardware.
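A multifactor (here, three-factor) plasticity step matched to a device with a finite number of conductance levels might look like the sketch below. The factor structure and the 16-level quantization are assumptions for illustration, not a rule from the chapter:

```python
import numpy as np

def three_factor_update(w, pre_trace, post_trace, modulator,
                        lr=0.05, n_levels=16, w_max=1.0):
    """Multifactor plasticity step quantized to a finite set of conductance
    levels, mimicking a memristive synapse with n_levels states."""
    dw = lr * modulator * pre_trace * post_trace  # three-factor product
    w_new = np.clip(w + dw, 0.0, w_max)
    # Snap to the nearest representable device conductance level
    step = w_max / (n_levels - 1)
    return np.round(w_new / step) * step
```

The third factor (a reward or error signal modulating the pre/post coincidence) is what distinguishes these rules from classical STDP, and the quantization step is where the device capabilities enter the rule rather than being treated as an afterthought.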
Neuromorphic Engineering Editors' Pick 2021
This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors.