Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity
Spike-timing-dependent plasticity (STDP) incurs both causal and acausal
synaptic weight updates, for negative and positive time differences between
pre-synaptic and post-synaptic spike events. For realizing such updates in
neuromorphic hardware, current implementations either require forward and
reverse lookup access to the synaptic connectivity table, or rely on
memory-intensive architectures such as crossbar arrays. We present a novel
method for realizing both causal and acausal weight updates using only forward
lookup access of the synaptic connectivity table, permitting memory-efficient
implementation. A simplified implementation in FPGA, using a single timer
variable for each neuron, closely approximates exact STDP cumulative weight
updates for neuron refractory periods greater than 10 ms, and reduces to exact
STDP for refractory periods greater than the STDP time window. Compared to
conventional crossbar implementation, the forward table-based implementation
leads to substantial memory savings for sparsely connected networks supporting
scalable neuromorphic systems with fully reconfigurable synaptic connectivity
and plasticity.
Comment: Submitted to BioCAS 201
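The forward-only scheme described above can be sketched in a few lines. The following is a simplified pair-based (nearest-neighbour) illustration, not the authors' FPGA design: the amplitudes `A_PLUS`/`A_MINUS`, the time constant `TAU`, and the class interface are all hypothetical.

```python
import math

# Hypothetical STDP parameters (not taken from the paper).
A_PLUS, A_MINUS = 0.01, 0.012   # causal / acausal update amplitudes
TAU = 20.0                      # STDP time constant (ms)

class ForwardSTDPNetwork:
    """Nearest-neighbour STDP using only forward connectivity lookup.

    Each neuron keeps a single last-spike timer.  When a neuron fires,
    we walk its outgoing synapses once: the acausal (depressing) update
    for the current pre spike is applied immediately, while the causal
    (potentiating) update for the *previous* pre spike is applied
    retroactively, using the target's last-spike timer.  No reverse
    (post -> pre) table is ever consulted.
    """

    def __init__(self, n_neurons, synapses):
        # synapses: dict pre -> list of post indices (the forward table)
        self.synapses = synapses
        self.weights = {(i, j): 0.5 for i, posts in synapses.items() for j in posts}
        self.last_spike = [-math.inf] * n_neurons   # one timer per neuron

    def on_spike(self, i, t):
        prev_pre = self.last_spike[i]
        for j in self.synapses.get(i, []):
            t_post = self.last_spike[j]
            # Deferred causal update: post fired after the previous pre spike.
            if prev_pre < t_post:
                self.weights[(i, j)] += A_PLUS * math.exp(-(t_post - prev_pre) / TAU)
            # Immediate acausal update: post fired before the current pre spike.
            if t_post > -math.inf and t_post < t:
                self.weights[(i, j)] -= A_MINUS * math.exp(-(t - t_post) / TAU)
        self.last_spike[i] = t
```

Note that a causal update is only flushed when the presynaptic neuron fires again; a complete implementation would also flush pending updates at the end of a run.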
A geographically distributed bio-hybrid neural network with memristive plasticity
Throughout evolution the brain has mastered the art of processing real-world
inputs through networks of interlinked spiking neurons. Synapses have emerged
as key elements that, owing to their plasticity, are merging neuron-to-neuron
signalling with memory storage and computation. Electronics has made important
steps in emulating neurons through neuromorphic circuits and synapses with
nanoscale memristors, yet novel applications that interlink them in
heterogeneous bio-inspired and bio-hybrid architectures are just beginning to
materialise. The use of memristive technologies in brain-inspired architectures
for computing, or for sensing the spiking activity of biological neurons, is only a
recent example; interlinking brain and electronic neurons through
plasticity-driven synaptic elements, however, has so far remained in the realm of
the imagination. Here, we demonstrate a bio-hybrid neural network (bNN) where
memristors work as "synaptors" between rat neural circuits and VLSI neurons.
The two fundamental synaptors, from artificial-to-biological (ABsyn) and from
biological-to-artificial (BAsyn), are interconnected over the Internet. The
bNN extends across Europe, collapsing spatial boundaries existing in natural
brain networks and laying the foundations of a new geographically distributed
and evolving architecture: the Internet of Neuro-electronics (IoN).
Comment: 16 pages, 10 figures
Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals
Spiking neural networks (SNNs) enable power-efficient implementations due to
their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN
that uses unsupervised learning to extract discriminative features from speech
signals, which can subsequently be used in a classifier. The architecture
consists of a spiking convolutional/pooling layer followed by a fully connected
spiking layer for feature discovery. The convolutional layer of leaky
integrate-and-fire (LIF) neurons represents primary acoustic features. The
fully connected layer is equipped with a probabilistic spike-timing-dependent
plasticity learning rule. This layer represents the discriminative features
through probabilistic LIF neurons. To assess the discriminative power of the
learned features, they are used in a hidden Markov model (HMM) for spoken digit
recognition. The experimental results show performance above 96% that compares
favorably with popular statistical feature extraction methods. Our results
provide a novel demonstration of unsupervised feature acquisition in an SNN.
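The leaky integrate-and-fire dynamics underlying both layers can be illustrated with a minimal discrete-time simulation; all parameter values below are hypothetical, not the paper's.

```python
import math

def lif_run(input_current, tau_m=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    Each step, the membrane potential decays exponentially toward zero
    and integrates the input; crossing the threshold emits a spike and
    resets the potential.  Returns the spike train as a list of bools.
    """
    decay = math.exp(-dt / tau_m)
    v = v_reset
    spikes = []
    for i_in in input_current:
        v = decay * v + i_in          # leak, then integrate the input
        if v >= v_thresh:
            spikes.append(True)
            v = v_reset               # reset after the spike
        else:
            spikes.append(False)
    return spikes

# A constant drive of 0.06 per step settles toward 0.06 / (1 - decay) > 1,
# so the neuron crosses threshold and fires repeatedly.
spikes = lif_run([0.06] * 200)
```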
SuperSpike: Supervised learning in multi-layer spiking neural networks
A vast majority of computation in the brain is performed by spiking neural
networks. Despite the ubiquity of such spiking, we currently lack an
understanding of how biological spiking neural circuits learn and compute
in-vivo, as well as how we can instantiate such capabilities in artificial
spiking circuits in-silico. Here we revisit the problem of supervised learning
in temporally coding multi-layer spiking neural networks. First, by using a
surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based
three factor learning rule capable of training multi-layer networks of
deterministic integrate-and-fire neurons to perform nonlinear computations on
spatiotemporal spike patterns. Second, inspired by recent results on feedback
alignment, we compare the performance of our learning rule under different
credit assignment strategies for propagating output errors to hidden units.
Specifically, we test uniform, symmetric and random feedback, finding that
simpler tasks can be solved with any type of feedback, while more complex tasks
require symmetric feedback. In summary, our results open the door to obtaining
a better scientific understanding of learning and computation in spiking neural
networks by advancing our ability to train them to solve nonlinear problems
involving transformations between different spatiotemporal spike-time patterns.
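The surrogate gradient at the heart of SuperSpike replaces the non-differentiable spike threshold with a fast sigmoid; the sketch below shows that surrogate together with the bare three-factor structure of the update (the published rule additionally low-pass filters the Hebbian term). Parameter values and function names are illustrative.

```python
def surrogate_grad(v, v_thresh=1.0, beta=10.0):
    """Fast-sigmoid surrogate for the spike derivative.

    The true derivative of the hard threshold is zero almost everywhere;
    this smooth stand-in, (1 + beta * |v - theta|)^-2, lets error
    gradients flow through spiking units.
    """
    return 1.0 / (1.0 + beta * abs(v - v_thresh)) ** 2

def superspike_step(w, pre_trace, v_post, error, lr=1e-3):
    """One simplified three-factor update: presynaptic eligibility trace
    (factor 1) times the surrogate derivative of the postsynaptic
    voltage (factor 2) times a top-down error signal (factor 3).
    """
    return w + lr * error * surrogate_grad(v_post) * pre_trace
```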
A Heterosynaptic Spiking Neural System for the Development of Autonomous Agents
Artificial neural systems for computation were first proposed three quarters of a century ago, and the concepts developed by the pioneers still shape the field today. The first generation of neural systems was developed in the nineteen forties, in the context of analogue electronics and the theoretical research in logic and mathematics that led to the first digital computers in the nineteen forties and fifties. The second generation, implemented on digital computers, was born in the nineteen fifties, and great progress was made in the subsequent half century as neural networks were applied to many problems in pattern recognition and machine learning. Throughout this history there has been an interplay between biologically inspired neural systems and their implementation by engineers on digital machines. This thesis concerns the third generation of neural networks, Spiking Neural Networks, which is making possible new kinds of brain-inspired computing architectures that offer the potential to increase the level of realism and sophistication in autonomous machine behaviour and cognitive computing. The thesis presents the development and demonstration of a new theoretical architecture for third-generation neural systems: the Integrate-and-Fire based Spiking Neural Model with extended Neuro-modulated Spike Timing Dependent Plasticity capabilities. The proposed architecture addresses a limitation of the homosynaptic architecture underlying existing implementations of spiking neural networks: it lacks a natural regulation mechanism for spike-timing-dependent plasticity, which results in 'run-away' dynamics. Previously, ad hoc procedures have been needed to suppress the 'run-away' dynamics that emerge from spike-timing-dependent plasticity and other Hebbian plasticity rules.
The new heterosynaptic architecture explicitly abstracts the modulation of complex biochemical mechanisms into a simplified mechanism suitable for engineering artificial systems with low computational complexity. Neurons receive input signals from other neurons through synapses. The difference between homosynaptic and heterosynaptic plasticity is that in the former, a change in the properties of a synapse (e.g. synaptic efficacy) depends on the point-to-point activity of the sending and receiving neurons, whereas in heterosynaptic plasticity a change in the properties of a synapse can be elicited by neurons that are not necessarily presynaptic or postsynaptic to the synapse in question. The new architecture is tested in a number of implementations in simulated and real environments, including experiments in a simulation environment implemented in NetLogo and an implementation using Lego Mindstorms as the physical robot platform. These experiments demonstrate the problems with the traditional homosynaptic spike-timing-dependent plasticity architecture and how the new heterosynaptic approach can overcome them. It is concluded that the new theoretical architecture provides a natural, theoretically sound, and practical new direction for research into the role of modulatory neural systems applied to spiking neural networks.
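One common way to realise such modulatory gating in software is a three-factor update, in which pre/post pairings write an eligibility trace and a separate modulatory signal gates the actual weight change. The sketch below is an illustrative stand-in under that assumption, not the thesis's exact mechanism; all constants are hypothetical.

```python
import math

def stdp_kernel(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window: potentiate when pre precedes post (dt > 0),
    depress otherwise.  dt is t_post - t_pre in ms."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def modulated_update(w, dt, modulator, eligibility=0.0, eligibility_decay=0.9):
    """Heterosynaptic-style update: the spike pairing only writes a
    decaying eligibility trace; the weight change itself is gated by a
    modulatory signal that need not originate from either neuron of the
    synapse.  Returns the new weight and the updated trace.
    """
    eligibility = eligibility_decay * eligibility + stdp_kernel(dt)
    return w + modulator * eligibility, eligibility
```

With the modulator at zero the pairing leaves the weight untouched (no run-away growth from pairings alone); a nonzero modulator converts the stored trace into an actual change.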
Maturation of GABAergic Inhibition Promotes Strengthening of Temporally Coherent Inputs among Convergent Pathways
Spike-timing-dependent plasticity (STDP), a form of Hebbian plasticity, is inherently stabilizing. Whether and how GABAergic inhibition influences STDP is not well understood. Using a model neuron driven by converging inputs modifiable by STDP, we determined that a sufficient level of inhibition was critical to ensure that temporal coherence (correlation among presynaptic spike times) of synaptic inputs, rather than the initial strength or number of inputs within a pathway, controlled postsynaptic spike timing. Inhibition exerted this effect by preferentially reducing the synaptic efficacy, the ability of inputs to evoke postsynaptic action potentials, of the less coherent inputs. In visual cortical slices, inhibition potently reduced synaptic efficacy at ages during, but not before, the critical period of ocular dominance (OD) plasticity. Whole-cell recordings revealed that the amplitude of unitary IPSCs from parvalbumin-positive (Pv+) interneurons to pyramidal neurons increased during the critical period, while the synaptic decay time constant decreased. In addition, the intrinsic properties of Pv+ interneurons matured, resulting in an increase in instantaneous firing rate. Our results suggest that maturation of inhibition in visual cortex ensures that temporally coherent inputs (e.g. those from the open eye during monocular deprivation) control the postsynaptic spike times of binocular neurons, a prerequisite for Hebbian mechanisms to induce OD plasticity.
Paradoxical Results of Long-Term Potentiation explained by Voltage-based Plasticity Rule
Experiments have shown that the same stimulation pattern that causes
Long-Term Potentiation in proximal synapses, will induce Long-Term Depression
in distal ones. In order to understand these, and other, surprising
observations we use a phenomenological model of Hebbian plasticity at the
location of the synapse. Our computational model describes the Hebbian
condition of joint activity of pre- and post-synaptic neuron in a compact form
as the interaction of the glutamate trace left by a presynaptic spike with the
time course of the postsynaptic voltage. We test the model using experimentally
recorded dendritic voltage traces in hippocampus and neocortex. We find that
the time course of the voltage in the neighborhood of a stimulated synapse is a
reliable predictor of whether a stimulated synapse undergoes potentiation,
depression, or no change. Our model can explain the existence of different --
at first glance seemingly paradoxical -- outcomes of synaptic potentiation and
depression experiments depending on the dendritic location of the synapse and
the frequency or timing of the stimulation.
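A toy reading of such a voltage-based rule: each presynaptic spike tops up a decaying "glutamate" trace, and the plasticity drive is the running product of that trace with the local dendritic voltage relative to a reference level. All constants, names, and the sign convention below are assumptions for illustration, not the authors' calibrated model.

```python
import math

def plasticity_drive(pre_spike_times, voltage, dt=0.1, tau_glu=10.0, v_ref=-65.0):
    """Integrate the product of a decaying glutamate trace (left by each
    presynaptic spike) with the local voltage measured relative to a
    reference v_ref.  A positive total is read as potentiation, a
    negative total as depression.

    pre_spike_times: spike times in ms; voltage: samples at step dt (mV).
    """
    decay = math.exp(-dt / tau_glu)
    spike_steps = {round(t / dt) for t in pre_spike_times}
    glu, drive = 0.0, 0.0
    for step, v in enumerate(voltage):
        if step in spike_steps:
            glu += 1.0                     # presynaptic spike tops up the trace
        drive += glu * (v - v_ref) * dt    # Hebbian product of trace and voltage
        glu *= decay
    return drive
```

A voltage trace held at the reference level yields zero net drive, while a depolarised trace following the spike yields a positive (potentiating) total.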
Constructive spiking neural networks for simulations of neuroplasticity
Artificial neural networks are important tools in machine learning and neuroscience;
however, a difficult step in their implementation is the selection of the neural network size and
structure. This thesis develops fundamental theory on algorithms for constructing neurons in
spiking neural networks and simulations of neuroplasticity. This theory is applied in the
development of a constructive algorithm based on spike-timing-dependent plasticity (STDP) that
achieves continual one-shot learning of hidden spike patterns through neuron construction.
The theoretical developments in this thesis begin with the proposal of a set of definitions of
the fundamental components of constructive neural networks. Disagreement in terminology across the
literature and a lack of clear definitions and requirements for constructive neural networks are
factors in the poor visibility and fragmentation of research. The proposed definitions are used as
the basis for a generalised methodology for decomposing constructive neural networks into
components to perform comparisons, design and analysis.
Spiking neuron models are uncommon in constructive neural network literature; however, spiking
neurons are common in simulated studies in neuroscience. Spike-timing-dependent construction is
proposed as a distinct class of constructive algorithm for spiking neural networks. Past algorithms
that perform spike-timing-dependent construction are decomposed into defined components for a
detailed critical comparison and found to have limited applicability in simulations of biological
neural networks.
This thesis develops concepts and principles for designing constructive algorithms that are
compatible with simulations of biological neural networks. Simulations often have orders of
magnitude fewer neurons than related biological neural systems; therefore, the neurons in a
simulation may be assumed to be a selection or subset of a larger neural system with many neurons
not simulated. Neuron construction and pruning may therefore be reinterpreted as the transfer of
neurons between sets of simulated neurons and hypothetical neurons in the neural system.
Constructive algorithms with a functional equivalence to transferring neurons between sets allow
simulated neural networks to maintain biological plausibility while changing size.
The components of a novel constructive algorithm are incrementally developed from the principles
for biological plausibility. First, processes for calculating new synapse weights from observed
simulation activity and estimates of past STDP are developed and analysed. Second, a method for
predicting postsynaptic spike times for synapse weight calculations through the simulation of a proxy for hypothetical neurons is developed. Finally, spike-dependent conditions for neuron construction and pruning are developed and
the processes are combined in a constructive algorithm for simulations of STDP.
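The construction step can be caricatured as follows: if no existing neuron responds to the current input spike pattern, add one whose weights are set directly to a predicted STDP fixed point, strong for synapses that spiked and weak otherwise, as a one-shot stand-in for many repeated STDP pairings. The response criterion, threshold, and weight bounds below are all hypothetical, and the thesis's full algorithm also predicts postsynaptic spike times and prunes neurons.

```python
def neuron_responds(weights, input_spikes, threshold=2.0):
    """A neuron 'responds' if its summed drive from the currently active
    inputs crosses a fixed threshold (hypothetical response criterion)."""
    return sum(w for w, s in zip(weights, input_spikes) if s) >= threshold

def construct_if_undetected(input_spikes, neurons, w_max=1.0, w_min=0.0):
    """Toy constructive step: if no existing neuron responds to the input
    pattern, construct a new neuron whose weights are the one-shot
    prediction of STDP convergence on that pattern."""
    if any(neuron_responds(n, input_spikes) for n in neurons):
        return neurons                       # pattern already detected
    new_weights = [w_max if s else w_min for s in input_spikes]
    return neurons + [new_weights]
```

Calling the step twice on the same pattern adds exactly one neuron: the first call constructs it, and on the second call that neuron responds, so no duplicate is created.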
Repeating hidden spike patterns can be detected by neurons tuned through STDP; this result is
reproduced in STDP simulations with neuron construction. Tuned neurons become unresponsive to other
activity, preventing detuning but also preventing neurons from learning new spike patterns.
Continual learning is demonstrated through neuron construction with immediate detection of new
spike patterns from one-shot predictions of STDP convergence.
Future research may investigate applications of the developed constructive algorithm in
neuroscience and machine learning. The developed theory on constructive neural networks and
concepts of selective simulation of neurons also provide new directions for future research.
Thesis (Ph.D.) -- University of Adelaide, School of Mechanical Engineering, 201