Supervised and unsupervised weight and delay adaptation learning in temporal coding spiking neural networks
Artificial neural networks are learning paradigms which mimic the biological neural system. The temporal coding Spiking Neural Network, a relatively new artificial neural network paradigm, is considered to be computationally more powerful than the conventional neural network. Research on networks of spiking neurons is an emerging field and has potential for wider investigation. This research explores alternative learning models with temporal coding spiking neural networks for clustering and classification tasks. Neurons are known to operate in two modes, namely as integrators and as coincidence detectors. Previous temporal coding spiking neural networks, realising spiking neurons as integrators, were utilised for analytical studies. Temporal coding spiking neural networks applied successfully to clustering and classification tasks realised spiking neurons as coincidence detectors and encoded input information in the connection delays through a weight adaptation technique. These learning models select suitably delayed connections by enhancing the weights of those connections while weakening the others. This research investigates learning in temporal coding spiking neural networks with spiking neurons as integrators and coincidence detectors. Focus is given to both supervised and unsupervised learning, through weight as well as delay adaptation. Three novel models for learning in temporal coding spiking neural networks are presented. The first model, the Self-Organising Weight Adaptation Spiking Neural Network (SOWA_SNN), realises the spiking neuron as an integrator; it adapts and encodes input information in its connection weights. The second learning model, the Self-Organising Delay Adaptation Spiking Neural Network (SODA_SNN), and the third model, the Supervised Delay Adaptation Spiking Neural Network (SDA_SNN), realise the spiking neuron as a coincidence detector. These two models adapt the connection delays in order to detect temporal patterns through coincidence detection. The first two models were developed for clustering applications and the third for classification tasks. All three models employ Hebbian-based learning rules that update the network connection parameters by utilising the difference between the input and output spike times. The proposed temporal coding spiking neural network models were implemented as discrete models in software, and their characteristics and capabilities were analysed through simulations on three benchmark data sets and a high-dimensional data set. All three models were able to cluster or classify the analysed data sets efficiently with a high degree of accuracy. The performance of the proposed models was found to be better than that of existing spiking neural network models as well as conventional neural networks. The proposed learning paradigms could be applied to a wide range of applications, including the manufacturing, business, and biomedical domains.
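The abstract specifies only that the learning rules are Hebbian-based and driven by the difference between input and output spike times. As a rough illustration of what one such delay-adaptation step could look like, the sketch below nudges each connection delay towards coincidence with the output spike; the function name, learning rate, and exact update form are assumptions for illustration, not the thesis's actual SODA_SNN/SDA_SNN rules.

```python
import numpy as np

def adapt_delays(delays, t_in, t_out, eta=0.05):
    """Hedged sketch of a Hebbian-style delay update.

    Each connection delay is shifted so that its delayed input
    spike arrives closer to the output spike time, i.e. the update
    is driven by the input/output spike-time difference, as the
    abstract describes. `eta` is an assumed learning rate.
    """
    arrival = t_in + delays              # post-delay arrival times
    delays = delays + eta * (t_out - arrival)
    return np.clip(delays, 0.0, None)    # delays stay non-negative

# Example: three inputs spiking at different times, one output spike.
delays = np.array([1.0, 4.0, 2.5])
t_in = np.array([0.0, 1.0, 0.5])
print(adapt_delays(delays, t_in, t_out=3.0))
```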
A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines
Information in neural networks is represented as weighted connections, or
synapses, between neurons. This poses a problem, as the primary computational
bottleneck for neural networks is the vector-matrix multiply in which inputs
are multiplied by the network weights. Conventional processing architectures
are not well suited for simulating neural networks, often requiring large
amounts of energy and time. Additionally, synapses in biological neural
networks are not binary connections, but exhibit a nonlinear response function
as neurotransmitters are emitted and diffuse between neurons. Inspired by
neuroscience principles, we present a digital neuromorphic architecture, the
Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex
synaptic response functions without requiring additional hardware components.
We consider the paradigm of spiking neurons with temporally coded information
as opposed to the non-spiking, rate-coded neurons used in most neural networks. In
this paradigm we examine liquid state machines applied to speech recognition
and show how a liquid state machine with temporal dynamics maps onto the
STPU, demonstrating the flexibility and efficiency of the STPU for
instantiating neural algorithms.
Comment: 8 pages, 4 figures; preprint of 2017 IJCNN.
Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding
Precise spike timing as a means to encode information in neural networks is
biologically supported, and is advantageous over frequency-based codes by
processing input features on a much shorter time-scale. For these reasons, much
recent attention has been focused on the development of supervised learning
rules for spiking neural networks that utilise a temporal coding scheme.
However, despite significant progress in this area, rules that have a
theoretical basis and yet can be considered biologically relevant are still
lacking. Here
we examine the general conditions under which synaptic plasticity most
effectively takes place to support the supervised learning of a precise
temporal code. As part of our analysis we examine two spike-based learning
methods: one of which relies on an instantaneous error signal to modify
synaptic weights in a network (INST rule), and the other one on a filtered
error signal for smoother synaptic weight modifications (FILT rule). We test
the accuracy of the solutions provided by each rule with respect to their
temporal encoding precision, and then measure the maximum number of input
patterns they can learn to memorise using the precise timings of individual
spikes as an indication of their storage capacity. Our results demonstrate the
high performance of FILT in most cases, underpinned by the rule's
error-filtering mechanism, which is predicted to provide smooth convergence
towards a desired solution during learning. We also find FILT to be most
efficient at performing input pattern memorisations, and most noticeably when
patterns are identified using spikes with sub-millisecond temporal precision.
In comparison with existing work, we determine the performance of FILT to be
consistent with that of the highly efficient E-learning Chronotron, but with
the distinct advantage that FILT is also implementable as an online method for
increased biological realism.
Comment: 26 pages, 10 figures; this version is published in PLoS ONE and
incorporates reviewer comments.
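The INST and FILT rules are defined precisely in the paper itself; the sketch below only illustrates the distinction the abstract draws, correlating per-input PSP traces with either the raw, instantaneous error signal or an exponentially filtered copy of it. The discretisation, names, and constants are assumptions for illustration.

```python
import numpy as np

def weight_updates(err, psp, eta=0.01, tau_q=10.0, dt=1.0):
    """Sketch of INST- vs FILT-style weight updates (assumed form).

    err : per-time-step error signal, e.g. desired minus actual
          output spikes (+1 for a missing spike, -1 for a spurious one).
    psp : per-input PSP traces, shape (n_inputs, n_steps).
    """
    err = np.asarray(err, dtype=float)
    # INST: correlate traces with the instantaneous error.
    dw_inst = eta * psp @ err
    # FILT: first smooth the error with an exponential filter,
    # the rule's error-filtering mechanism, then correlate.
    decay = np.exp(-dt / tau_q)
    filt = np.zeros_like(err)
    acc = 0.0
    for k, e in enumerate(err):
        acc = acc * decay + e
        filt[k] = acc
    dw_filt = eta * psp @ filt
    return dw_inst, dw_filt
```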
A neural circuit for navigation inspired by C. elegans Chemotaxis
We develop an artificial neural circuit for contour tracking and navigation
inspired by the chemotaxis of the nematode Caenorhabditis elegans. In order to
harness the computational advantages spiking neural networks promise over their
non-spiking counterparts, we develop a network comprising 7 spiking neurons
with non-plastic synapses which we show is extremely robust in tracking a range
of concentrations. Our worm uses information regarding local temporal gradients
in sodium chloride concentration to decide the instantaneous path for foraging,
exploration and tracking. A key pair in the C. elegans chemotaxis network is
the ASEL and ASER neurons, which capture the gradient of concentration sensed
by the worm in their graded membrane potentials. The
primary sensory neurons for our network are a pair of artificial spiking
neurons that function as gradient detectors whose design is adapted from a
computational model of the ASE neuron pair in C. elegans. Simulations show that
our worm is able to detect the set-point with approximately four times higher
probability than the optimal memoryless Lévy foraging model. We also show that
our spiking neural network is much more efficient and noise-resilient while
navigating and tracking a contour, as compared to an equivalent non-spiking
network. We demonstrate that our model is extremely robust to noise and with
slight modifications can be used for other practical applications such as
obstacle avoidance. Our network model could also be extended for use in
three-dimensional contour tracking or obstacle avoidance.
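As a condensed, non-spiking caricature of the behavioural logic the abstract describes (the actual network uses a pair of spiking gradient-detector neurons modelled on ASEL/ASER), the sketch below turns the local temporal gradient of concentration into a turn probability: a rising concentration suppresses turning, a falling one promotes it. All names and parameters are assumed.

```python
import numpy as np

def should_turn(c_now, c_prev, p_base=0.05, gain=5.0, rng=None):
    """Toy gradient-driven steering decision (illustrative only).

    A positive local temporal gradient (concentration rising along
    the current heading) lowers the turn probability; a negative
    gradient raises it. Parameters are assumptions, not the paper's.
    """
    if rng is None:
        rng = np.random.default_rng()
    grad = c_now - c_prev                        # local temporal gradient
    p_turn = min(1.0, p_base * np.exp(-gain * grad))
    return rng.random() < p_turn                 # True => turn this step

# Example: falling concentration makes a turn much more likely.
print(should_turn(c_now=0.48, c_prev=0.50))
```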
Neuromorphic Learning towards Nano Second Precision
Temporal coding is one approach to representing information in spiking neural
networks. An example of its application is the localisation of sound by barn
owls, which requires especially precise temporal coding. Depending on the
azimuthal
angle, the arrival times of sound signals are shifted between both ears. In
order to determine these interaural time differences, the phase difference of
the signals is measured. We implemented this biologically inspired network on a
neuromorphic hardware system and demonstrate spike-timing dependent plasticity
on an analog, highly accelerated hardware substrate. Our neuromorphic
implementation enables the resolution of time differences of less than 50 ns.
On-chip Hebbian learning mechanisms select inputs from a pool of neurons which
code for the same sound frequency. Hence, noise caused by different synaptic
delays across these inputs is reduced. Furthermore, learning compensates for
variations in neuronal and synaptic parameters caused by device mismatch
intrinsic to the neuromorphic substrate.
Comment: 7 pages, 7 figures; presented at IJCNN 2013 in Dallas, TX, USA.
Corrected version with updated STDP curves.
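A Jeffress-style reading of the mechanism: candidate internal delays are applied to one ear's spike train, and the delay yielding the most coincidences with the other ear's train estimates the interaural time difference. The sketch below is a software illustration of that principle under assumed spike statistics and coincidence window, not the analog hardware implementation.

```python
import numpy as np

def estimate_itd(left, right, delays, window=25e-9):
    """Estimate the interaural time difference by coincidence counting.

    left, right : spike times (seconds) from the two ears.
    delays      : candidate internal delays (seconds) to test.
    The 25 ns coincidence window is an assumption for this sketch.
    """
    def coincidences(d):
        diff = np.abs((left + d)[:, None] - right[None, :])
        return np.count_nonzero(diff < window)
    counts = [coincidences(d) for d in delays]
    return delays[int(np.argmax(counts))]

# Example: recover a ~100 ns ITD from jittered spike trains.
rng = np.random.default_rng(0)
base = np.sort(rng.uniform(0.0, 1e-3, 200))      # shared source spikes
left = base
right = base + 100e-9 + rng.normal(0.0, 10e-9, base.size)
print(estimate_itd(left, right, np.linspace(-200e-9, 200e-9, 81)))
```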
Neuro-memristive Circuits for Edge Computing: A review
The volume, veracity, variability, and velocity of data produced from the
ever-increasing network of sensors connected to the Internet pose challenges for
power management, scalability, and sustainability of cloud computing
infrastructure. Increasing the data processing capability of edge computing
devices at lower power requirements can reduce several overheads for cloud
computing solutions. This paper provides a review of neuromorphic
CMOS-memristive architectures that can be integrated into edge computing
devices. We discuss why neuromorphic architectures are useful for edge devices
and show the advantages, drawbacks, and open problems in the field of
neuro-memristive circuits for edge computing.