Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding
Precise spike timing as a means to encode information in neural networks is
biologically supported, and is advantageous over frequency-based codes because
it processes input features on a much shorter time-scale. For these reasons, much
recent attention has been focused on the development of supervised learning
rules for spiking neural networks that utilise a temporal coding scheme.
However, despite significant progress in this area, rules that are both
theoretically grounded and biologically relevant are still lacking. Here
we examine the general conditions under which synaptic plasticity most
effectively takes place to support the supervised learning of a precise
temporal code. As part of our analysis we examine two spike-based learning
methods: one of which relies on an instantaneous error signal to modify
synaptic weights in a network (INST rule), and the other on a filtered
error signal for smoother synaptic weight modifications (FILT rule). We test
the accuracy of the solutions provided by each rule with respect to their
temporal encoding precision, and then measure the maximum number of input
patterns they can learn to memorise using the precise timings of individual
spikes as an indication of their storage capacity. Our results demonstrate the
high performance of FILT in most cases, underpinned by the rule's
error-filtering mechanism, which is predicted to provide smooth convergence
towards a desired solution during learning. We also find FILT to be most
efficient at performing input pattern memorisations, and most noticeably when
patterns are identified using spikes with sub-millisecond temporal precision.
In comparison with existing work, we determine the performance of FILT to be
consistent with that of the highly efficient E-learning Chronotron, but with
the distinct advantage that FILT is also implementable as an online method for
increased biological realism.
Comment: 26 pages, 10 figures; this version is published in PLoS ONE and
incorporates reviewer comments.
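The contrast between the two rules can be sketched numerically. The following is a minimal, hypothetical illustration (not the paper's actual derivation) of how a raw, instantaneous spike-train error versus an exponentially filtered one drives synaptic weight updates; the function names, the exponential kernel, and all parameter values are assumptions.

```python
import numpy as np

def exp_filter(signal, tau=10.0, dt=1.0):
    """Causal exponential filter: a leaky running trace of the signal."""
    out = np.zeros_like(signal, dtype=float)
    acc = 0.0
    decay = np.exp(-dt / tau)
    for t, s in enumerate(signal):
        acc = acc * decay + s
        out[t] = acc
    return out

def inst_update(err, psp, eta=0.01):
    """INST-style update: weight change driven by the instantaneous
    error between target and actual spike trains at each time step."""
    return eta * psp.T @ err

def filt_update(err, psp, eta=0.01, tau=10.0, dt=1.0):
    """FILT-style update: the error is smoothed before being correlated
    with the presynaptic traces, giving smoother weight trajectories."""
    return eta * psp.T @ exp_filter(err, tau, dt)

# err[t] = target spike - actual spike; psp[t, i] = PSP trace of synapse i
err = np.array([0.0, 1.0, 0.0, -1.0])  # one missing, one spurious spike
psp = np.ones((4, 2))                  # two synapses, constant traces
print(inst_update(err, psp))  # the two errors cancel -> zero net update
print(filt_update(err, psp))  # filtering breaks the exact cancellation
```

The toy input shows why filtering matters: with an instantaneous error, a missing spike and a spurious spike can cancel in the summed update, whereas the filtered error weights them by their temporal context.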
Training Spiking Neural Networks Using Lessons From Deep Learning
The brain is the perfect place to look for inspiration to develop more
efficient neural networks. The inner workings of our synapses and neurons
provide a glimpse at what the future of deep learning might look like. This
paper serves as a tutorial and perspective showing how to apply the lessons
learnt from several decades of research in deep learning, gradient descent,
backpropagation and neuroscience to biologically plausible spiking neural
networks. We also explore the delicate interplay between encoding data
as spikes and the learning process; the challenges and solutions of applying
gradient-based learning to spiking neural networks; the subtle link between
temporal backpropagation and spike timing dependent plasticity; and how deep
learning might move towards biologically plausible online learning. Some ideas
are well accepted and commonly used amongst the neuromorphic engineering
community, while others are presented or justified for the first time here. A
series of companion interactive tutorials using our Python package, snnTorch,
is also made available:
https://snntorch.readthedocs.io/en/latest/tutorials/index.htm
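On the encoding side discussed above, the two most common schemes can be sketched in a few lines. This is a generic NumPy illustration rather than snnTorch's API; the function names and the linear intensity-to-latency mapping are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(x, steps):
    """Rate coding: an intensity in [0, 1] becomes the per-step
    probability of emitting a spike (a Bernoulli trial each step)."""
    return (rng.random((steps, x.size)) < x).astype(float)

def latency_encode(x, steps):
    """Latency coding: each input emits a single spike, and stronger
    inputs spike earlier in the time window."""
    t = np.clip(np.round((1.0 - x) * (steps - 1)).astype(int), 0, steps - 1)
    out = np.zeros((steps, x.size))
    out[t, np.arange(x.size)] = 1.0
    return out

x = np.array([1.0, 0.5, 0.0])                # three input intensities
print(rate_encode(x, 10).sum(axis=0))        # strong inputs spike often
print(latency_encode(x, 10).argmax(axis=0))  # strong inputs spike early
```

Rate coding spreads information over many stochastic spikes, while latency coding carries it in a single precise spike time, which is the trade-off the paper's discussion of encoding revolves around.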
An investigation into motor pools and their applicability to a biologically inspired model of ballistic voluntary motor action
This study investigates the properties of motor pools in the human motor control
system. The simulations carried out as part of this study used two biologically
inspired neuronal models to simulate networks with properties similar to those
observed in the human motor system (Burke, 1991). The Synchronous neuronal
model developed as part of this study explicitly models the input/output spike train
and frequency relationship of each neuron. The motor pool simulations were carried
out using the INSIGHT TOO simulation software developed as part of this study.
INSIGHT TOO is a flexible neural design tool that allows the visual interactive
design of network connectivity and has the power of a node specification language
similar to that of BASIC that allows multi-layer, multi-model networks to be
simulated. The simulations have shown that the motor pools are capable of
reproducing commonly observed physiological properties during normal voluntary
reaching movements. As a result of these findings, a theoretical model of
ballistic voluntary motor action, called the Recruitment Model, was proposed.
The Recruitment Model utilises the "recruitment" principle known to exist in motor
pools and applies this distributed processing methodology to the higher levels of
motor action to explain how complex structures similar to the human skeletal
system might be controlled. A simple version of the Recruitment Model is simulated
showing an animation of a running "stick man". This simulation demonstrates some
of the principles necessary to solve problems relating to synergy formation.
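As a hypothetical sketch (not the thesis's actual simulation), the recruitment principle can be reduced to a common drive signal crossing per-unit thresholds, with low-threshold units recruited first; the function name and the numbers below are illustrative assumptions.

```python
def pool_force(thresholds, twitch_forces, drive):
    """Total output of a motor pool under a common drive signal: a unit
    contributes its force once the drive exceeds its recruitment
    threshold, so low-threshold (small) units are recruited first."""
    return sum(f for th, f in sorted(zip(thresholds, twitch_forces))
               if drive >= th)

# three motor units: smaller units have lower thresholds and forces
thresholds = [1.0, 2.0, 4.0]
forces = [1.0, 2.0, 10.0]
print(pool_force(thresholds, forces, 2.5))  # recruits the two small units
print(pool_force(thresholds, forces, 5.0))  # full recruitment of the pool
```

Grading output by recruiting progressively larger units from a single drive signal is the distributed-processing idea the Recruitment Model extends to higher levels of motor action.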
A bottom-up approach to emulating emotions using neuromodulation in agents
A bottom-up approach to emulating emotions is expounded in this thesis. It is intended to be useful in research where a phenomenon is to be emulated but its nature cannot easily be defined. This approach advocates not only emulating the underlying mechanisms proposed to give rise to emotion in natural agents, but also keeping an open mind as to what the phenomenon actually is. There is evidence to suggest that neuromodulation is inherently responsible for giving rise to emotions in natural agents and that emotions consequently modulate the agent's behaviour. The functionality provided by neuromodulation, when applied to agents with self-organising, biologically plausible neural networks, is isolated and studied. In research efforts such as this, the definition should emerge from the evidence rather than being postulated from limited information and then implemented. An implementation of a working definition only tells us that the definition can be implemented; it does not tell us whether that working definition is itself correct and matches the phenomenon in the real world. If such a model of emotions were assumed to be true and implemented in an agent, there would be a danger of precluding implementations that could offer alternative theories as to the relevance of neuromodulation to emotions. By isolating and studying different mechanisms, such as neuromodulation, that are thought to give rise to emotions, theories can arise as to what emotions are and the functionality that they provide. The application of this approach concludes with a theory as to how some emotions can operate via the use of neuromodulators. The theory is explained using the concepts of dynamical systems, free-energy and entropy.
EPSRC; Stirling University, Computing Science departmental grant
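The core mechanism the thesis isolates, a neuromodulator gating plasticity and behaviour, can be illustrated with a minimal three-factor update rule. This sketch is a common textbook form, not the thesis's model; the function name and parameter values are assumptions.

```python
def modulated_hebbian(w, pre, post, modulator, eta=0.1):
    """Three-factor update: a Hebbian co-activity term (pre * post)
    is gated by a global neuromodulator level, so the same local
    correlation can strengthen, weaken, or leave a weight unchanged."""
    return w + eta * modulator * pre * post

w = 0.5
print(modulated_hebbian(w, 1.0, 1.0, modulator=1.0))   # positive: strengthened
print(modulated_hebbian(w, 1.0, 1.0, modulator=0.0))   # zero: no change
print(modulated_hebbian(w, 1.0, 1.0, modulator=-1.0))  # negative: weakened
```

The point of the example is that the modulator acts as a broadcast signal decoupled from local activity, which is the kind of functionality the thesis proposes may underlie emotion-like behaviour modulation.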
Bio-mimetic Spiking Neural Networks for unsupervised clustering of spatio-temporal data
Spiking neural networks aspire to mimic the brain more closely than traditional artificial neural networks. They are characterised by a spike-like activation function inspired by the shape of an action potential in biological neurons. Spiking networks remain a niche area of research, perform worse than traditional artificial networks, and their real-world applications are limited. We hypothesised that neuroscience-inspired spiking neural networks with spike-timing-dependent plasticity demonstrate useful learning capabilities. Our objective was to identify features which play a vital role in information processing in the brain but are not commonly used in artificial networks, to implement them in spiking networks without copying the constraints that apply to living organisms, and to characterise their effect on data processing. The networks we created are not brain models; our approach can be labelled as artificial life. We performed a literature review and selected features such as local weight updates, neuronal sub-types, modularity, homeostasis and structural plasticity. We used the review as a guide for developing the consecutive iterations of the network, and eventually a whole evolutionary developmental system. We analysed the model's performance on clustering of spatio-temporal data. Our results show that combining evolution and unsupervised learning leads to faster convergence on optimal solutions and better stability of the fitted solutions than either approach alone. The choice of fitness definition affects the network's performance on fitness-related and unrelated tasks. We found that neuron type-specific weight homeostasis can be used to stabilise the networks, thus enabling longer training. We also demonstrated that networks with a rudimentary architecture can evolve developmental rules which improve their fitness.
This interdisciplinary work provides contributions to three fields: it proposes novel artificial intelligence approaches, tests the possible role of the selected biological phenomena in information processing in the brain, and explores the evolution of learning in an artificial life system.
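The spike-timing-dependent plasticity at the heart of these networks is, in its standard pair-based form, a pair of exponential windows over the pre/post spike-time difference. The sketch below is the generic textbook rule, not this thesis's implementation; the amplitudes and time constant are assumptions.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window: potentiation when the presynaptic spike
    precedes the postsynaptic one (delta_t = t_post - t_pre > 0),
    depression when the order is reversed. The effect decays
    exponentially with the absolute timing difference."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

print(stdp_dw(10.0))   # positive: pre before post -> potentiation
print(stdp_dw(-10.0))  # negative: post before pre -> depression
```

Because the update depends only on locally available spike times, it is exactly the kind of local weight update the abstract lists among the brain-inspired features under study.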
Postnatal Development of the Action Potential Waveform in Cortical Neurons: A Biophysical Perspective
Many-core and heterogeneous architectures: programming models and compilation toolchains
The abstract is in the attachment. (677. INGEGNERIA INFORMATICA; partially open, embargoed until 2021-10-02. Barchi, Francesc)