Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface
Experiments show that spike-triggered stimulation performed with
Bidirectional Brain-Computer-Interfaces (BBCI) can artificially strengthen
connections between separate neural sites in motor cortex (MC). What are the
neuronal mechanisms responsible for these changes and how does targeted
stimulation by a BBCI shape population-level synaptic connectivity? The present
work describes a recurrent neural network model with probabilistic spiking
mechanisms and plastic synapses capable of capturing both neural and synaptic
activity statistics relevant to BBCI conditioning protocols. When spikes from a
neuron recorded at one MC site trigger stimuli at a second target site after a
fixed delay, the connections between sites are strengthened for spike-stimulus
delays consistent with experimentally derived spike time dependent plasticity
(STDP) rules. However, the relationship between STDP mechanisms at the network
level and their modification by neural implants remains poorly understood.
Using our model, we successfully reproduce key experimental results and
support them with analytical derivations along with novel experimental data.
We then derive optimal operational regimes for BBCIs and formulate predictions
concerning the efficacy of spike-triggered stimulation in different regimes of
cortical activity.
Comment: 35 pages, 9 figures
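The dependence of connection strengthening on spike-stimulus delay described above can be illustrated with a minimal sketch of a standard exponential STDP window. The function name and all parameter values (amplitudes, time constants) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def stdp_weight_change(delay_ms, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre-before-post delay (positive delay_ms)
    or post-before-pre delay (negative delay_ms), using the common
    exponential STDP window."""
    if delay_ms >= 0:
        # Causal ordering: potentiation, decaying with the delay.
        return a_plus * np.exp(-delay_ms / tau_plus)
    # Anti-causal ordering: depression.
    return -a_minus * np.exp(delay_ms / tau_minus)
```

In a BBCI conditioning setting, delay_ms would correspond to the interval between the recorded spike and the triggered stimulus at the target site, which is why only certain spike-stimulus delays strengthen the connection.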
Variational Learning for Recurrent Spiking Networks
We derive a plausible learning rule for updating the synaptic efficacies of feedforward, feedback, and lateral connections between observed and latent neurons. Operating in the context of a generative model for distributions of spike sequences, the learning mechanism is derived from variational inference principles. The resulting synaptic plasticity rules are notable in that they strongly resemble experimentally observed results on Spike Time Dependent Plasticity, and in that they differ for excitatory and inhibitory neurons. A simulation confirms the method's applicability to learning both stationary and temporal spike patterns.
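To see how variational principles can yield a plasticity-like rule, consider the simplest case: one latent Bernoulli neuron whose spike probability is a sigmoid of its weighted input. The gradient of the spike log-probability then takes a pre-times-post-error form. This sketch is our own illustration of that general idea, not the rule derived in the paper; all names are hypothetical.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def variational_update(w, pre, latent_spike, eta=0.01):
    # For log q(s) = s*log(p) + (1-s)*log(1-p) with p = sigmoid(w . pre),
    # the gradient with respect to w is (s - p) * pre: a presynaptic
    # term gated by a postsynaptic prediction error.
    p = sigmoid(np.dot(w, pre))
    return w + eta * (latent_spike - p) * pre
```

An unexpected spike (s = 1, low p) potentiates the active inputs, while an absent spike (s = 0, high p) depresses them, which is the qualitative signature such derived rules share with STDP-like plasticity.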
Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding
Precise spike timing as a means to encode information in neural networks is
biologically supported, and is advantageous over frequency-based codes by
processing input features on a much shorter time-scale. For these reasons, much
recent attention has been focused on the development of supervised learning
rules for spiking neural networks that utilise a temporal coding scheme.
However, despite significant progress in this area, rules that both have a
theoretical basis and can be considered biologically relevant are still lacking. Here
we examine the general conditions under which synaptic plasticity most
effectively takes place to support the supervised learning of a precise
temporal code. As part of our analysis we examine two spike-based learning
methods: one relying on an instantaneous error signal to modify
synaptic weights in a network (the INST rule), and the other on a filtered
error signal for smoother synaptic weight modifications (the FILT rule). We test
the accuracy of the solutions provided by each rule with respect to their
temporal encoding precision, and then measure the maximum number of input
patterns they can learn to memorise using the precise timings of individual
spikes as an indication of their storage capacity. Our results demonstrate the
high performance of FILT in most cases, underpinned by the rule's
error-filtering mechanism, which is predicted to provide smooth convergence
towards a desired solution during learning. We also find FILT to be the most
efficient at memorising input patterns, most noticeably when
patterns are identified using spikes with sub-millisecond temporal precision.
In comparison with existing work, we determine the performance of FILT to be
consistent with that of the highly efficient E-learning Chronotron, but with
the distinct advantage that FILT is also implementable as an online method for
increased biological realism.
Comment: 26 pages, 10 figures; this version is published in PLoS ONE and
incorporates reviewer comments
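The contrast between the two rules above can be sketched in a few lines: INST applies the raw instantaneous error, while FILT first low-pass filters it. The signatures and parameter values below are our own simplification; the actual rules in the paper are derived from spiking neuron dynamics.

```python
def inst_update(w, error, presyn_trace, eta=0.01):
    # INST-style: weight change proportional to the instantaneous
    # error (target minus actual output) times a presynaptic trace.
    return w + eta * error * presyn_trace

def filt_update(w, error, presyn_trace, filtered_error,
                tau=10.0, dt=1.0, eta=0.01):
    # FILT-style: low-pass filter the error first, yielding smoother
    # weight modifications; returns the weight and the filter state.
    filtered_error += (dt / tau) * (error - filtered_error)
    return w + eta * filtered_error * presyn_trace, filtered_error
```

Because the filtered error starts small and builds up, FILT makes smaller initial steps for the same error signal, which matches the smooth convergence behaviour attributed to it above.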
On-chip Few-shot Learning with Surrogate Gradient Descent on a Neuromorphic Processor
Recent work suggests that synaptic plasticity dynamics in biological models
of neurons and neuromorphic hardware are compatible with gradient-based
learning (Neftci et al., 2019). Gradient-based learning requires iterating
several times over a dataset, which is both time-consuming and constrains the
training samples to be independently and identically distributed. This is
incompatible with learning systems that do not have boundaries between training
and inference, such as in neuromorphic hardware. One approach to overcome these
constraints is transfer learning, where a portion of the network is pre-trained
and mapped into hardware and the remaining portion is trained online. Transfer
learning has the advantage that pre-training can be accelerated offline if the
task domain is known, and few samples of each class are sufficient for learning
the target task at reasonable accuracies. Here, we demonstrate on-line
surrogate gradient few-shot learning on Intel's Loihi neuromorphic research
processor using features pre-trained with spike-based gradient
backpropagation-through-time. Our experimental results show that the Loihi chip
can learn gestures online using a small number of shots and achieve results
comparable to those of models simulated on a conventional processor.
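The surrogate gradient idea underlying this approach can be shown in miniature: the spike function is a non-differentiable threshold, so its derivative is replaced by a smooth surrogate during backpropagation. The fast-sigmoid shape below is one common choice; the function names and the beta value are illustrative assumptions.

```python
import numpy as np

def spike(v, threshold=1.0):
    # Forward pass: non-differentiable Heaviside spike function.
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    # Backward pass: replace the Heaviside's zero/undefined gradient
    # with a smooth fast-sigmoid-shaped surrogate, peaked at threshold.
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2
```

Training then proceeds as ordinary gradient descent (here through time, as in the spike-based backpropagation-through-time used for pre-training), with the surrogate standing in for the spike derivative wherever it appears in the chain rule.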
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement to
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and its stochasticity. Furthermore, such plasticity rules
generally show much higher performance than that of classical Spike Time
Dependent Plasticity (STDP) rules. This chapter reviews recent developments
in learning with spiking neural network models and their possible
implementation in memristor-based hardware.
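The "multifactor" rules mentioned above extend the two-factor (pre x post) Hebbian form with at least one additional modulatory signal. A minimal three-factor sketch, with hypothetical names and an illustrative learning rate:

```python
def three_factor_update(w, pre_trace, post_activity, modulator, eta=0.005):
    # Multifactor ("three-factor") rule: a global modulatory signal
    # (e.g. an error or reward term) gates a local Hebbian
    # pre x post term. With modulator = 0 no learning occurs, unlike
    # classical STDP, which depends on spike timing alone.
    return w + eta * modulator * pre_trace * post_activity
```

The modulatory factor is what lets such rules be matched to device behaviour: it can absorb, for instance, the stochastic or state-dependent response of a memristive synapse rather than treating the device as a passive memory cell.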
Slowness: An Objective for Spike-Timing-Dependent Plasticity?
Slow Feature Analysis (SFA) is an efficient algorithm for
learning input-output functions that extract the most slowly varying features from a quickly varying signal. It
has been successfully applied to the unsupervised learning
of translation-, rotation-, and other invariances in a
model of the visual system, to the learning of complex cell
receptive fields, and, combined with a sparseness
objective, to the self-organized formation of place cells
in a model of the hippocampus.
In order to arrive at a biologically more plausible implementation of this learning rule, we consider analytically how SFA could be realized in simple linear continuous and spiking model neurons. It turns out that for the continuous model neuron SFA can be implemented by means of a modified version of standard Hebbian learning. In this framework we provide a connection to the trace learning rule for invariance learning. We then show that for Poisson neurons spike-timing-dependent plasticity (STDP) with a specific learning window can learn the same weight distribution as SFA. Surprisingly, we find that the appropriate learning rule reproduces the typical STDP learning window. The shape as well as the timescale are in good agreement with what has been measured experimentally. This offers a completely novel interpretation for the functional role of spike-timing-dependent plasticity in physiological neurons.
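The slowness objective can be made concrete for the linear case discussed above: linear SFA finds the projection that minimises the variance of the signal's temporal differences subject to unit output variance, which reduces to a generalised eigenvalue problem. The function below is our own minimal illustration of that reduction, not the article's neural implementation.

```python
import numpy as np

def slowest_feature(x):
    """Return the weight vector of the slowest linear feature of x.

    x has shape (T, n). Minimising the variance of the feature's
    temporal differences under a unit-variance constraint reduces to
    the generalised eigenproblem A w = lambda B w, with A the
    covariance of the differenced signal and B that of the signal;
    the eigenvector with the smallest eigenvalue is the slowest.
    """
    x = x - x.mean(axis=0)          # centre the signal
    dx = np.diff(x, axis=0)         # temporal differences
    A = dx.T @ dx / len(dx)
    B = x.T @ x / len(x)
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    slowest = np.argmin(eigvals.real)
    return eigvecs[:, slowest].real
```

For a signal mixing a slow sinusoid with a fast one, the recovered weight vector concentrates on the slow component; the STDP result above says that, for Poisson neurons, a suitably shaped learning window drives the weights toward this same solution.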