Slowness: An Objective for Spike-Timing-Dependent Plasticity?
Slow Feature Analysis (SFA) is an efficient algorithm for
learning input-output functions that extract the most slowly varying features from a quickly varying signal. It
has been successfully applied to the unsupervised learning
of translation-, rotation-, and other invariances in a
model of the visual system, to the learning of complex cell
receptive fields, and, combined with a sparseness
objective, to the self-organized formation of place cells
in a model of the hippocampus.
In order to arrive at a biologically more plausible implementation of this learning rule, we consider analytically how SFA could be realized in simple linear continuous and spiking model neurons. It turns out that for the continuous model neuron SFA can be implemented by means of a modified version of standard Hebbian learning. In this framework we provide a connection to the trace learning rule for invariance learning. We then show that for Poisson neurons spike-timing-dependent plasticity (STDP) with a specific learning window can learn the same weight distribution as SFA. Surprisingly, we find that the appropriate learning rule reproduces the typical STDP learning window: both the shape and the timescale are in good agreement with what has been measured experimentally. This offers a completely novel interpretation for the functional role of spike-timing-dependent plasticity in physiological neurons.
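The slowness objective itself can be stated concretely. As a point of reference (a textbook closed-form computation, not the paper's neural implementation), a minimal linear SFA whitens the input and then takes the direction in which the temporal derivative has least variance; all signal parameters below are illustrative:

```python
import numpy as np

def linear_sfa(x, n_components=1):
    """Minimal linear Slow Feature Analysis.

    x: array of shape (T, D), a quickly varying multivariate signal.
    Returns a (D, n_components) weight matrix extracting the slowest features.
    """
    x = x - x.mean(axis=0)
    # Whiten: decorrelate and normalize the input
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    whiten = eigvec / np.sqrt(eigval)
    z = x @ whiten
    # Slowest directions = smallest-eigenvalue eigenvectors of the
    # covariance of temporal differences
    dval, dvec = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return whiten @ dvec[:, :n_components]

# Usage: a slow sine hidden in a fast mixture is recovered
t = np.linspace(0, 2 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(29 * t)
x = np.stack([slow + 0.5 * fast, 0.5 * slow - fast], axis=1)
w = linear_sfa(x)
y = (x - x.mean(axis=0)) @ w
corr = abs(np.corrcoef(y[:, 0], slow)[0, 1])   # close to 1
```

The abstract's point is that this batch eigenvalue computation can instead be reached online by a modified Hebbian rule, and by STDP in the spiking case.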
Regulation of circuit organization and function through inhibitory synaptic plasticity
Diverse inhibitory neurons in the mammalian brain shape circuit connectivity and dynamics through mechanisms of synaptic plasticity. Inhibitory plasticity can establish excitation/inhibition (E/I) balance, control neuronal firing, and affect local calcium concentration, hence regulating neuronal activity at the network, single neuron, and dendritic level. By identifying phenomenological learning rules amenable to mathematical analysis, computational models can synthesize multiple experimental results and provide insight into how inhibitory plasticity controls circuit dynamics and sculpts connectivity. We highlight recent studies on the role of inhibitory plasticity in modulating excitatory plasticity, forming structured networks underlying memory formation and recall, and implementing adaptive phenomena and novelty detection. We conclude with experimental and modeling progress on the role of interneuron-specific plasticity in circuit computation and context-dependent learning.
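Many of the phenomenological rules referred to here share one ingredient: inhibitory weights grow when postsynaptic activity exceeds a target rate and shrink otherwise. A rate-based sketch of such a homeostatic rule (illustrative parameters, not taken from any specific study):

```python
import numpy as np

def simulate_ei_balance(steps=5000, eta=0.001, r_target=5.0, seed=0):
    """Rate-based sketch of homeostatic inhibitory plasticity: the
    inhibitory weight is potentiated when the postsynaptic rate exceeds
    the target and depressed when it falls below, restoring E/I balance."""
    rng = np.random.default_rng(seed)
    w_e, w_i = 1.0, 0.1        # fixed excitatory, plastic inhibitory weight
    r_inh = 10.0               # presynaptic inhibitory rate (Hz)
    rates = []
    for _ in range(steps):
        r_exc = 20.0 + rng.normal(0.0, 2.0)        # noisy excitatory drive
        r_post = max(w_e * r_exc - w_i * r_inh, 0.0)
        w_i = max(w_i + eta * r_inh * (r_post - r_target), 0.0)
        rates.append(r_post)
    return float(np.mean(rates[-500:]))

final_rate = simulate_ei_balance()   # settles near r_target = 5.0
```

The fixed point places the inhibitory weight exactly where excitation minus inhibition equals the target rate, which is the E/I-balance property the review emphasizes.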
Independent Component Analysis in Spiking Neurons
Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing-dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered when the activity of different neurons is decorrelated by adaptive lateral inhibition.
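The spiking mechanism proposed in the paper is not reproduced here, but the computation it approximates, extracting one independent component per neuron, can be illustrated with a standard one-unit FastICA iteration on whitened data (a conventional stand-in, with tanh as the assumed nonlinearity and made-up sources):

```python
import numpy as np

def one_unit_ica(xw, iters=100, seed=0):
    """One-unit FastICA fixed-point iteration (tanh nonlinearity) on
    whitened data xw: rotates a weight vector toward a maximally
    non-Gaussian, i.e. independent, projection."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=xw.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        y = np.tanh(xw @ w)
        w = (xw * y[:, None]).mean(axis=0) - (1.0 - y ** 2).mean() * w
        w /= np.linalg.norm(w)
    return w

# Usage: recover one independent source from a linear mixture
rng = np.random.default_rng(1)
s = np.stack([rng.laplace(size=20000), rng.uniform(-1, 1, size=20000)], axis=1)
x = s @ np.array([[1.0, 0.4], [0.6, 1.0]])     # mixed observations
x -= x.mean(axis=0)
d, v = np.linalg.eigh(np.cov(x, rowvar=False))
xw = x @ (v / np.sqrt(d))                      # whitened mixture
y = xw @ one_unit_ica(xw)
corr = max(abs(np.corrcoef(y, s[:, 0])[0, 1]),
           abs(np.corrcoef(y, s[:, 1])[0, 1]))
```

In the paper's framing, STDP plus synaptic scaling plays the role of the weight iteration, and intrinsic plasticity plays the role of the output nonlinearity/normalization.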
On the Role of Sensory Cancellation and Corollary Discharge in Neural Coding and Behavior
Studies of cerebellum-like circuits in fish have demonstrated that synaptic plasticity shapes the motor corollary discharge responses of granule cells into highly specific predictions of self-generated sensory input. However, the functional significance of such predictions, known as negative images, has not been directly tested. Here we provide evidence for improvements in neural coding and behavioral detection of prey-like stimuli due to negative images. In addition, we find that manipulating synaptic plasticity leads to specific changes in circuit output that disrupt neural coding and detection of prey-like stimuli. These results link synaptic plasticity, neural coding, and behavior and also provide a circuit-level account of how combining external sensory input with internally-generated predictions enhances sensory processing. In addition, the mammalian dorsal cochlear nucleus (DCN) integrates auditory nerve input with a diverse array of sensory and motor signals processed within circuitry similar to the cerebellum. Yet how the DCN contributes to early auditory processing has been a longstanding puzzle. Using electrophysiological recordings in mice during licking behavior, we show that DCN neurons are largely unaffected by self-generated sounds while remaining sensitive to external acoustic stimuli. Recordings in deafened mice, together with neural activity manipulations, indicate that self-generated sounds are cancelled by non-auditory signals conveyed by mossy fibers. In addition, DCN neurons exhibit gradual reductions in their responses to acoustic stimuli that are temporally correlated with licking. Together, these findings suggest that DCN may act as an adaptive filter for cancelling self-generated sounds. Adaptive filtering has been established previously for cerebellum-like sensory structures in fish, suggesting a conserved function for such structures across vertebrates.
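The negative-image idea, learning a prediction of self-generated input from a corollary-discharge signal and subtracting it, can be sketched as an LMS-style adaptive filter. This is a deliberately simplified rate-based caricature with made-up signal dimensions, not the biological circuit:

```python
import numpy as np

def negative_image_filter(motor, sensory, eta=0.05):
    """LMS-style sketch: learn weights on a motor corollary-discharge
    signal that predict, and therefore cancel, the self-generated
    component of the sensory input (the 'negative image')."""
    w = np.zeros(motor.shape[1])
    residuals = []
    for m, s in zip(motor, sensory):
        err = s - w @ m          # sensory input minus learned prediction
        w += eta * err * m       # LMS update toward a better prediction
        residuals.append(err)
    return w, np.array(residuals)

rng = np.random.default_rng(0)
motor = rng.normal(size=(3000, 5))        # corollary-discharge features
true_w = rng.normal(size=5)
external = 0.1 * rng.normal(size=3000)    # weak external stimulus
sensory = motor @ true_w + external       # self-generated + external input
w, res = negative_image_filter(motor, sensory)
late_var = res[-500:].var()               # approaches var(external) = 0.01
```

After learning, the residual carries mostly the external stimulus, which is exactly the improvement in coding of prey-like (external) signals that the abstract describes.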
Auditory-Somatosensory Integration in Dorsal Cochlear Nucleus Mediates Normal and Phantom Sound Perception.
The dorsal cochlear nucleus (DCN) is the first auditory brainstem nucleus that processes and relays sensory information from multiple sensory modalities to higher auditory brain structures. Converging somatosensory and auditory inputs are integrated by bimodal DCN fusiform neurons, which use somatosensory context for improved auditory coding. Furthermore, phantom sound perception, or tinnitus, can be modulated or induced by somatosensory stimuli including facial pressure and has been linked to somatosensory-auditory processing in DCN. I present three in vivo neurophysiology studies in guinea pigs investigating the role of multisensory mechanisms in normal and tinnitus models.
1) DCN fusiform cells respond to sound with characteristic spike-timing patterns that are controlled by rapidly inactivating potassium conductances. I demonstrated here that somatosensory stimulation alters sound-evoked firing rates and temporal representations of sound for tens of milliseconds through synaptic modulation of intrinsic excitability.
2) Bimodal plasticity consists of alterations of sound-evoked responses for up to two hours after paired somatosensory-auditory stimulation. By varying the interval and order between sound and somatosensory stimuli, I demonstrated stimulus-timing dependent bimodal plasticity that implicates spike-timing dependent synaptic plasticity (STDP) as the underlying mechanism. The timing rules and time course of stimulus-timing dependent plasticity closely mimic those of STDP at synapses conveying somatosensory information to the DCN. These results suggest the DCN performs STDP-dependent adaptive processing such as suppression of body-generated sounds.
3) Finally, I assessed stimulus-timing dependence of bimodal plasticity in a tinnitus model. Guinea pigs were exposed to a narrowband noise that produced temporary shifts in auditory brainstem response thresholds and is known to produce tinnitus. Sixty percent of guinea pigs developed tinnitus according to behavioral testing by gap-induced prepulse inhibition of the acoustic startle response. Bimodal plasticity timing rules in animals with verified tinnitus were broader and more likely to be anti-Hebbian than those in sham animals or noise-exposed animals that did not develop tinnitus. Furthermore, exposed animals with tinnitus had weaker suppressive responses than either sham animals or exposed animals without tinnitus. These results suggest tinnitus development is linked to STDP, presenting a potential target for pharmacological or neuromodulatory tinnitus therapies.
PhD, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/97934/1/skoehler_1.pd
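The timing rules measured in parts 2 and 3 are conventionally summarized by an STDP window. A generic textbook window (illustrative parameters, not the measured DCN curves) and its anti-Hebbian sign flip look like:

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0):
    """Generic asymmetric STDP window (textbook form). dt is post-minus-pre
    spike time in ms: dt > 0 (pre leads post) gives potentiation,
    dt < 0 (pre lags post) gives depression."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.array([-40.0, -10.0, 10.0, 40.0])
hebbian = stdp_window(dts)        # depression for dt < 0, potentiation for dt > 0
anti_hebbian = -hebbian           # sign-flipped rule, as reported in tinnitus animals
```

"Broader" timing rules correspond to larger tau values, and "anti-Hebbian" to the sign flip shown in the last line.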
Synaptic motor adaptation: A three-factor learning rule for adaptive robotic control in spiking neural networks
Legged robots operating in real-world environments must possess the ability
to rapidly adapt to unexpected conditions, such as changing terrains and
varying payloads. This paper introduces the Synaptic Motor Adaptation (SMA)
algorithm, a novel approach to achieving real-time online adaptation in
quadruped robots through the utilization of neuroscience-derived rules of
synaptic plasticity with three-factor learning. To facilitate rapid adaptation,
we meta-optimize a three-factor learning rule via gradient descent to adapt to
uncertainty by approximating an embedding produced by privileged information
using only locally accessible onboard sensing data. Our algorithm performs
similarly to state-of-the-art motor adaptation algorithms and presents a clear
path toward achieving adaptive robotics with neuromorphic hardware.
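A generic three-factor update, of which the meta-optimized SMA rule is a specialized instance, can be sketched as follows. This is the textbook form (Hebbian eligibility trace gated by a modulatory signal), not the paper's learned rule:

```python
import numpy as np

def three_factor_update(w, pre, post, reward, trace, eta=0.01, tau_e=0.9):
    """One step of a generic three-factor rule: pre/post coactivity
    (factors 1 and 2) accumulates in a decaying eligibility trace, and
    weights change only when a third, modulatory factor arrives."""
    trace = tau_e * trace + np.outer(post, pre)   # eligibility trace
    w = w + eta * reward * trace                  # third factor gates learning
    return w, trace

# Usage: without the third factor, eligibility accumulates but weights stay put
pre, post = np.ones(3), np.ones(2)
w, trace = np.zeros((2, 3)), np.zeros((2, 3))
w, trace = three_factor_update(w, pre, post, reward=0.0, trace=trace)
frozen = w.copy()                                 # still all zeros
w, trace = three_factor_update(w, pre, post, reward=1.0, trace=trace)
```

In SMA, the third factor is not a scalar reward but a meta-optimized signal derived from onboard sensing, which is what makes the adaptation rapid and task-directed.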
Learning cortical hierarchies with temporal Hebbian updates
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and thus alternative biologically plausible training methods have been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of those models propose that local errors are calculated for each neuron by comparing apical and somatic activities. However, from a neuroscience perspective, it is not clear how a neuron could compare compartmental signals. Here, we propose a solution to this problem in that we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions that we show to be equivalent to the error-based losses used in machine learning: the inference latency and the amount of top-down feedback necessary. Moreover, we show that the use of differential Hebbian updates works similarly well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that would explain how temporal Hebbian learning rules can implement supervised hierarchical learning.
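The differential Hebbian update described here is simple to state: the weight change is proportional to the presynaptic rate times the temporal derivative of the postsynaptic rate. A minimal discrete-time sketch (illustrative only, not the paper's full multi-compartment model):

```python
def differential_hebbian_step(w, pre, post_prev, post_now, eta=0.01):
    """Rate-based differential Hebbian update: the weight change is the
    presynaptic rate times the discrete temporal derivative of the
    postsynaptic rate (a rate analogue of the asymmetric STDP window)."""
    return w + eta * pre * (post_now - post_prev)

# An input active while the postsynaptic rate rises is strengthened;
# one active while it falls is weakened
w_up = differential_hebbian_step(0.0, pre=1.0, post_prev=1.0, post_now=2.0)
w_down = differential_hebbian_step(0.0, pre=1.0, post_prev=2.0, post_now=1.0)
```

The paper's mechanism lets apical feedback nudge the postsynaptic rate, so this derivative carries the top-down error signal without any explicit compartment comparison.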
Self-Organization of Spiking Neural Networks for Visual Object Recognition
On one hand, the visual system has the ability to differentiate between very similar
objects. On the other hand, we can also recognize the same object in images that vary
drastically, due to different viewing angle, distance, or illumination. The ability to
recognize the same object under different viewing conditions is called invariant object
recognition. Such object recognition capabilities are not immediately available after
birth, but are acquired through learning by experience in the visual world.
In many viewing situations, different views of the same object are seen in a temporal sequence, e.g. when we are moving an object in our hands while watching it. This creates temporal correlations between successive retinal projections that can be used to associate different views of the same object. Theorists have therefore proposed a synaptic plasticity rule with a built-in memory trace (trace rule).
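A common formulation of such a trace rule (Földiák-style; parameters illustrative) low-pass filters the postsynaptic activity so that temporally adjacent inputs drive the same weights:

```python
def trace_rule_step(w, x, y, y_trace, eta=0.01, mu=0.8):
    """One step of a trace rule: the Hebbian postsynaptic term is replaced
    by a running average ('trace') of recent activity, binding temporally
    adjacent inputs onto the same unit. mu sets the trace memory."""
    y_trace = mu * y_trace + (1 - mu) * y
    w = w + eta * y_trace * (x - w)     # trace-Hebbian with decay toward x
    return w, y_trace

# Usage: two successive 'views' x1, x2 of one object both pull the weight,
# because the trace carries activity from the first view into the second step
w, y_trace = 0.0, 0.0
for x in (1.0, 0.8):                    # a short temporal sequence of views
    w, y_trace = trace_rule_step(w, x, y=1.0, y_trace=y_trace)
```

The dissertation's first hypothesis replaces this explicit synaptic trace with persistent firing of recurrently connected neurons serving the same memory function.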
In this dissertation I present spiking neural network models that offer possible
explanations for learning of invariant object representations. These models are based
on the following hypotheses:
1. Instead of a synaptic trace rule, persistent firing of recurrently connected groups
of neurons can serve as a memory trace for invariance learning.
2. Short-range excitatory lateral connections enable learning of self-organizing
topographic maps that represent temporal as well as spatial correlations.
3. When trained with sequences of object views, such a network can learn representations that enable invariant object recognition by clustering different views of the same object within a local neighborhood.
4. Learning of representations for very similar stimuli can be enabled by adaptive
inhibitory feedback connections.
The study presented in chapter 3.1 details an implementation of a spiking neural network to test the first three hypotheses. This network was tested with stimulus sets that were designed in two feature dimensions to separate the impact of temporal and spatial correlations on learned topographic maps. The emerging topographic maps showed patterns that were dependent on the temporal order of object views during training. Our results show that pooling over local neighborhoods of the topographic map enables invariant recognition.
Chapter 3.2 focuses on the fourth hypothesis. There we examine how adaptive feedback inhibition (AFI) can improve the ability of a network to discriminate between very similar patterns. The results show that with AFI, learning is faster and the network learns selective representations for stimuli with higher levels of overlap than without AFI.
Results of chapter 3.1 suggest a functional role for topographic object representations that are known to exist in the inferotemporal cortex, and suggest a mechanism for the development of such representations. The AFI model implements one aspect of predictive coding: subtraction of a prediction from the actual input of a system. The successful implementation in a biologically plausible network of spiking neurons shows that predictive coding can play a role in cortical circuits.
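The prediction-subtraction aspect of AFI can be illustrated with a toy computation (a deliberately minimal caricature, not the spiking model): removing the component shared by two overlapping patterns makes them far easier to separate.

```python
import numpy as np

def afi_subtract(patterns):
    """Toy version of the AFI idea: inhibitory feedback subtracts the
    component shared by the stored patterns (the 'prediction'), leaving
    only the distinctive parts of each pattern."""
    prediction = patterns.mean(axis=0)
    return patterns - prediction

# Two highly overlapping stimuli that differ only in their last two features
a = np.array([1.0, 1.0, 1.0, 0.9, 0.1])
b = np.array([1.0, 1.0, 1.0, 0.1, 0.9])
before = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))      # cosine overlap
ra, rb = afi_subtract(np.stack([a, b]))
after = ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb))   # after subtraction
```

The cosine overlap drops sharply once the shared prediction is removed, which is why AFI lets the network form selective representations for stimuli with higher overlap.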