Temporal Map Formation in the Barn Owl’s Brain
Barn owls provide an experimentally well-specified example of a temporal map, a neuronal representation of the outside world in the brain by means of time. Their laminar nucleus exhibits a place code of interaural time differences, a cue which is used to determine the azimuthal location of a sound stimulus, e.g., prey. We analyze a model of synaptic plasticity that explains the formation of such a representation in the young bird and show how, in a large parameter regime, a combination of local and nonlocal synaptic plasticity yields the temporal map as found experimentally. Our analysis includes the effect of nonlinearities as well as the influence of neuronal noise.
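The place code described above can be caricatured by a Jeffress-type delay-line scheme: an array of coincidence detectors, each compensating a different interaural time difference (ITD) through an internal axonal delay, so that the detector whose delay cancels the stimulus ITD responds most strongly. The sketch below is illustrative only (tone frequency, delay range, and the rectified-product coincidence measure are assumptions, not the paper's model):

```python
import numpy as np

def itd_place_code(stimulus_itd, detector_delays, freq=500.0):
    """Activity of an array of coincidence detectors for a pure-tone stimulus.

    Each detector delays the left-ear signal by its own axonal delay; its
    activity is the rectified product of the two ear signals, a crude
    coincidence measure. All parameter values are illustrative.
    """
    t = np.linspace(0.0, 0.01, 2000)                       # 10 ms of signal
    right = np.sin(2 * np.pi * freq * (t - stimulus_itd))  # right ear lags by the ITD
    activities = []
    for d in detector_delays:
        left = np.sin(2 * np.pi * freq * (t - d))          # left ear after internal delay d
        activities.append(np.mean(np.clip(left * right, 0.0, None)))
    return np.array(activities)

# Detectors spanning -200..+200 microseconds in 10-microsecond steps:
delays = np.linspace(-200e-6, 200e-6, 41)
act = itd_place_code(stimulus_itd=100e-6, detector_delays=delays)
best = delays[np.argmax(act)]    # the detector whose delay matches the ITD wins
```

The location of the activity peak across the array is the place code: azimuth is read off from which detector fires most, not from when any single neuron fires.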
Renewal theory of coupled neuronal pools
A theory is provided to analyze the dynamics of delay-coupled pools of spiking neurons based on stability analysis of stationary firing. Transitions between stable and unstable regimes can be predicted by bifurcation analysis of the underlying integral dynamics. Close to the bifurcation point, the network exhibits slowly changing activities and allows for slow collective phenomena such as continuous attractors.
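The stable/unstable transition can be illustrated with a toy delayed rate equation for a single self-coupled inhibitory pool, a drastic simplification of the integral dynamics analyzed above (gain, delay, and the tanh transfer function are all assumptions):

```python
import numpy as np

def simulate_pool(gain, delay=2.0, dt=0.01, t_end=200.0):
    """Euler integration of  da/dt = -a + tanh(-gain * a(t - delay)).

    Linearizing around a = 0 gives the characteristic equation
    lambda = -1 - gain * exp(-lambda * delay); for delay = 2 the fixed
    point loses stability in a Hopf bifurcation near gain ~ 1.5.
    """
    steps = int(t_end / dt)
    lag = int(delay / dt)
    a = np.full(steps + lag, 0.1)        # constant history as initial condition
    for i in range(lag, steps + lag - 1):
        a[i + 1] = a[i] + dt * (-a[i] + np.tanh(-gain * a[i - lag]))
    return a[-int(50.0 / dt):]           # keep only the late, post-transient part

quiet = simulate_pool(gain=1.0)   # below the bifurcation: activity settles
osc = simulate_pool(gain=2.5)     # above it: sustained collective oscillation
```

Below the predicted bifurcation the perturbed pool relaxes back to stationary firing; above it, a limit cycle emerges, mirroring the transition between stable and unstable regimes.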
Adaptation of Binaural Processing in the Adult Brainstem Induced by Ambient Noise
Interaural differences in stimulus intensity and timing are major cues for sound localization. In mammals, these cues are first processed in the lateral and medial superior olive by interaction of excitatory and inhibitory synaptic inputs from ipsi- and contralateral cochlear nucleus neurons. To preserve sound localization acuity following changes in the acoustic environment, the processing of these binaural cues requires neuronal adaptation. Recent studies have shown that binaural sensitivity adapts to stimulation history within milliseconds, but the actual extent of binaural adaptation is unknown. In the current study, we investigated long-term effects on binaural sensitivity using extracellular in vivo recordings from single neurons in the dorsal nucleus of the lateral lemniscus that inherit their binaural properties directly from the lateral and medial superior olives. In contrast to most previous studies, we used a noninvasive approach to influence this processing. Adult gerbils were exposed for 2 weeks to moderate noise with no stable binaural cue. We found monaural response properties to be unaffected by this manipulation. However, neuronal sensitivity to binaural cues was reversibly altered for a few days. Computational models of sensitivity to interaural time and level differences suggest that upregulation of inhibition in the superior olivary complex can explain the electrophysiological data.
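The modeling conclusion, that upregulated inhibition can shift binaural sensitivity, can be illustrated with a toy LSO-style subtraction model (the sigmoidal monaural drives and gain values are assumptions for illustration, not the study's fitted model):

```python
import numpy as np

def lso_rate(ild_db, g_inh=1.0, slope=0.2):
    """Toy LSO neuron: rectified ipsilateral excitation minus contralateral
    inhibition, both sigmoidal in the interaural level difference (ILD)."""
    exc = 1.0 / (1.0 + np.exp(-slope * ild_db))   # ipsilateral drive
    inh = 1.0 / (1.0 + np.exp(slope * ild_db))    # contralateral drive
    return np.clip(exc - g_inh * inh, 0.0, None)

ilds = np.linspace(-30.0, 30.0, 121)              # ILD axis in dB

def half_max_ild(g_inh):
    """ILD at which the response reaches half its maximum."""
    r = lso_rate(ilds, g_inh)
    return ilds[np.argmin(np.abs(r - r.max() / 2.0))]

# Doubling the inhibitory gain shifts the tuning curve toward larger ILDs:
shift = half_max_ild(2.0) - half_max_ild(1.0)
```

A change in inhibitory gain alone thus displaces the half-maximum point of the binaural response curve, the kind of reversible shift in sensitivity reported above.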
Hippocampal Remapping Is Constrained by Sparseness rather than Capacity
Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodic entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only a few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we thus conclude that the projection from grid cells to the place cells does not use its full capacity to transfer space information. Both populations may encode different aspects of space.
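The remapping mechanism, new random grid phases per environment feeding a thresholded place cell, can be sketched in one dimension (the grid spacings, cosine firing profile, and threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def grid_activity(x, spacing, phase):
    """1-D caricature of a grid cell: periodic firing bumps along a track."""
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * (x - phase) / spacing))

def place_fields(x, phases, spacings, threshold=0.4):
    """Place-cell output: thresholded mean of its grid-cell inputs."""
    drive = np.mean([grid_activity(x, s, p)
                     for s, p in zip(spacings, phases)], axis=0)
    return np.clip(drive - threshold, 0.0, None)

x = np.linspace(0.0, 2.0, 400)            # a 2 m linear track
spacings = np.array([0.3, 0.42, 0.59])    # three grid modules

# Each environment draws new random grid phases -> global remapping,
# i.e., a new random arrangement of place fields:
fields_a = place_fields(x, rng.uniform(0, 1, 3) * spacings, spacings)
fields_b = place_fields(x, rng.uniform(0, 1, 3) * spacings, spacings)
```

The same feed-forward weights yield entirely different place-field layouts in the two "environments", driven purely by the phase shifts of the periodic inputs.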
Inhomogeneous sparseness leads to dynamic instability during sequence memory recall in a recurrent neural network model.
Theoretical models of associative memory generally assume most of their parameters to be homogeneous across the network. Conversely, biological neural networks exhibit high variability of structural as well as activity parameters. In this paper, we extend the classical clipped learning rule by Willshaw to networks with inhomogeneous sparseness, i.e., the number of active neurons may vary across memory items. We evaluate this learning rule for sequence memory networks with instantaneous feedback inhibition and show that, unsurprisingly, memory capacity degrades with increased variability in sparseness. The loss of capacity, however, is very small for short sequences of fewer than about 10 associations. Most interestingly, we further show that, due to feedback inhibition, overly large patterns are much less detrimental to memory capacity than overly small patterns.
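The clipped (Willshaw) learning rule and recall under instantaneous feedback inhibition can be sketched for the homogeneous baseline case (network size, pattern size, and sequence length below are illustrative; the paper's contribution concerns what happens when the pattern size varies across items):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, seq_len = 200, 10, 8    # neurons, active units per pattern, sequence length

# Random sparse patterns forming one sequence x_0 -> x_1 -> ... -> x_7:
patterns = np.zeros((seq_len, n), dtype=int)
for p in patterns:
    p[rng.choice(n, size=k, replace=False)] = 1

# Clipped Hebbian learning of each transition: a synapse is set to 1 if its
# pre/post pair was ever jointly active in consecutive patterns, else 0.
W = np.zeros((n, n), dtype=int)
for pre, post in zip(patterns[:-1], patterns[1:]):
    W = np.maximum(W, np.outer(post, pre))

def recall_step(x, W, k):
    """Instantaneous feedback inhibition: only the k most strongly
    driven neurons (with nonzero drive) stay active."""
    drive = W @ x
    winners = np.argsort(drive)[-k:]
    y = np.zeros_like(x)
    y[winners] = (drive[winners] > 0).astype(int)
    return y

# Replay the whole sequence from its first pattern:
recalled = [patterns[0].copy()]
for _ in range(seq_len - 1):
    recalled.append(recall_step(recalled[-1], W, k))
```

At this low load the replay is exact. The inhomogeneous case studied above replaces the fixed k with item-dependent pattern sizes; feedback inhibition then caps the damage done by overly large patterns, while overly small ones provide too little drive.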
Learning to Discriminate Through Long-Term Changes of Dynamical Synaptic Transmission
Short-term synaptic plasticity is modulated by long-term synaptic changes. There is, however, no general agreement on the computational role of this interaction. Here, we derive a learning rule for the release probability and the maximal synaptic conductance in a circuit model with combined recurrent and feedforward connections that allows learning to discriminate among natural inputs. Short-term synaptic plasticity thereby provides a nonlinear expansion of the input space of a linear classifier, whereas the random recurrent network serves to decorrelate the expanded input space. Computer simulations reveal that the twofold increase in the number of input dimensions through short-term synaptic plasticity improves the performance of a standard perceptron by up to 100%. The distributions of release probabilities and maximal synaptic conductances at the capacity limit strongly depend on the balance between excitation and inhibition. The model also suggests a new computational interpretation of spikes evoked by stimuli outside the classical receptive field. These neuronal activities may reflect decorrelation of the expanded stimulus space by intracortical synaptic connections.
- …