How spiking neurons give rise to a temporal-feature map
A temporal-feature map is a topographic neuronal representation of temporal attributes of phenomena or objects that occur in the outside world. We explain the evolution of such maps by means of a spike-based Hebbian learning rule combined with a presynaptically unspecific contribution: if a synapse changes, all other synapses connected to the same axon change by a small fraction as well. The learning equation is solved for the case of an array of Poisson neurons. We discuss the evolution of a temporal-feature map and the synchronization of the single cells' synaptic structures as a function of the strength of presynaptic unspecific learning. We also give an upper bound for the magnitude of the presynaptic interaction by estimating its impact on the noise level of synaptic growth. Finally, we compare the results with those obtained from a learning equation for nonlinear neurons and show that synaptic structure formation may profit from the nonlinearity.
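The presynaptically unspecific contribution lends itself to a compact implementation. Below is a minimal sketch, assuming rate-coded Poisson inputs and a linear postsynaptic response; the learning rate eta, the unspecific fraction eps, the network sizes, and the weight bounds are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_post, n_pre = 8, 16                    # postsynaptic cells, presynaptic axons
W = rng.uniform(0.0, 0.1, size=(n_post, n_pre))
eta, eps = 1e-3, 0.05                    # learning rate, unspecific fraction

for _ in range(10_000):
    x = rng.poisson(lam=2.0, size=n_pre)         # presynaptic spike counts
    y = W @ x                                    # linear postsynaptic response
    dW = eta * np.outer(y, x - x.mean())         # specific Hebbian change
    # unspecific part: each synapse also receives a fraction eps of the
    # changes occurring at all other synapses on the same axon
    dW = dW + eps * (dW.sum(axis=0, keepdims=True) - dW)
    W = np.clip(W + dW, 0.0, 1.0)                # keep weights bounded
```

The column sum in the unspecific term is the coupling between the synaptic structures of different postsynaptic cells through which the synchronization described in the abstract can arise.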
Self-Organized Criticality in Developing Neuronal Networks
Recently, evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales, which is beneficial for information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks reach and stabilize criticality. Here we monitor the development, between 13 and 95 days in vitro (DIV), of cortical cell cultures (n = 20) and find four different phases related to their morphological maturation: an initial low-activity state (≈19 DIV) is followed by a supercritical (≈20 DIV) and then a subcritical phase (≈36 DIV), until the network finally reaches stable criticality (≈58 DIV). Using network modeling and mathematical analysis, we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, synaptic development in the model is driven by the neurons' tendency to adjust their connectivity so as to reach, on average, firing-rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.
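A standard way to place recorded activity on the subcritical-critical-supercritical axis is to estimate a branching parameter from binned population activity. The sketch below shows one common diagnostic of this kind, not the paper's actual analysis pipeline; the estimator, the toy surrogate data, and the tolerance tol are illustrative.

```python
import numpy as np

def branching_parameter(counts):
    """Mean ratio of activity in bin t+1 to activity in bin t (active bins only)."""
    counts = np.asarray(counts, dtype=float)
    active = counts[:-1] > 0
    return np.mean(counts[1:][active] / counts[:-1][active])

def classify(counts, tol=0.1):
    s = branching_parameter(counts)
    if s < 1 - tol:
        return "subcritical"      # activity tends to die out
    if s > 1 + tol:
        return "supercritical"    # runaway, epileptiform-like growth
    return "critical"             # roughly balanced propagation

# toy surrogate: a critical branching process, reseeded when activity dies
rng = np.random.default_rng(1)
a, counts = 10, []
for _ in range(2000):
    counts.append(a)
    a = rng.poisson(a) if a > 0 else rng.poisson(2.0)
print(classify(counts))           # expected: "critical"
```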
An estimation-theoretic framework for the presentation of multiple stimuli
A framework is introduced for assessing the encoding accuracy and the discrimination ability of a population of neurons upon simultaneous presentation of multiple stimuli. Minimal squared estimation errors are obtained from a Fisher information analysis in an abstract compound space comprising the features of all stimuli. Even for the simplest case of linear superposition of responses and Gaussian tuning, the symmetries in the compound space are very different from those in the case of a single stimulus. The analysis allows for a quantitative description of attentional effects and can be extended to include neural nonlinearities such as nonclassical receptive fields.
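For the simplest case named above (linear superposition of responses, Gaussian tuning), the compound-space Fisher matrix can be written down directly. A minimal sketch, assuming Poisson spiking and illustrative values for the tuning centers, the width sigma, the observation time T, and the two stimulus values:

```python
import numpy as np

centers = np.linspace(-5.0, 5.0, 200)    # preferred stimulus values
sigma, T = 1.0, 1.0                      # tuning width, observation time

def tuning(x):
    return np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

def d_tuning(x):
    return tuning(x) * (centers - x) / sigma ** 2

x1, x2 = -0.5, 0.5                        # two simultaneously presented stimuli
f = tuning(x1) + tuning(x2) + 1e-9        # linear superposition of responses
df = np.stack([d_tuning(x1), d_tuning(x2)])    # derivatives w.r.t. (x1, x2)
J = T * df @ (df / f).T                   # 2x2 Fisher matrix, Poisson noise
print(np.sqrt(np.diag(np.linalg.inv(J)))) # Cramer-Rao bounds for x1 and x2
```

The off-diagonal entries of J capture the interference between the two estimates; they vanish as the stimuli move far apart and the problem decouples into two single-stimulus problems.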
Multi-Dimensional Encoding Strategy of Spiking Neurons
Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of (i) narrow tuning in the dimension to be encoded, to increase the single-neuron Fisher information, and (ii) broad tuning in all other dimensions, to increase the number of active neurons. Extremely narrow tuning without sufficient receptive-field overlap, however, severely worsens the coding. This implies the existence of an optimal tuning width for the feature to be encoded. Empirically, only a subset of all stimulus features will normally be accessible. In this case, relative encoding errors can be calculated, which yield a criterion for the function of a neural population based on the measured tuning curves.
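The trade-off behind this optimal width is easy to reproduce numerically. A sketch under assumed simplifications: a two-dimensional grid population with Gaussian tuning and Poisson noise, a stimulus at the origin, and illustrative values for the grid spacing, the peak rate fmax, and the second tuning width sigma2.

```python
import numpy as np

def fisher_dim1(sigma1, sigma2=2.0, spacing=0.5, fmax=10.0):
    """Population Fisher information for feature 1 at the stimulus (0, 0)."""
    c1, c2 = np.meshgrid(np.arange(-10, 10, spacing),
                         np.arange(-10, 10, spacing))
    f = fmax * np.exp(-c1**2 / (2 * sigma1**2) - c2**2 / (2 * sigma2**2))
    df1 = f * c1 / sigma1**2              # derivative w.r.t. feature 1
    return np.sum(df1**2 / (f + 1e-12))   # Poisson-noise Fisher information

widths = np.linspace(0.1, 3.0, 30)
J = [fisher_dim1(s) for s in widths]
print(f"optimum near sigma1 = {widths[int(np.argmax(J))]:.2f}")
```

For widths large relative to the grid spacing, the information grows as sigma1 shrinks; once sigma1 drops below the receptive-field spacing, too few neurons respond and the information collapses, which produces the interior optimum.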
What does a neuron talk about?
We study the coding accuracy of a population of stochastically spiking neurons that respond to different features of a stimulus. Using Fisher information as a measure of the encoding error, it can be shown that narrow tuning functions in one of the encoded dimensions increase the coding accuracy for this dimension as long as the active subpopulation is large enough. This can be achieved by neurons that are broadly tuned in the other dimensions. If one or more stimulus features encoded by the neural population are unknown, the relative widths of the tuning curves in the remaining dimensions are a measure of the corresponding relative accuracies. This allows a quantitative description of the kind of information conveyed by the neural population.
Cortical population dynamics and psychophysics
The Wilson-and-Cowan model class: for a basic introduction to differential equations in the context of neural systems, and to the class of models described here, we refer the reader to the textbook by Wilson (1999).
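For readers who want to experiment with this model class, here is a minimal sketch of the two-population Wilson-Cowan rate equations integrated with Euler steps; the coupling weights, sigmoid gains and thresholds, external drive P, and time constant are illustrative textbook-style values, not parameters taken from this paper.

```python
import numpy as np

def S(x, a, theta):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))   # sigmoidal response function

wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0   # E/I coupling weights
aE, thE, aI, thI = 1.3, 4.0, 2.0, 3.7        # gains and thresholds
tau, P, dt = 8.0, 1.25, 0.1                  # time constant, drive, step size

E, I = 0.1, 0.05                             # initial population activities
for _ in range(5000):
    dE = (-E + S(wEE * E - wEI * I + P, aE, thE)) / tau
    dI = (-I + S(wIE * E - wII * I, aI, thI)) / tau
    E, I = E + dt * dE, I + dt * dI

print(f"activity after transient: E = {E:.3f}, I = {I:.3f}")
```

Depending on the drive P, this pair of equations settles to a fixed point or sustains oscillations, which is the dynamical repertoire that makes the model class useful for linking population dynamics to psychophysics.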
Spatial interactions determine temporal feature integration as revealed by unmasking
Feature integration is one of the most fundamental problems in neuroscience. In a recent contribution, we showed that a trailing grating can diminish the masking effects one vernier exerts on another, preceding vernier. Here, we show that this temporal unmasking depends on neural spatial interactions related to the trailing grating. Hence, our paradigm allows us to study the spatio-temporal interactions underlying feature integration.