The effect of neural adaptation on population coding accuracy
Most neurons in the primary visual cortex initially respond vigorously when a
preferred stimulus is presented, but adapt as stimulation continues. The
functional consequences of adaptation are unclear. Typically, a reduction in
firing rate would reduce single-neuron accuracy, as fewer spikes are available
for decoding, but it has been suggested that on the population level,
adaptation increases coding accuracy. This question requires careful analysis
as adaptation not only changes the firing rates of neurons, but also the neural
variability and correlations between neurons, which affect coding accuracy as
well. We calculate the coding accuracy using a computational model that
implements two forms of adaptation: spike frequency adaptation and synaptic
adaptation in the form of short-term synaptic plasticity. We find that the net
effect of adaptation is subtle and heterogeneous. Depending on adaptation
mechanism and test stimulus, adaptation can either increase or decrease coding
accuracy. We discuss the neurophysiological and psychophysical implications of
the findings and relate them to published experimental data.
Comment: 35 pages, 8 figures
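As a rough illustration of the spike-frequency adaptation mechanism discussed above (a minimal sketch with hypothetical parameters, not the model used in the paper), a leaky integrate-and-fire neuron with an adaptation current fires vigorously at stimulus onset and settles to a lower sustained rate, leaving fewer spikes for a downstream decoder:

```python
import numpy as np

# Minimal sketch of a leaky integrate-and-fire neuron with a spike-frequency
# adaptation current; all parameters are illustrative, not taken from the paper.
def simulate_adapting_lif(i_ext=1.5, t_max=1.0, dt=1e-4,
                          tau_m=0.02, tau_a=0.2, delta_a=0.3,
                          v_thresh=1.0, v_reset=0.0):
    v, a = 0.0, 0.0                      # membrane potential, adaptation variable
    spike_times = []
    for k in range(int(t_max / dt)):
        v += (-v - a + i_ext) / tau_m * dt
        a += -a / tau_a * dt
        if v >= v_thresh:                # spike: reset and increment adaptation
            v = v_reset
            a += delta_a
            spike_times.append(k * dt)
    return np.array(spike_times)

spikes = simulate_adapting_lif()
onset_rate = np.sum(spikes < 0.2) / 0.2        # rate in the first 200 ms
adapted_rate = np.sum(spikes >= 0.8) / 0.2     # rate in the last 200 ms
print(f"onset rate: {onset_rate:.1f} Hz, adapted rate: {adapted_rate:.1f} Hz")
```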
Efficient Computation in Adaptive Artificial Spiking Neural Networks
Artificial Neural Networks (ANNs) are bio-inspired models of neural
computation that have proven highly effective. Still, ANNs lack a natural
notion of time, and neural units in ANNs exchange analog values in a
frame-based manner, a computationally and energetically inefficient form of
communication. This contrasts sharply with biological neurons that communicate
sparingly and efficiently using binary spikes. While artificial Spiking Neural
Networks (SNNs) can be constructed by replacing the units of an ANN with
spiking neurons, their current performance falls far short of deep ANNs on hard
benchmarks, and these SNNs use much higher firing rates than their
biological counterparts, limiting their efficiency. Here we show how spiking
neurons that employ an efficient form of neural coding can be used to construct
SNNs that match high-performance ANNs and exceed state-of-the-art in SNNs on
important benchmarks, while requiring much lower average firing rates. For
this, we use spike-time coding based on the firing-rate-limiting adaptation
phenomenon observed in biological spiking neurons. This phenomenon can be
captured in adapting spiking neuron models, for which we derive the effective
transfer function. Neural units in ANNs trained with this transfer function can
be substituted directly with adaptive spiking neurons, and the resulting
Adaptive SNNs (AdSNNs) can carry out inference in deep neural networks using up
to an order of magnitude fewer spikes compared to previous SNNs. Adaptive
spike-time coding additionally allows for the dynamic control of neural coding
precision: we show how a simple model of arousal in AdSNNs further halves the
average required firing rate, and this notion naturally extends to other forms
of attention. AdSNNs thus hold promise as a novel and efficient model for
neural computation that naturally fits to temporally continuous and
asynchronous applications.
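The core mechanism, an adapting threshold that lets a neuron signal the same analog activation with fewer spikes, can be sketched as follows. This is an illustrative toy rather than the AdSNN transfer function derived in the paper, and the constants (theta0, tau_theta, theta_add) are hypothetical:

```python
import numpy as np

# Toy neuron whose firing threshold jumps after every spike and decays back to
# baseline; adaptation trades spike count against coding precision.
def encode(signal, dt=1e-3, theta0=0.02, tau_theta=0.1, theta_add=0.04):
    theta, integ = theta0, 0.0
    spikes = np.zeros_like(signal)
    for k, s in enumerate(signal):
        integ += s * dt                              # integrate the analog drive
        theta += (theta0 - theta) / tau_theta * dt   # threshold decays to baseline
        if integ >= theta:                           # emit a spike, raise threshold
            spikes[k] = 1.0
            integ -= theta
            theta += theta_add
    return spikes

x = np.full(1000, 0.8)                               # 1 s of constant activation
print("spikes, fixed threshold   :", int(encode(x, theta_add=0.0).sum()))
print("spikes, adaptive threshold:", int(encode(x).sum()))
```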
Perception of categories: from coding efficiency to reaction times
Reaction times in perceptual tasks are the subject of many experimental and
theoretical studies. With the neural decision making process as main focus,
most of these works concern discrete (typically binary) choice tasks, implying
the identification of the stimulus as an exemplar of a category. Here we
address issues specific to the perception of categories (e.g. vowels, familiar
faces, ...), making a clear distinction between identifying a category (an
element of a discrete set) and estimating a continuous parameter (such as a
direction). We exhibit a link between optimal Bayesian decoding and coding
efficiency, the latter being measured by the mutual information between the
discrete category set and the neural activity. We characterize the properties
of the best estimator of the likelihood of the category, when this estimator
takes its inputs from a large population of stimulus-specific coding cells.
Adopting the diffusion-to-bound approach to model the decision process, we
relate analytically the bias and variance of the diffusion process underlying
decision making to macroscopic quantities that are behaviorally
measurable. A major consequence is the existence of a quantitative link between
reaction times and discrimination accuracy. The resulting analytical expression
of mean reaction times during an identification task accounts for empirical
facts, both qualitatively (e.g. more time is needed to identify a category from
a stimulus at the boundary compared to a stimulus lying within a category), and
quantitatively (when applied to published experimental data on phoneme
identification tasks).
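The diffusion-to-bound picture invoked above can be made concrete with a small simulation (a generic drift-diffusion toy with made-up parameters, not the paper's fitted model): a stimulus near the category boundary corresponds to a weak drift, and the mean first-passage time to a decision bound grows accordingly:

```python
import numpy as np

# Toy drift-diffusion ("diffusion-to-bound") simulation: evidence accumulates
# with drift mu and noise sigma until it hits +bound or -bound. Stimuli near
# a category boundary correspond to small |mu| and hence longer reaction times.
rng = np.random.default_rng(0)

def mean_reaction_time(mu, sigma=1.0, bound=1.0, dt=1e-3, n_trials=1000):
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
    return np.mean(rts)

print("mean RT, stimulus well inside a category (mu=2.0):",
      round(mean_reaction_time(2.0), 3), "s")
print("mean RT, stimulus near the boundary      (mu=0.2):",
      round(mean_reaction_time(0.2), 3), "s")
```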
Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model
The occurrence of sleep has passed through the evolutionary sieve and is
widespread among animal species. Sleep is known to be beneficial to cognitive and
mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the
importance of the phenomenon, a complete understanding of its functions and
underlying mechanisms is still lacking. In this paper, we show interesting
effects of deep-sleep-like slow oscillation activity on a simplified
thalamo-cortical model which is trained to encode, retrieve and classify images
of handwritten digits. During slow oscillations,
spike-timing-dependent plasticity (STDP) produces a differential homeostatic
process. It is characterized by both a specific unsupervised enhancement of
connections among groups of neurons associated with instances of the same class
(digit) and a simultaneous down-regulation of stronger synapses created by the
training. This hierarchical organization of post-sleep internal representations
favours higher performance in retrieval and classification tasks. The
mechanism is based on the interaction between top-down cortico-thalamic
predictions and bottom-up thalamo-cortical projections during deep-sleep-like
slow oscillations. Indeed, when learned patterns are replayed during sleep,
cortico-thalamo-cortical connections favour the activation of other neurons
coding for similar thalamic inputs, promoting their association. Such a
mechanism hints at possible applications to artificial learning systems.
Comment: 11 pages, 5 figures; v5 is the final version published in Scientific Reports
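For reference, the plasticity rule invoked above can be sketched in its standard pair-based form (a generic textbook rule with illustrative constants, not the parameters of the thalamo-cortical model): a presynaptic spike shortly before a postsynaptic spike strengthens the synapse, and the reverse order weakens it.

```python
import numpy as np

# Minimal pair-based STDP sketch: weight change for a single pre/post spike
# pair separated by dt = t_post - t_pre (in seconds). Constants are illustrative.
def stdp_delta_w(dt_post_minus_pre, a_plus=0.01, a_minus=0.012,
                 tau_plus=0.02, tau_minus=0.02):
    if dt_post_minus_pre > 0:      # pre before post -> potentiation
        return a_plus * np.exp(-dt_post_minus_pre / tau_plus)
    else:                          # post before pre -> depression
        return -a_minus * np.exp(dt_post_minus_pre / tau_minus)

for dt in (0.005, 0.040, -0.005, -0.040):
    print(f"dt = {1e3 * dt:+.0f} ms -> dw = {stdp_delta_w(dt):+.5f}")
```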
Optimal Population Coding, Revisited
Cortical circuits perform the computations underlying rapid perceptual decisions within a few dozen milliseconds with each neuron emitting only a few spikes. Under these conditions, the theoretical analysis of neural population codes is challenging, as the most commonly used theoretical tool – Fisher information – can lead to erroneous conclusions about the optimality of different coding schemes. Here we revisit the effect of tuning function width and correlation structure on neural population codes based on ideal observer analysis in both a discrimination and a reconstruction task. We show that the optimal tuning function width and the optimal correlation structure in both paradigms strongly depend on the available decoding time in a very similar way. In contrast, population codes optimized for Fisher information do not depend on decoding time and are severely suboptimal when only a few spikes are available. In addition, we use the neurometric functions of the ideal observer in the classification task to investigate the differential coding properties of these Fisher-optimal codes for fine and coarse discrimination. We find that the discrimination error for these codes does not decrease to zero with increasing population size, even in simple coarse discrimination tasks. Our results suggest that quite different population codes may be optimal for rapid decoding in cortical computations than those inferred from the optimization of Fisher information.
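To make the Fisher-information argument tangible, the sketch below computes I_F for a population of independent Poisson neurons with Gaussian tuning curves (a standard textbook setup with hypothetical parameters, not the paper's ideal-observer analysis). Because I_F scales only linearly with the decoding window T, comparisons based on it are insensitive to the few-spike regime the abstract is concerned with:

```python
import numpy as np

# Fisher information of independent Poisson neurons with Gaussian tuning curves:
# I_F(s) = T * sum_i f_i'(s)^2 / f_i(s), with rates f_i in Hz and window T in s.
def fisher_information(s, centers, width, r_max=30.0, T=0.05):
    f = r_max * np.exp(-(s - centers) ** 2 / (2 * width ** 2))   # tuning curves
    df = f * (centers - s) / width ** 2                          # df/ds
    return T * np.sum(df ** 2 / np.maximum(f, 1e-12))

centers = np.linspace(-np.pi, np.pi, 64)
for width in (0.2, 0.5, 1.0):
    print(f"tuning width {width:.1f}: I_F = "
          f"{fisher_information(0.1, centers, width):.1f}")
```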
Logarithmic distributions prove that intrinsic learning is Hebbian
In this paper, we present data for the lognormal distributions of spike
rates, synaptic weights and intrinsic excitability (gain) for neurons in
various brain areas, such as auditory or visual cortex, hippocampus,
cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of
heavy-tailed, specifically lognormal, distributions for rates, weights and
gains in all brain areas examined. The difference between strongly recurrent
and feed-forward connectivity (cortex vs. striatum and cerebellum),
neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of
activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns
out to be irrelevant for this feature. A logarithmic-scale distribution of
weights and gains appears to be a general, functional property in all cases
analyzed. We then created a generic neural model to investigate adaptive
learning rules that create and maintain lognormal distributions. We
conclusively demonstrate that not only weights, but also intrinsic gains, need
to have strong Hebbian learning in order to produce and maintain the
experimentally attested distributions. This provides a solution to the
long-standing question about the type of plasticity exhibited by intrinsic
excitability.
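A minimal numerical illustration of the core claim, that multiplicative, Hebbian-style updates produce lognormal distributions, is sketched below; it is a generic toy rather than the paper's neural model, and the update magnitudes are arbitrary:

```python
import numpy as np

# Repeated multiplicative, activity-dependent updates turn an initially
# identical population of gains into a heavy-tailed distribution that is
# approximately normal on a log scale, i.e. lognormal.
rng = np.random.default_rng(1)

gains = np.ones(10_000)
for _ in range(500):
    gains *= np.exp(rng.normal(0.0, 0.02, size=gains.shape))

def skew(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print(f"skewness of gains      : {skew(gains):.2f}   (heavy right tail)")
print(f"skewness of log(gains) : {skew(np.log(gains)):.2f}   (symmetric)")
```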