5,952 research outputs found
Neural population coding: combining insights from microscopic and mass signals
Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states affect local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.
Multiplicative Auditory Spatial Receptive Fields Created by a Hierarchy of Population Codes
A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency dependence of ITD and ILD cues that occurs under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system.
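The multiplication-within-channels, linear-threshold-across-channels scheme described above can be sketched numerically. Everything here is an illustrative assumption (Gaussian ITD/ILD tuning curves, the channel parameters, and the threshold value), not the paper's fitted model:

```python
import numpy as np

def itd_tuning(itd_us, best_itd_us, width_us=100.0):
    """Gaussian ITD tuning within one frequency channel (assumed shape)."""
    return np.exp(-((itd_us - best_itd_us) ** 2) / (2 * width_us ** 2))

def ild_tuning(ild_db, best_ild_db, width_db=10.0):
    """Gaussian ILD tuning within one frequency channel (assumed shape)."""
    return np.exp(-((ild_db - best_ild_db) ** 2) / (2 * width_db ** 2))

def icx_response(itd_us, ild_db, best_itds, best_ilds, threshold=0.5):
    """Multiply ITD- and ILD-dependent signals within each frequency
    channel, then integrate across channels with a linear-threshold
    mechanism, as proposed in the abstract."""
    per_channel = np.array([
        itd_tuning(itd_us, bi) * ild_tuning(ild_db, bl)  # within-channel product
        for bi, bl in zip(best_itds, best_ilds)
    ])
    drive = per_channel.sum()           # linear summation across channels
    return max(drive - threshold, 0.0)  # output threshold nonlinearity
```

A neuron built this way responds strongly only when both cues match its preferences across its frequency channels, while the threshold suppresses weak off-preference drive.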
Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus, nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
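The core rule can be illustrated with a minimal single-neuron sketch: weights updated by Δw ∝ f(w·x)·x followed by multiplicative normalization. The cubic nonlinearity and the normalization step are common textbook choices, not necessarily the paper's; with a super-Gaussian input direction, such a rule tends to align the weight vector with it:

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_hebbian(X, n_steps=5000, eta=0.01, f=lambda y: y ** 3):
    """Single neuron trained with a nonlinear Hebbian rule:
    w += eta * f(w @ x) * x, then renormalize ||w|| = 1 for stability
    (a common stabilization choice, assumed here)."""
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_steps):
        x = X[rng.integers(len(X))]   # one random input pattern
        y = w @ x                     # neuronal output (pre-nonlinearity)
        w += eta * f(y) * x           # nonlinear Hebbian update
        w /= np.linalg.norm(w)        # multiplicative normalization
    return w
```

With a cubic f, the rule performs stochastic ascent on the fourth moment of the output, so on whitened input it seeks the most heavy-tailed (sparse) projection, which is the link to sparse coding and ICA mentioned in the abstract.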
Logarithmic distributions prove that intrinsic learning is Hebbian
In this paper, we present data on the lognormal distributions of spike rates, synaptic weights, and intrinsic excitability (gain) for neurons in various brain areas, such as auditory and visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights, and gains in all brain areas examined. Differences in strongly recurrent versus feed-forward connectivity (cortex vs. striatum and cerebellum), in neurotransmitter (GABA in striatum vs. glutamate in cortex), and in the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant for this feature. A logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights but also intrinsic gains need to undergo strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
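Why multiplicative (Hebbian-style) updates naturally produce lognormal distributions can be shown with a toy random walk, an illustrative sketch rather than the paper's generic neural model: each weight is repeatedly multiplied by a random factor, so its logarithm performs an additive random walk and becomes Gaussian, i.e., the weight itself becomes lognormal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (assumed parameters): purely multiplicative updates
# w <- w * exp(eta * xi) with Gaussian noise xi. By the central limit
# theorem, log(w) sums many small increments and approaches a Gaussian.
n_synapses, n_steps, eta = 10_000, 200, 0.05
w = np.ones(n_synapses)
for _ in range(n_steps):
    w *= np.exp(eta * rng.normal(size=n_synapses))  # multiplicative step

log_w = np.log(w)

def skewness(v):
    """Sample skewness; ~0 for Gaussian, large and positive for heavy tails."""
    return float(np.mean((v - v.mean()) ** 3) / v.std() ** 3)
```

The log-weights end up approximately Gaussian (skewness near zero), while the weights themselves are strongly right-skewed and heavy-tailed, matching the lognormal signature reported in the abstract.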
Neuromorphic Learning towards Nano Second Precision
Temporal coding is one approach to representing information in spiking neural networks. An example of its application is the localization of sounds by barn owls, which requires especially precise temporal coding. Depending on the azimuthal angle, the arrival times of sound signals are shifted between the two ears. In order to determine these interaural time differences, the phase difference of the signals is measured. We implemented this biologically inspired network on a neuromorphic hardware system and demonstrate spike-timing-dependent plasticity on an analog, highly accelerated hardware substrate. Our neuromorphic implementation enables the resolution of time differences of less than 50 ns. On-chip Hebbian learning mechanisms select inputs from a pool of neurons which code for the same sound frequency. Hence, noise caused by different synaptic delays across these inputs is reduced. Furthermore, learning compensates for variations in neuronal and synaptic parameters caused by device mismatch intrinsic to the neuromorphic substrate. Comment: 7 pages, 7 figures, presented at IJCNN 2013 in Dallas, TX, USA. Corrected version with updated STDP curves.
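The spike-timing-dependent plasticity behind the on-chip input selection is usually modeled as a two-sided exponential window over the pre/post spike-time difference. The sketch below uses generic textbook parameters, not the chip's measured STDP curves:

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window (generic textbook form, assumed here).
    dt_ms = t_post - t_pre in milliseconds."""
    if dt_ms > 0:
        # Pre fires before post: causal pairing, potentiation.
        return a_plus * math.exp(-dt_ms / tau_plus)
    elif dt_ms < 0:
        # Post fires before pre: anti-causal pairing, depression.
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0
```

Inputs whose spikes consistently precede the postsynaptic spike are strengthened while the rest are depressed, which is how such a rule can pick out inputs with matched delays from a pool coding for the same frequency.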
Input-driven components of spike-frequency adaptation can be unmasked in vivo
Spike-frequency adaptation affects the response characteristics of many sensory neurons, and different biophysical processes contribute to this phenomenon. Many cellular mechanisms underlying adaptation are triggered by the spike output of the neuron in a feedback manner (e.g., specific potassium currents that are primarily activated by the spiking activity). In contrast, other components of adaptation may be caused, in a feedforward way, by the sensory or synaptic input that the neuron receives. Examples include viscoelasticity of mechanoreceptors, transducer adaptation in hair cells, and short-term synaptic depression. For a functional characterization of spike-frequency adaptation, it is essential to understand the dependence of adaptation on the input and output of the neuron. Here, we demonstrate how an input-driven component of adaptation can be uncovered in vivo from recordings of spike trains in an insect auditory receptor neuron, even if the total adaptation is dominated by output-driven components. Our method is based on the identification of different inputs that yield the same output and sudden switches between these inputs. In particular, we determined, for different sound frequencies, those intensities that are required to yield a predefined steady-state firing rate of the neuron. We then found that switching between these sound frequencies causes transient deviations of the firing rate. These firing-rate deflections are evidence of input-driven adaptation and can be used to quantify how this adaptation component affects the neural activity. Based on previous knowledge of the processes in auditory transduction, we conclude that for the investigated auditory receptor neurons, this adaptation phenomenon is of mechanical origin.
An Efficient Threshold-Driven Aggregate-Label Learning Algorithm for Multimodal Information Processing
The aggregate-label learning paradigm tackles the long-standing temporal credit assignment (TCA) problem in neuroscience and machine learning, enabling spiking neural networks to learn multimodal sensory cues with delayed feedback signals. However, existing aggregate-label learning algorithms only work for single spiking neurons and have low learning efficiency, which limits their real-world applicability. To address these limitations, we first propose an efficient threshold-driven plasticity algorithm for spiking neurons, namely ETDP. It enables spiking neurons to generate the desired number of spikes that match the magnitude of delayed feedback signals and to learn useful multimodal sensory cues embedded within spontaneous spiking activities. Furthermore, we extend the ETDP algorithm to support multi-layer spiking neural networks (SNNs), which significantly improves the applicability of aggregate-label learning algorithms. We also validate the multi-layer ETDP learning algorithm in a multimodal computation framework for audio-visual pattern recognition. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and model capacity over existing aggregate-label learning algorithms. The approach therefore provides many opportunities for solving real-world multimodal pattern recognition tasks with spiking neural networks.
Sparse Codes for Speech Predict Spectrotemporal Receptive Fields in the Inferior Colliculus
We have developed a sparse mathematical representation of speech that minimizes the number of active model neurons needed to represent typical speech sounds. The model learns several well-known acoustic features of speech, such as harmonic stacks, formants, onsets, and terminations, but we also find more exotic structures in the spectrogram representation of sound, such as localized checkerboard patterns and frequency-modulated excitatory subregions flanked by suppressive sidebands. Moreover, several of these novel features resemble neuronal receptive fields reported in the Inferior Colliculus (IC), as well as auditory thalamus and cortex, and our model neurons exhibit the same tradeoff in spectrotemporal resolution as has been observed in IC. To our knowledge, this is the first demonstration that receptive fields of neurons in the ascending mammalian auditory pathway beyond the auditory nerve can be predicted based on coding principles and the statistical properties of recorded sounds. Comment: For Supporting Information, see PLoS website: http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.100259
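Sparse representations of this kind are typically obtained by minimizing reconstruction error plus an L1 penalty on the coefficients: min_a 0.5·||x − D·a||² + λ·||a||₁. Below is a minimal sketch of coefficient inference using ISTA, one standard solver for this objective; the authors' own optimizer and dictionary-learning procedure may differ:

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer sparse coefficients a for signal x over dictionary D by
    minimizing 0.5*||x - D @ a||^2 + lam*||a||_1 with ISTA
    (iterative shrinkage-thresholding; a generic choice, assumed here)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant: squared spectral norm
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)   # gradient of the quadratic data term
        z = a - grad / L           # gradient step
        # Soft-thresholding implements the L1 penalty, zeroing small
        # coefficients and leaving only a few active "model neurons".
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a
```

The L1 term is what keeps the number of active units small; receptive-field predictions then come from learning D itself on speech spectrograms, which this sketch does not cover.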