Intrinsic gain modulation and adaptive neural coding
In many cases, the computation of a neural system can be reduced to a
receptive field, or a set of linear filters, and a thresholding function, or
gain curve, which determines the firing probability; this is known as a
linear/nonlinear model. In some forms of sensory adaptation, these linear
filters and gain curve adjust very rapidly to changes in the variance of a
randomly varying driving input. An apparently similar but previously unrelated
issue is the observation of gain control by background noise in cortical
neurons: the slope of the firing rate vs current (f-I) curve changes with the
variance of background random input. Here, we show a direct correspondence
between these two observations by relating variance-dependent changes in the
gain of f-I curves to characteristics of the changing empirical
linear/nonlinear model obtained by sampling. In the case that the underlying
system is fixed, we derive relationships relating the change of the gain with
respect to both mean and variance with the receptive fields derived from
reverse correlation on a white noise stimulus. Using two conductance-based
model neurons that display distinct gain modulation properties through a simple
change in parameters, we show that coding properties of both these models
quantitatively satisfy the predicted relationships. Our results describe how
both variance-dependent gain modulation and adaptive neural computation result
from intrinsic nonlinearity.
Comment: 24 pages, 4 figures, 1 supporting information
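The linear/nonlinear picture above is easy to make concrete in simulation. The sketch below (hypothetical code, not taken from the paper) builds an LN model neuron driven by white noise and recovers its filter by reverse correlation, i.e. the spike-triggered average; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth LN neuron: a linear filter followed by a sigmoidal gain curve.
T, D = 50_000, 20                      # stimulus length, filter length
true_filter = np.exp(-np.arange(D) / 5.0) * np.sin(np.arange(D) / 2.0)
true_filter /= np.linalg.norm(true_filter)

stimulus = rng.normal(0.0, 1.0, T)     # white-noise driving input
windows = np.lib.stride_tricks.sliding_window_view(stimulus, D)
g = windows @ true_filter              # filtered stimulus at each time step
p_spike = 1.0 / (1.0 + np.exp(-(g - 1.0) / 0.3))   # gain curve
spikes = rng.random(g.shape) < p_spike

# Reverse correlation: for white-noise input and a monotonic gain curve,
# the spike-triggered average recovers the filter up to a scale factor.
sta = windows[spikes].mean(axis=0)
sta /= np.linalg.norm(sta)
similarity = abs(sta @ true_filter)    # close to 1 if recovery succeeded
```

Repeating the same sampling with a different stimulus variance changes the empirical filter and gain curve even though the underlying model is fixed, which is the effect the abstract relates to f-I gain modulation.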
Adaptive Filtering Enhances Information Transmission in Visual Cortex
Sensory neuroscience seeks to understand how the brain encodes natural
environments. However, neural coding has largely been studied using simplified
stimuli. In order to assess whether the brain's coding strategy depends on the
stimulus ensemble, we apply a new information-theoretic method that allows
unbiased calculation of neural filters (receptive fields) from responses to
natural scenes or other complex signals with strong multipoint correlations. In
the cat primary visual cortex we compare responses to natural inputs with those
to noise inputs matched for luminance and contrast. We find that neural filters
adaptively change with the input ensemble so as to increase the information
carried by the neural response about the filtered stimulus. Adaptation affects
the spatial frequency composition of the filter, enhancing sensitivity to
under-represented frequencies in agreement with optimal encoding arguments.
Adaptation occurs over 40 s to many minutes, longer than most previously
reported forms of adaptation.
Comment: 20 pages, 11 figures, includes supplementary information
Optomotor Swimming in Larval Zebrafish Is Driven by Global Whole-Field Visual Motion and Local Light-Dark Transitions
Stabilizing gaze and position within an environment constitutes an important task for the nervous system of many animals. The optomotor response (OMR) is a reflexive behavior, present across many species, in which animals move in the direction of perceived whole-field visual motion, thereby stabilizing themselves with respect to the visual environment. Although the OMR has been extensively used to probe visuomotor neuronal circuitry, the exact visual cues that elicit the behavior remain unidentified. In this study, we use larval zebrafish to identify spatio-temporal visual features that robustly elicit forward OMR swimming. These cues consist of a local, forward-moving OFF edge together with ON/OFF-symmetric, similarly directed, global motion. Imaging experiments reveal neural units specifically activated by the forward-moving light-dark transition. We conclude that the OMR is driven not just by whole-field motion but by the interplay between global and local visual stimuli, where the latter exhibits a strong light-dark asymmetry.
Fast, scalable, Bayesian spike identification for multi-electrode arrays
We present an algorithm to identify individual neural spikes observed on
high-density multi-electrode arrays (MEAs). Our method can distinguish large
numbers of distinct neural units, even when spikes overlap, and accounts for
intrinsic variability of spikes from each unit. As MEAs grow larger, it is
important to find spike-identification methods that are scalable, that is, the
computational cost of spike fitting should scale well with the number of units
observed. Our algorithm accomplishes this goal, and is fast, because it
exploits the spatial locality of each unit and the basic biophysics of
extracellular signal propagation. Human intervention is minimized and
streamlined via a graphical interface. We illustrate our method on data from a
mammalian retina preparation and document its performance on simulated data
consisting of spikes added to experimentally measured background noise. The
algorithm is highly accurate.
The iso-response method
Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments.
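A minimal sketch of the closed-loop idea, using a simulated two-input neuron (the quadratic non-linearity and all parameters are hypothetical, chosen only for illustration): probe rays through stimulus space and bisect along each ray until the model's response matches a predefined target, tracing out one iso-response curve.

```python
import numpy as np

def response(x, y):
    """Toy two-input neuron: each input is squared before summation
    (an assumed non-linearity for illustration only)."""
    return x ** 2 + 0.5 * y ** 2

def iso_response_point(direction, target, r_max=10.0, tol=1e-6):
    """Closed-loop search along one ray from the origin: bisect the
    stimulus amplitude until the response matches `target` (valid
    because the response grows monotonically along the ray)."""
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if response(mid * direction[0], mid * direction[1]) < target:
            lo = mid
        else:
            hi = mid
    amp = 0.5 * (lo + hi)
    return amp * direction[0], amp * direction[1]

# Trace one iso-response curve by probing many ray directions.
angles = np.linspace(0.0, np.pi / 2, 9)
curve = [iso_response_point((np.cos(a), np.sin(a)), target=1.0)
         for a in angles]
```

In a real experiment the model call is replaced by stimulus delivery plus online estimation of the chosen output measure, which is why rapid closed-loop data analysis is essential.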
Second Order Dimensionality Reduction Using Minimum and Maximum Mutual Information Models
Conventional methods used to characterize multidimensional neural feature selectivity, such as spike-triggered covariance (STC) or maximally informative dimensions (MID), are limited to Gaussian stimuli or are only able to identify a small number of features due to the curse of dimensionality. To overcome these issues, we propose two new dimensionality reduction methods that use minimum and maximum information models. These methods are information theoretic extensions of STC that can be used with non-Gaussian stimulus distributions to find relevant linear subspaces of arbitrary dimensionality. We compare these new methods to the conventional methods in two ways: with biologically-inspired simulated neurons responding to natural images and with recordings from macaque retinal and thalamic cells responding to naturalistic time-varying stimuli. With non-Gaussian stimuli, the minimum and maximum information methods significantly outperform STC in all cases, whereas MID performs best in the regime of low-dimensional feature spaces.
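The STC baseline mentioned above can be sketched in a few lines (a toy simulation, not the paper's method): for Gaussian stimuli, a feature that drives firing through a squaring non-linearity leaves the spike-triggered average near zero but changes the variance of the spike-triggered ensemble along the feature axis.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 100_000, 10
stimuli = rng.normal(size=(T, D))      # Gaussian white-noise stimuli

# Model neuron with a purely quadratic feature: firing depends on the
# squared projection onto w, so the spike-triggered average is ~zero.
w = np.zeros(D)
w[3] = 1.0
p_spike = np.clip(0.1 * (stimuli @ w) ** 2, 0.0, 1.0)
spikes = rng.random(T) < p_spike

# STC: eigenvectors of the spike-triggered covariance whose eigenvalues
# differ from the raw stimulus variance (1 here) span the feature subspace.
sta = stimuli[spikes].mean(axis=0)
stc = np.cov(stimuli[spikes].T)
eigvals, eigvecs = np.linalg.eigh(stc)
feature = eigvecs[:, np.argmax(np.abs(eigvals - 1.0))]
```

The eigenvalue comparison against the prior covariance is exactly what breaks down for non-Gaussian stimuli, which is the gap the minimum and maximum information models are designed to close.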