    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the change in gain with respect to both mean and variance to the receptive fields obtained from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. Comment: 24 pages, 4 figures, 1 supporting information file
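
As an illustration of the linear/nonlinear description and the reverse-correlation step mentioned above, the sketch below simulates an LN neuron driven by white noise and recovers its filter with a spike-triggered average; all names and parameter values are illustrative, not taken from the paper.

```python
# Minimal linear-nonlinear (LN) model sketch with reverse correlation.
# Names and parameter values are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

T, D = 50_000, 40                         # time bins, filter length
stimulus = rng.normal(0.0, 1.0, (T, D))   # white-noise stimulus snippets

true_filter = np.exp(-np.arange(D) / 8.0) * np.sin(np.arange(D) / 3.0)
true_filter /= np.linalg.norm(true_filter)

drive = stimulus @ true_filter                          # linear stage
rate = 1.0 / (1.0 + np.exp(-(drive - 1.0) / 0.5))       # sigmoidal gain curve
spikes = rng.random(T) < rate                           # Bernoulli spiking

# Reverse correlation: the spike-triggered average recovers the filter
# (up to scale) when the stimulus is Gaussian white noise.
sta = stimulus[spikes].mean(axis=0)
sta /= np.linalg.norm(sta)
print("filter recovery (cosine similarity):", float(sta @ true_filter))
```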

    A simple model for low variability in neural spike trains

    Neural noise sets a limit to information transmission in sensory systems. In several areas, the spiking response (to a repeated stimulus) has shown a higher degree of regularity than predicted by a Poisson process. However, a simple model to explain this low variability is still lacking. Here we introduce a new model, with a correction to Poisson statistics, which can accurately predict the regularity of neural spike trains in response to a repeated stimulus. The model has only two parameters, but can reproduce the observed variability in retinal recordings in various conditions. We show analytically why this approximation can work. In a model of the spike-emitting process where a refractory period is assumed, we derive that our simple correction approximates the spike-train statistics well over a broad range of firing rates. Our model can easily be plugged into stimulus-processing models, like the linear-nonlinear model or its generalizations, to replace the commonly assumed Poisson spike-train hypothesis. It estimates the amount of information transmitted much more accurately than Poisson models in retinal recordings. Thanks to its simplicity, this model has the potential to explain low variability in other areas.
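
To illustrate why a refractory correction pushes variability below the Poisson limit, the sketch below simulates a renewal process with an absolute refractory period and reports the Fano factor of the spike counts; the rate, refractory period, and trial count are illustrative assumptions, not the paper's two-parameter model.

```python
# Sketch: an absolute refractory period lowers spike-count variability
# below the Poisson limit (Fano factor < 1). Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def count_spikes(rate_hz, duration_s, refractory_s, n_trials):
    """Renewal process: exponential waits plus an absolute refractory period."""
    counts = np.zeros(n_trials, dtype=int)
    for i in range(n_trials):
        t, n = 0.0, 0
        while True:
            t += rng.exponential(1.0 / rate_hz) + refractory_s
            if t > duration_s:
                break
            n += 1
        counts[i] = n
    return counts

counts = count_spikes(rate_hz=50.0, duration_s=1.0, refractory_s=0.005, n_trials=2000)
fano = counts.var() / counts.mean()
print("Fano factor:", round(float(fano), 3))   # noticeably below 1.0
```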

    The Neural Mechanisms Underlying Visual Target Search

    The task of finding specific objects and switching between targets is ubiquitous in everyday life. Searching for a particular object requires our brains to activate and maintain a representation of the target (working memory), identify each encountered object (object recognition), and determine whether the currently viewed object matches the sought target (decision making). The comparison of working memory and visual information is thought to happen via feedback of target information from higher-order brain areas to the ventral visual pathway. However, exactly what these areas represent and how they implement this comparison remain unknown. To investigate these questions, we employed a combined approach involving electrophysiology experiments and computational modeling. In particular, we recorded neural responses in inferotemporal (IT) and perirhinal (PRH) cortex as monkeys performed a visual target search task, and we adopted population-based read-outs to measure the amount and format of information contained in these neural populations. In Chapter 2 we report that the total amount of target match information was matched in IT and PRH, but this information was contained in a more explicit (i.e. linearly separable) format in PRH. These results suggest that PRH implements an untangling computation to reformat its inputs from IT. Consistent with this hypothesis, a simple linear-nonlinear model was sufficient to capture the transformation between the two areas. In Chapter 3, we report that the untangling computation in PRH takes time to evolve. While this type of dynamic reformatting is normally attributed to complex recurrent circuits, here we demonstrated that this phenomenon could be accounted for by the same instantaneous linear-nonlinear model presented in Chapter 2. This counterintuitive finding was due to the existence of non-stationarities in the IT neural representation. Finally, in Chapter 4 we describe in full a novel set of methods that we developed and applied in Chapters 2 and 3 to quantify the task-specific signals contained in the heterogeneous neural responses in IT and PRH, and to relate these signals to measures of task performance. Together, this body of work revealed a previously unknown untangling computation in PRH during visual search, and demonstrated that a feed-forward linear-nonlinear model is sufficient to describe this computation.
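
As a rough illustration of the population read-out idea (a linear decoder applied to population responses to quantify linearly separable "target match" information), the sketch below fits a least-squares linear decoder to synthetic data; the population size, trial counts, and noise level are illustrative assumptions, not values from the recordings.

```python
# Sketch of a linear population read-out for "target match" information.
# Synthetic data; sizes and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_trials = 60, 400
labels = rng.integers(0, 2, n_trials)        # 1 = target match, 0 = distractor
signal = rng.normal(0, 1, n_neurons)         # per-neuron match preference
responses = rng.normal(0, 1, (n_trials, n_neurons)) + np.outer(labels - 0.5, signal)

# Least-squares linear decoder with a train/test split.
train, test = slice(0, 300), slice(300, None)
X = np.c_[responses, np.ones(n_trials)]      # add a bias term
w, *_ = np.linalg.lstsq(X[train], labels[train] - 0.5, rcond=None)
accuracy = np.mean((X[test] @ w > 0) == labels[test].astype(bool))
print("linear read-out accuracy:", accuracy)
```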

    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model---a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and in particular outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population. Comment: 11 pages, 7 figures
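
The SDME form can be summarized by its unnormalized log-probability: stimulus-dependent single-cell fields plus pairwise couplings. The sketch below computes this quantity for a toy population; writing the field as a linear filter of the stimulus is an assumption made here for illustration and may differ from the paper's exact parameterization.

```python
# Sketch of the stimulus-dependent maximum entropy (SDME) form:
#   P(sigma | s) ∝ exp( sum_i h_i(s) * sigma_i + sum_{i<j} J_ij * sigma_i * sigma_j )
# Here h_i(s) is taken to be a linear filter of the stimulus (an assumption).
import numpy as np

def sdme_log_weight(sigma, stimulus, filters, J):
    """Unnormalized log-probability of a binary codeword sigma given a stimulus."""
    h = filters @ stimulus              # stimulus-dependent fields, one per cell
    pairwise = 0.5 * sigma @ J @ sigma  # J symmetric with zero diagonal
    return float(h @ sigma + pairwise)

rng = np.random.default_rng(3)
n_cells, stim_dim = 10, 20
filters = rng.normal(0, 0.3, (n_cells, stim_dim))
J = rng.normal(0, 0.1, (n_cells, n_cells))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
sigma = rng.integers(0, 2, n_cells).astype(float)
print(sdme_log_weight(sigma, rng.normal(0, 1, stim_dim), filters, J))
```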

    Retinal adaptation to spatial correlations

    The classical center-surround retinal ganglion cell receptive field is thought to remove the strong spatial correlations in natural scenes, enabling efficient use of limited bandwidth. While early studies with drifting gratings reported robust surrounds (Enroth-Cugell and Robson, 1966), recent measurements with white noise reveal weak surrounds (Chichilnisky and Kalmar, 2002). This might be evidence for dynamical weakening of the retinal surround in response to decreased spatial correlations, which would be predicted by efficient coding theory. Such adaptation is reported in LGN (Lesica et al., 2007), but whether the retina also adapts to correlations is unknown. 

We tested for adaptation by recording simultaneously from ~40 ganglion cells on a multi-electrode array while presenting white and exponentially correlated checkerboards and strips. Measuring from ~200 cells responding to 90 minutes each of white and correlated stimuli, we were able to extract precise spatiotemporal receptive fields (STRFs). We found that a difference-of-Gaussians was not a good fit and that the surround was generally displaced from the center. Thus, to assess surround strength we identified the center and surround regions and computed the total weight on the pixels in each region. The relative surround strength was then defined as the ratio of surround weight to center weight. Surprisingly, we found that the majority of recorded cells have a stronger surround under white noise than under correlated noise (p < .05), contrary to the naive expectation from theory. This conclusion was robust to different methods of extracting STRFs and persisted with both checkerboard and strip stimuli.
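
A minimal sketch of the relative surround strength measure, assuming a toy difference-of-Gaussians spatial profile and a simple sign-based segmentation of center and surround (the study identifies the two regions explicitly rather than by sign):

```python
# Sketch: relative surround strength = total surround weight / total center weight.
# Sign-based segmentation and the toy receptive field are simplifying assumptions.
import numpy as np

def relative_surround_strength(spatial_rf):
    peak_sign = np.sign(spatial_rf.flat[np.argmax(np.abs(spatial_rf))])
    center_weight = np.abs(spatial_rf[np.sign(spatial_rf) == peak_sign]).sum()
    surround_weight = np.abs(spatial_rf[np.sign(spatial_rf) == -peak_sign]).sum()
    return surround_weight / center_weight

# Toy example: a center-surround profile on a small grid.
x, y = np.meshgrid(np.linspace(-3, 3, 31), np.linspace(-3, 3, 31))
r2 = x**2 + y**2
rf = np.exp(-r2 / 0.5) - 0.4 * np.exp(-r2 / 2.0)
print("relative surround strength:", round(relative_surround_strength(rf), 3))
```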

To test, without assuming a model, whether the retina decorrelates stimuli, we also measured the pairwise correlations between spike trains of simultaneously recorded neurons under three conditions: white checkerboard, exponentially correlated noise, and scale-free noise. The typical amount of pairwise correlation increased with the extent of input correlation, in line with our STRF measurements.
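
For the pairwise measure, a minimal sketch assuming Pearson correlation of binned spike counts (the bin size and synthetic spike times are illustrative):

```python
# Sketch: pairwise correlation as the Pearson correlation of binned spike counts.
import numpy as np

def pairwise_correlation(spike_times_a, spike_times_b, duration_s, bin_s=0.02):
    bins = np.arange(0.0, duration_s + bin_s, bin_s)
    counts_a, _ = np.histogram(spike_times_a, bins)
    counts_b, _ = np.histogram(spike_times_b, bins)
    return float(np.corrcoef(counts_a, counts_b)[0, 1])

rng = np.random.default_rng(4)
shared = np.sort(rng.uniform(0, 60, 200))                 # shared drive
a = np.sort(np.r_[shared, rng.uniform(0, 60, 300)])
b = np.sort(np.r_[shared, rng.uniform(0, 60, 300)])
print("pairwise correlation:", round(pairwise_correlation(a, b, 60.0), 3))
```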

    Extending the Occupancy Grid Concept for Low-Cost Sensor Based SLAM

    The simultaneous localization and mapping problem is approached by using an ultrasound sensor and wheel encoders. To account for the low precision inherent in ultrasound sensors, the occupancy grid notion is extended. The extension takes into consideration the angle at which the sensor is pointing, to compensate for the fact that an object is not necessarily detectable from all positions, owing to how ultrasonic range sensors work. Also, a mixed linear/nonlinear model is derived for future use in Rao-Blackwellized particle smoothing.
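
One way to picture the angle-aware extension, as a sketch only: each grid cell keeps separate occupancy evidence per discretized sensor heading, so a cell counts as occupied only from directions in which it has actually been observed. The sector count and log-odds increments below are illustrative assumptions, not the thesis's actual formulation.

```python
# Sketch of an angle-extended occupancy grid: per-cell log-odds per heading sector.
import numpy as np

N_SECTORS = 8                               # discretized sensor headings (assumption)
grid = np.zeros((100, 100, N_SECTORS))      # log-odds per cell and sector

def sector_index(heading_rad):
    return int((heading_rad % (2 * np.pi)) / (2 * np.pi) * N_SECTORS) % N_SECTORS

def update_cell(grid, ix, iy, heading_rad, hit, l_occ=0.85, l_free=-0.4):
    """Add occupancy evidence only for the sector the sensor was pointing from."""
    grid[ix, iy, sector_index(heading_rad)] += l_occ if hit else l_free

update_cell(grid, 42, 17, heading_rad=np.pi / 3, hit=True)
occupied_from_any_angle = grid.max(axis=2) > 0.0
print("cells marked occupied:", int(occupied_from_any_angle.sum()))
```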

    Adaptive Filtering Enhances Information Transmission in Visual Cortex

    Sensory neuroscience seeks to understand how the brain encodes natural environments. However, neural coding has largely been studied using simplified stimuli. In order to assess whether the brain's coding strategy depends on the stimulus ensemble, we apply a new information-theoretic method that allows unbiased calculation of neural filters (receptive fields) from responses to natural scenes or other complex signals with strong multipoint correlations. In the cat primary visual cortex we compare responses to natural inputs with those to noise inputs matched for luminance and contrast. We find that neural filters adaptively change with the input ensemble so as to increase the information carried by the neural response about the filtered stimulus. Adaptation affects the spatial frequency composition of the filter, enhancing sensitivity to under-represented frequencies in agreement with optimal encoding arguments. Adaptation occurs over 40 s to many minutes, longer than most previously reported forms of adaptation. Comment: 20 pages, 11 figures, includes supplementary information
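
A sketch of the kind of information measure involved: the mutual information (in bits per spike) between spiking and the projection of the stimulus onto a candidate filter, estimated from binned projection histograms. The synthetic data, bin count, and nonlinearity are illustrative assumptions; the paper's unbiased estimator for correlated natural stimuli is more involved.

```python
# Sketch: information per spike carried about the filtered stimulus,
#   I = sum_x P(x | spike) * log2[ P(x | spike) / P(x) ]
# where x indexes bins of the stimulus projection onto the filter.
import numpy as np

def info_per_spike(projections, spikes, n_bins=25):
    edges = np.linspace(projections.min(), projections.max(), n_bins + 1)
    p_all, _ = np.histogram(projections, edges)
    p_spk, _ = np.histogram(projections[spikes], edges)
    p_all = p_all / p_all.sum()
    p_spk = p_spk / p_spk.sum()
    mask = (p_spk > 0) & (p_all > 0)
    return float(np.sum(p_spk[mask] * np.log2(p_spk[mask] / p_all[mask])))

rng = np.random.default_rng(5)
proj = rng.normal(0, 1, 100_000)                          # filtered stimulus values
spikes = rng.random(proj.size) < 1.0 / (1.0 + np.exp(-3 * (proj - 1)))
print("bits per spike:", round(info_per_spike(proj, spikes), 3))
```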