
    Decoding the activity of neuronal populations in macaque primary visual cortex

    Visual function depends on the accuracy of signals carried by visual cortical neurons. Combining information across neurons should improve this accuracy because single neuron activity is variable. We examined the reliability of information inferred from populations of simultaneously recorded neurons in macaque primary visual cortex. We considered a decoding framework that computes the likelihood of visual stimuli from a pattern of population activity by linearly combining neuronal responses and tested this framework for orientation estimation and discrimination. We derived a simple parametric decoder assuming neuronal independence and a more sophisticated empirical decoder that learned the structure of the measured neuronal response distributions, including their correlated variability. The empirical decoder used the structure of these response distributions to perform better than its parametric variant, indicating that their structure contains critical information for sensory decoding. These results show how neuronal responses can best be used to inform perceptual decision-making.
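The linear-readout likelihood computation described above can be sketched for the independence-assuming (parametric) case: under independent Poisson noise, the log-likelihood of a stimulus is a linear combination of the neuronal responses. The tuning curves, neuron count, and noise model below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 8 neurons with von Mises orientation tuning on
# 0-180 deg. All parameters are illustrative, not taken from the paper.
n_neurons = 8
prefs = np.linspace(0, 180, n_neurons, endpoint=False)   # preferred orientations
thetas = np.linspace(0, 180, 180, endpoint=False)        # candidate stimuli

def tuning(theta):
    """Mean firing rate (spikes/trial) of each neuron at orientation theta (deg)."""
    return 2.0 + 20.0 * np.exp(3.0 * (np.cos(np.deg2rad(2.0 * (theta - prefs))) - 1.0))

def log_likelihood(r, theta):
    """Poisson log-likelihood of response vector r for stimulus theta.

    With independent Poisson noise this is linear in the responses:
    sum_i r_i * log f_i(theta) - sum_i f_i(theta) (up to a constant).
    """
    f = tuning(theta)
    return r @ np.log(f) - f.sum()

# Simulate one trial at 45 deg and decode by maximum likelihood.
true_theta = 45.0
r = rng.poisson(tuning(true_theta))
ll = np.array([log_likelihood(r, th) for th in thetas])
theta_hat = thetas[np.argmax(ll)]
```

The empirical decoder in the paper goes further by learning the joint (correlated) response distribution; the sketch above shows only the independent baseline it outperformed.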

    Integration of sensory evidence in motion discrimination

    To make perceptual judgments, the brain must decode the responses of sensory cortical neurons. The direction of visual motion is represented by the activity of direction-selective neurons. Because these neurons are often broadly tuned and their responses are inherently variable, the brain must appropriately integrate their responses to infer the direction of motion reliably. The optimal integration strategy is task dependent. For coarse direction discriminations, neurons tuned to the directions of interest provide the most reliable information, but for fine discriminations, neurons with preferred directions displaced away from the target directions are more informative. We measured coarse and fine direction discriminations with random-dot stimuli. Unbeknownst to the observers, we added subthreshold motion signals of different directions to perturb the responses of different groups of direction-selective neurons. The pattern of biases induced by subthreshold signals of different directions indicates that subjects' choice behavior relied on the activity of neurons with a wide range of preferred directions. For coarse discriminations, observers' judgments were most strongly determined by neurons tuned to the target directions, but for fine discriminations, neurons with displaced preferred directions had the largest influence. We conclude that perceptual decisions rely on a population decoding strategy that takes the statistical reliability of sensory responses into account.

    Gaze and viewing angle influence visual stabilization of upright posture

    Focusing gaze on a target helps stabilize upright posture. We investigated how this visual stabilization can be affected by observing a target presented under different gaze and viewing angles. In a series of 10-second trials, participants (N = 20, 29.3 ± 9 years of age) stood on a force plate and fixed their gaze on a figure presented on a screen at a distance of 1 m. The figure changed position (gaze angle: eye level (0°), 25° up or down), vertical body orientation (viewing angle: at eye level but rotated 25° as if leaning toward or away from the participant), or both (gaze and viewing angle: 25° up or down with the rotation equivalent of a natural visual perspective). Amplitude of participants’ sagittal displacement, surface area, and angular position of the center of gravity (COG) were compared. Results showed decreased COG velocity and amplitude for up and down gaze angles. Changes in viewing angles resulted in altered body alignment and increased amplitude of COG displacement. No significant changes in postural stability were observed when both gaze and viewing angles were altered. Results suggest that both the gaze angle and viewing perspective may be essential variables of the visuomotor system modulating postural responses.
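The posturographic measures named above (sagittal displacement amplitude, sway surface area, COG velocity) can be computed from a COG trace roughly as follows. The synthetic random-walk trace, 100 Hz sampling rate, and 95% confidence-ellipse definition of surface area are illustrative stand-ins; the abstract does not specify the study's exact processing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 10-s center-of-gravity trace sampled at 100 Hz
# (synthetic random-walk sway; units mm). Illustrative only.
fs = 100
t = np.arange(0, 10, 1 / fs)
ap = np.cumsum(rng.normal(0, 0.05, t.size))   # anterior-posterior (sagittal)
ml = np.cumsum(rng.normal(0, 0.05, t.size))   # medio-lateral

# Amplitude of sagittal displacement: peak-to-peak AP excursion (mm).
ap_amplitude = ap.max() - ap.min()

# Mean COG velocity: mean speed of the planar trajectory (mm/s).
velocity = np.mean(np.hypot(np.diff(ap), np.diff(ml))) * fs

# Sway surface area: area of the 95% confidence ellipse of the
# (ml, ap) point cloud; chi-square(2 dof, 0.95) = 5.991.
cov = np.cov(ml, ap)
eigvals = np.linalg.eigvalsh(cov)
area_95 = np.pi * 5.991 * np.sqrt(eigvals[0] * eigvals[1])
```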

    Spatial contrast sensitivity in adolescents with autism spectrum disorders

    Adolescents with autism spectrum disorders (ASD) and typically developing (TD) controls underwent a rigorous psychophysical assessment that measured contrast sensitivity to seven spatial frequencies (0.5-20 cycles/degree). A contrast sensitivity function (CSF) was then fitted for each participant, from which four measures were obtained: visual acuity, peak spatial frequency, peak contrast sensitivity, and contrast sensitivity at a low spatial frequency. There were no group differences on any of the four CSF measures, indicating no differential spatial frequency processing in ASD. Although it has been suggested that detail-oriented visual perception in individuals with ASD may be a result of differential sensitivities to low versus high spatial frequencies, the current study finds no evidence to support this hypothesis.
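One common way to obtain the four CSF measures listed above is to fit a log-parabola model, which is an ordinary parabola in log-log coordinates, so a quadratic least-squares fit recovers the peak and the high-frequency cutoff. The sensitivities below are invented, and the log-parabola form is an assumption, since the abstract does not name the fitted function.

```python
import numpy as np

# Hypothetical sensitivities at the seven spatial frequencies (cpd)
# in the study's range; the values are illustrative, not reported data.
sf = np.array([0.5, 1, 2, 4, 8, 16, 20])
sensitivity = np.array([40.0, 80.0, 120.0, 100.0, 40.0, 8.0, 3.0])

# A log-parabola CSF is a parabola in log-log axes.
x, y = np.log10(sf), np.log10(sensitivity)
c2, c1, c0 = np.polyfit(x, y, 2)                 # y = c2*x^2 + c1*x + c0

peak_x = -c1 / (2 * c2)                          # vertex of the parabola
peak_sf = 10 ** peak_x                           # peak spatial frequency (cpd)
peak_sens = 10 ** np.polyval([c2, c1, c0], peak_x)  # peak contrast sensitivity

# Visual acuity estimate: high-frequency cutoff where sensitivity = 1
# (log10 sensitivity = 0), i.e. the larger root of the parabola.
disc = c1 ** 2 - 4 * c2 * c0
acuity = 10 ** ((-c1 - np.sqrt(disc)) / (2 * c2))   # c2 < 0, so this is the larger root
```

Contrast sensitivity at a low spatial frequency, the fourth measure, is simply the fitted curve evaluated at that frequency.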

    A Multi-Stage Model for Fundamental Functional Properties in Primary Visual Cortex

    Many neurons in mammalian primary visual cortex have properties, such as sharp tuning for contour orientation, strong selectivity for motion direction, and insensitivity to stimulus polarity, that are not shared with their sub-cortical counterparts. Successful models have been developed for a number of these properties, but in one case, direction selectivity, there is no consensus about underlying mechanisms. Here we define a model that accounts for many of the empirical observations concerning direction selectivity. The model describes a single column of cat primary visual cortex and comprises a series of processing stages. Each neuron in the first cortical stage receives input from a small number of on-centre and off-centre relay cells in the lateral geniculate nucleus. Consistent with recent physiological evidence, the off-centre inputs to cortex precede the on-centre inputs by a small (∼4 ms) interval, and it is this difference that confers direction selectivity on model neurons. We show that the resulting model successfully matches the following empirical data: the proportion of cells that are direction selective; tilted spatiotemporal receptive fields; phase advance in the response to a stationary contrast-reversing grating stepped across the receptive field. The model also accounts for several other fundamental properties. Receptive fields have elongated subregions, orientation selectivity is strong, and the distribution of orientation tuning bandwidth across neurons is similar to that seen in the laboratory. Finally, neurons in the first stage have properties corresponding to simple cells, and more complex-like cells emerge in later stages. The results therefore show that a simple feed-forward model can account for a number of the fundamental properties of primary visual cortex.
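The timing-based mechanism can be illustrated with a minimal correlator: two spatially offset inputs, one delayed by a fixed latency, respond coincidently only when motion sweeps across them in the matching direction. This is a deliberate simplification of the off-before-on geniculate mechanism the abstract describes, and every number below is made up for illustration.

```python
import numpy as np

dt = 1.0                                  # ms per sample
t = np.arange(0, 200, dt)
x1, x2 = 10.0, 14.0                       # subunit positions (arbitrary units)
speed = 0.5                               # bar speed, units per ms
lag_ms = (x2 - x1) / speed                # latency that matches this motion

def drive(position, direction):
    """Gaussian activity burst as a bar moving in `direction` crosses `position`."""
    pos = 12.0 + direction * speed * (t - 100.0)   # bar trajectory over time
    return np.exp(-0.5 * ((pos - position) / 2.0) ** 2)

def response(direction):
    s1 = drive(x1, direction)
    s2 = drive(x2, direction)
    lag = int(lag_ms / dt)
    s1_delayed = np.pad(s1, (lag, 0))[: s1.size]   # delay the leading input
    return np.sum(s1_delayed * s2)                  # coincidence detection

pref = response(+1)                        # motion from x1 toward x2: signals align
null = response(-1)                        # opposite motion: signals miss each other
direction_index = (pref - null) / (pref + null)
```

In the paper's model the asymmetry comes from the ∼4 ms off-before-on latency within a linear receptive field rather than an explicit multiplication, but the underlying logic, a spatial offset paired with a temporal offset, is the same.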

    Combining Feature Selection and Integration—A Neural Model for MT Motion Selectivity

    Background: The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches were developed to explain the findings. In integrationist models the key mechanism to achieve pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on the motion computation at positions with 2D features. Methodology/Principal Findings: Recent experiments revealed that neither of the two concepts alone is sufficient to explain all experimental data and that most of the existing models cannot account for the complex behaviour found. MT pattern selectivity changes over time for stimuli like type II plaids from vector average to the direction computed with an intersection of constraints rule or by feature tracking. Also, the spatial arrangement of the stimulus within the receptive field of an MT cell plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations that are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem. Conclusions/Significance: We propose a new neural model for MT pattern computation and motion disambiguation.

    Object Segmentation from Motion Discontinuities and Temporal Occlusions–A Biologically Inspired Model

    BACKGROUND: Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. METHODOLOGY/PRINCIPAL FINDINGS: From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task. This is due to the problem that flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for the detection of motion discontinuities and of occlusion regions, based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information from different model components of the visual processing via feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of the kinetic boundary detection. CONCLUSIONS/SIGNIFICANCE: A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested with both artificial and real sequences including self and object motion.

    A New Perceptual Bias Reveals Suboptimal Population Decoding of Sensory Responses

    Several studies have reported optimal population decoding of sensory responses in two-alternative visual discrimination tasks. Such decoding involves integrating noisy neural responses into a more reliable representation of the likelihood that the stimuli under consideration evoked the observed responses. Importantly, an ideal observer must be able to evaluate likelihood with high precision and only consider the likelihood of the two relevant stimuli involved in the discrimination task. We report a new perceptual bias suggesting that observers read out the likelihood representation with remarkably low precision when discriminating grating spatial frequencies. Using spectrally filtered noise, we induced an asymmetry in the likelihood function of spatial frequency. This manipulation mainly affects the likelihood of spatial frequencies that are irrelevant to the task at hand. Nevertheless, we find a significant shift in perceived grating frequency, indicating that observers evaluate likelihoods of a broad range of irrelevant frequencies and discard prior knowledge of stimulus alternatives when performing two-alternative discrimination.
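The readout distinction at the heart of this result can be sketched numerically: a strict two-alternative observer compares likelihood at only the two task-relevant frequencies, whereas a broad readout (here a likelihood-weighted mean over all frequencies) is pulled toward an asymmetric skirt added at irrelevant frequencies. The likelihood shapes and all numbers below are invented for illustration, standing in for the spectrally filtered noise manipulation.

```python
import numpy as np

f = np.linspace(0.5, 8.0, 400)            # candidate spatial frequencies (cpd)
f_true = 3.0                              # frequency of the grating itself

# Symmetric likelihood peak at the true frequency, plus an asymmetric
# skirt on the high-frequency side at task-irrelevant frequencies.
like = np.exp(-0.5 * ((f - f_true) / 0.4) ** 2)
like += 0.3 * np.exp(-0.5 * ((f - 5.0) / 1.0) ** 2)

# Broad readout: likelihood-weighted mean over the whole frequency axis.
# The irrelevant skirt biases this estimate away from f_true.
broad_estimate = np.sum(f * like) / np.sum(like)

# Restricted readout: compare likelihood at only the two alternatives.
f_a, f_b = 2.8, 3.2
choice = f_a if np.interp(f_a, f, like) > np.interp(f_b, f, like) else f_b
```

The perceptual shift the paper reports is the signature of the broad readout: the likelihood-weighted estimate moves with the asymmetric skirt even though the skirt lies at frequencies a strict two-alternative observer would ignore.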