
    Visual motion processing and human tracking behavior

    The accurate visual tracking of a moving object is a fundamental human skill that reduces the relative slip and instability of the object's image on the retina, thus granting stable, high-quality vision. To optimize tracking performance over time, a quick estimate of the object's global motion properties needs to be fed to the oculomotor system and dynamically updated. Concurrently, performance can be greatly improved in terms of latency and accuracy by taking into account predictive cues, especially under variable conditions of visibility and in the presence of ambiguous retinal information. Here, we review several recent studies focusing on the integration of retinal and extra-retinal information for the control of human smooth pursuit. By dynamically probing tracking performance with well-established paradigms from the visual perception and oculomotor literature, we provide the basis to test theoretical hypotheses within the framework of dynamic probabilistic inference. In particular, we present applications of these results in light of state-of-the-art computer vision algorithms.
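
    The review frames pursuit control as dynamic probabilistic inference over retinal and extra-retinal signals. As a purely illustrative sketch of that idea (not the authors' model), a one-dimensional Kalman filter over target velocity shows the general shape of such an estimator; the process and measurement noise parameters q and r are assumptions.

```python
# Illustrative sketch only: recursive Bayesian (Kalman) estimate of target
# velocity from noisy retinal-slip measurements. Noise parameters are assumed.
import numpy as np

def kalman_velocity_estimate(retinal_slip, q=0.01, r=0.5):
    """1-D Kalman filter over target velocity; retinal_slip holds the
    measured image-slip velocity (deg/s) on each frame."""
    v_hat, p = 0.0, 1.0            # initial velocity estimate and its variance
    estimates = []
    for z in retinal_slip:
        p += q                     # predict: uncertainty grows with process noise
        k = p / (p + r)            # Kalman gain: trust in the new measurement
        v_hat += k * (z - v_hat)   # update the estimate toward the measured slip
        p *= (1.0 - k)
        estimates.append(v_hat)
    return np.array(estimates)

# Example: a 10 deg/s target observed through noisy slip measurements
slip = 10 + 2 * np.random.randn(200)
print(kalman_velocity_estimate(slip)[-1])   # settles near 10 deg/s
```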

    Computing motion in the primate's visual system

    Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We will show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and proceeds in two stages. In the first stage, local motion is computed; in the second stage, spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle, ‘the final optical flow should be as smooth as possible’ (except at isolated motion discontinuities), explains a large number of phenomena and links single-cell behavior with perception and computational theory.
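
    As a rough, generic illustration of the two-stage scheme described above (local gradient-based measurement followed by smoothness-driven spatial integration), the sketch below follows a Horn–Schunck-style relaxation; the averaging kernel, the smoothness weight alpha, and the iteration count are assumptions rather than the paper's exact implementation.

```python
# Generic gradient-based relaxation sketch in the spirit of the smoothness
# principle above; parameters and kernel are assumptions, not the paper's model.
import numpy as np
from scipy.ndimage import uniform_filter

def relaxation_flow(I1, I2, alpha=1.0, n_iter=100):
    """Estimate a dense flow field (u, v) between two grayscale frames by
    minimizing a data term plus a smoothness penalty via iterative relaxation."""
    Ix = np.gradient(I1, axis=1)        # spatial derivatives (stage 1: local motion)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                        # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):             # stage 2: spatial integration / smoothing
        u_bar = uniform_filter(u, size=3)
        v_bar = uniform_filter(v, size=3)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```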

    The role of terminators and occlusion cues in motion integration and segmentation: a neural network model

    The perceptual interaction of terminators and occlusion cues with the functional processes of motion integration and segmentation is examined using a computational model. Integration is necessary to overcome noise and the inherent ambiguity in locally measured motion direction (the aperture problem). Segmentation is required to detect the presence of motion discontinuities and to prevent spurious integration of motion signals between objects with different trajectories. Terminators are used for motion disambiguation, while occlusion cues are used to suppress motion noise at points where objects intersect. The model illustrates how competitive and cooperative interactions among cells carrying out these functions can account for a number of perceptual effects, including the chopsticks illusion and the occluded diamond illusion. Possible links to the neurophysiology of the middle temporal visual area (MT) are suggested.
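
    A toy numerical illustration (not the paper's neural-network model) of why terminators disambiguate: motion measured along a contour yields only the component normal to the contour (the aperture problem), whereas line endings yield the full two-dimensional velocity, so giving terminator signals a larger weight pulls the integrated estimate toward the true object motion. All numbers and weights below are made up.

```python
import numpy as np

def integrate_motion(contour_normals, terminator_vectors, w_terminator=10.0):
    """Weighted vector average of ambiguous contour signals (normal components
    only) and unambiguous terminator signals (full 2-D velocities)."""
    signals = np.vstack([contour_normals, terminator_vectors])
    weights = np.concatenate([np.ones(len(contour_normals)),
                              w_terminator * np.ones(len(terminator_vectors))])
    return (weights[:, None] * signals).sum(axis=0) / weights.sum()

# A 45-degree line translating rightward at (1, 0): contour points signal only
# the normal component (0.5, 0.5); the two terminators signal the true (1, 0).
contour = np.tile([0.5, 0.5], (20, 1))
terminators = np.tile([1.0, 0.0], (2, 1))
print(integrate_motion(contour, terminators))  # pulled toward (1, 0) as terminator weight grows
```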

    Object Rigidity: Competition and cooperation between motion-energy and feature-tracking mechanisms and shape-based priors

    Why do moving objects appear rigid when their projected retinal images are deformed non-rigidly? We used rotating rigid objects that can appear rigid or non-rigid to test whether shape features contribute to rigidity perception. When two circular rings were rigidly linked at an angle and jointly rotated at moderate speeds, observers reported that the rings wobbled and were not rigidly linked, whereas rigid rotation was reported at slow speeds. When gaps, paint, or vertices were added, the rings appeared to rotate rigidly even at moderate speeds. At high speeds, all configurations appeared non-rigid. Salient features thus contribute to rigidity at slow and moderate speeds, but not at high speeds. Simulated responses of arrays of motion-energy cells showed that motion flow vectors are predominantly orthogonal to the contours of the rings, not parallel to the rotation direction. A convolutional neural network trained to distinguish flow patterns for wobbling versus rotation gave a high probability of wobbling for the motion-energy flows. However, the CNN gave high probabilities of rotation for motion flows generated by tracking features with arrays of MT pattern-motion cells and corner detectors. In addition, circular rings can appear to spin and roll despite the absence of any sensory evidence, and this illusion is prevented by vertices, gaps, and painted segments, showing the effects of rotational symmetry and shape. Combining the CNN outputs, with greater weight given to motion energy at fast speeds and to feature tracking at slow speeds, with shape-based priors for wobbling and rolling explained rigid and non-rigid percepts across shapes and speeds (R² = 0.95). The results demonstrate how cooperation and competition between different neuronal classes lead to specific states of visual perception and to transitions between those states.
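
    The final combination stage can be sketched generically as a speed-weighted mixture of the two pathway outputs combined with a shape-based prior. The logistic weighting, its parameters, and the prior values below are illustrative assumptions, not the study's fitted model.

```python
# Generic sketch of cue combination: motion-energy and feature-tracking outputs
# weighted by speed, then combined with a shape-based prior. All values assumed.
import numpy as np

def p_wobble(p_me, p_ft, speed, prior_wobble, k=2.0, speed0=1.0):
    """p_me, p_ft: probability of 'wobbling' from the motion-energy and
    feature-tracking pathways; speed in deg/s; prior_wobble from shape."""
    w_me = 1.0 / (1.0 + np.exp(-k * (speed - speed0)))   # motion energy dominates at high speed
    likelihood = w_me * p_me + (1.0 - w_me) * p_ft
    # combine with the shape-based prior (simple Bayesian-style weighting)
    post = likelihood * prior_wobble
    return post / (post + (1.0 - likelihood) * (1.0 - prior_wobble))

# Example: connected rings without salient features, rotated at moderate speed
print(p_wobble(p_me=0.9, p_ft=0.2, speed=1.5, prior_wobble=0.6))
```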

    Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices.

    Computational theories propose that attention modulates the topographical landscape of spatial 'priority' maps in regions of the visual cortex so that the location of an important object is associated with higher activation levels. Although studies of single-unit recordings have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here we used functional magnetic resonance imaging and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size.
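
    A minimal sketch of the general inverted-encoding-model logic described above: weights mapping modeled spatial channels to voxel responses are estimated on training data and then inverted to reconstruct channel responses from held-out activation patterns. Array shapes and the least-squares estimator are standard assumptions, not the study's exact pipeline.

```python
# Minimal inverted-encoding-model sketch; shapes and estimator are assumptions.
import numpy as np

def reconstruct_channels(B_train, C_train, B_test):
    """B_*: (trials, voxels) activation patterns; C_train: (trials, channels)
    modeled spatial-channel responses. Returns estimated channel responses
    (trials, channels) for the test data."""
    W, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)   # (channels, voxels)
    return B_test @ np.linalg.pinv(W)                        # invert the model

# Toy example with simulated data, just to show the shapes involved
rng = np.random.default_rng(0)
C_train = rng.random((100, 8))
W_true = rng.random((8, 50))
B_train = C_train @ W_true + 0.1 * rng.standard_normal((100, 50))
B_test = rng.random((20, 8)) @ W_true
print(reconstruct_channels(B_train, C_train, B_test).shape)  # (20, 8)
```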

    Hierarchical neural computation in the mammalian visual system

    Our visual system can efficiently extract behaviorally relevant information from ambiguous and noisy luminance patterns. Although we know much about the anatomy and physiology of the visual system, it remains unclear how the computations performed by individual visual neurons arise from the underlying neural circuits. In this thesis, I designed novel statistical modeling approaches to study hierarchical neural computation, using electrophysiological recordings from several stages of the mammalian visual system. In Chapter 2, I describe a two-stage nonlinear model that characterizes both the synaptic currents and spike responses of retinal ganglion cells with unprecedented accuracy. I found that excitatory synaptic currents to ganglion cells are well described by excitatory inputs multiplied by divisive suppression, and that spike responses can be explained with the addition of a second stage of spiking nonlinearity and refractoriness. The structure of the model was inspired by known elements of the retinal circuit, and implies that presynaptic inhibition from amacrine cells is an important mechanism underlying ganglion cell computation. In Chapter 3, I describe a hierarchical stimulus-processing model of MT neurons in the context of a naturalistic optic flow stimulus. The model incorporates relevant nonlinear properties of upstream V1 processing and explains MT neuron responses to complex motion stimuli. MT neuron responses are shown to be best predicted from distinct excitatory and suppressive components. The direction-selective suppression can impart selectivity of MT neurons to complex velocity fields and contribute to improved estimation of the three-dimensional velocity of moving objects. In Chapter 4, I present an extended model of MT neurons that includes both the stimulus-processing component and network activity reflected in local field potentials (LFPs). A significant fraction of the trial-to-trial variability of MT neuron responses is predictable from the LFPs during both passive fixation and a motion discrimination task. Moreover, the choice-related variability of MT neuron responses can be explained by their phase preferences in low-frequency-band LFPs. These results suggest an important role of network activity in cortical function. Together, these results demonstrate that it is possible to infer the nature of neural computation from physiological recordings using statistical modeling approaches.
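
    The Chapter 2 computation can be sketched schematically as excitation divided by a rectified suppressive signal, followed by a spiking nonlinearity (refractoriness is omitted here). The filter shapes and parameters below are placeholders, not the fitted model.

```python
# Schematic two-stage sketch: divisive suppression, then a spiking nonlinearity.
# Filters and parameters are placeholders; refractoriness is omitted.
import numpy as np

def two_stage_response(stim, k_exc, k_sup, threshold=0.0, gain=10.0):
    """stim: 1-D stimulus; k_exc, k_sup: linear filters for the excitatory and
    suppressive pathways. Returns a predicted firing rate over time."""
    exc = np.convolve(stim, k_exc, mode="same")
    sup = np.maximum(np.convolve(stim, k_sup, mode="same"), 0.0)
    current = exc / (1.0 + sup)                           # divisive suppression (stage 1)
    return gain * np.maximum(current - threshold, 0.0)    # spiking nonlinearity (stage 2)

# Example with toy exponential filters and white-noise input
t = np.arange(30)
k_exc = np.exp(-t / 5.0)
k_sup = np.exp(-t / 10.0)
rate = two_stage_response(np.random.randn(500), k_exc, k_sup)
```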

    Sensory-to-motor processing of the ocular-following response.

    The ocular-following response is a slow tracking eye movement that is elicited by sudden drifting movements of a large-field visual stimulus in primates. It helps to stabilize the eyes on the visual scene. Previous single-unit recordings and chemical lesion studies have reported that the ocular-following response is mediated by a pathway that includes the medial superior temporal (MST) area of the cortex and the ventral paraflocculus (VPFL) of the cerebellum. Using a linear regression model, we systematically analyzed the quantitative relationships between the complex temporal patterns of neural activity at each level and the sensory input and motor output signals (acceleration, velocity, and position) during ocular following. The results revealed the following: (1) the temporal firing pattern of MST neurons locally encodes the dynamic properties of the visual stimulus within a limited range; (2) the temporal firing pattern of the Purkinje cells in the cerebellum globally encodes almost the entire motor command for the ocular-following response. We conclude that the cerebellum is the major site of the sensory-to-motor transformation necessary for ocular following, where population coding is integrated into rate coding.
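
    The regression analysis has the generic form of modeling firing rate as a weighted sum of eye acceleration, velocity, and position. The sketch below shows that form on simulated data; the coefficients and signals are illustrative, not the study's recordings.

```python
# Generic form of the regression described above, on simulated data.
import numpy as np

def fit_dynamics_regression(rate, acc, vel, pos):
    """Least-squares fit of rate(t) = a*acc(t) + b*vel(t) + c*pos(t) + d."""
    X = np.column_stack([acc, vel, pos, np.ones_like(rate)])
    coefs, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return coefs, X @ coefs                      # coefficients and predicted rate

# Toy example: a saccade-like position trace and a rate dominated by velocity
t = np.linspace(0, 1, 500)
pos = 1 - np.exp(-5 * t)
vel = np.gradient(pos, t)
acc = np.gradient(vel, t)
rate = 0.2 * acc + 3.0 * vel + 0.5 * pos + 0.1 * np.random.randn(500)
print(fit_dynamics_regression(rate, acc, vel, pos)[0])   # recovers ~[0.2, 3.0, 0.5, 0]
```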

    Stronger Neural Modulation by Visual Motion Intensity in Autism Spectrum Disorders

    Theories of autism spectrum disorders (ASD) have focused on altered perceptual integration of sensory features as a possible core deficit. Yet, there is little understanding of the neuronal processing of elementary sensory features in ASD. For typically developed individuals, we previously established a direct link between frequency-specific neural activity and the intensity of a specific sensory feature: gamma-band activity in the visual cortex increased approximately linearly with the strength of visual motion. Using magnetoencephalography (MEG), we investigated whether, in individuals with ASD, neural activity reflects the coherence, and thus intensity, of visual motion in a similar fashion. Thirteen adult participants with ASD and 14 control participants performed a motion direction discrimination task with increasing levels of motion coherence. A polynomial regression analysis revealed that gamma-band power increased significantly more steeply with motion coherence in ASD than in controls, suggesting excessive visual activation with increasing stimulus intensity originating from motion-responsive visual areas V3, V6, and hMT/V5. Enhanced neural responses with increasing stimulus intensity suggest an enhanced response gain in ASD. Response gain is controlled by excitatory-inhibitory interactions, which also drive high-frequency oscillations in the gamma band. Thus, our data suggest that a disturbed excitatory-inhibitory balance underlies enhanced neural responses to coherent motion in ASD.
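
    The group comparison can be sketched generically: fit gamma-band power as a polynomial function of motion coherence for each participant, then compare the linear coefficients between groups. The simulated data and the t-test below are illustrative assumptions, not the study's analysis.

```python
# Generic sketch: per-participant polynomial fits of gamma power vs. coherence,
# then a group comparison of the linear coefficients. Data are simulated.
import numpy as np
from scipy import stats

def coherence_slopes(power_by_subject, coherence_levels, deg=2):
    """power_by_subject: (subjects, levels) gamma power; returns the linear
    coefficient of a degree-`deg` polynomial fit for each subject."""
    return np.array([np.polynomial.polynomial.polyfit(coherence_levels, p, deg)[1]
                     for p in power_by_subject])

# Compare linear coefficients between two simulated groups (e.g., ASD vs. controls)
coh = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
rng = np.random.default_rng(1)
asd = 2.0 * coh + 0.2 * rng.standard_normal((13, 5))
ctl = 1.0 * coh + 0.2 * rng.standard_normal((14, 5))
print(stats.ttest_ind(coherence_slopes(asd, coh), coherence_slopes(ctl, coh)))
```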