    Functional correlates of optic flow motion processing in Parkinson’s disease

    The visual input created by the relative motion between an individual and the environment, also called optic flow, influences the sense of self-motion, postural orientation, veering of gait, and visuospatial cognition. An optic flow network comprising visual motion areas V6, V3A, and MT+, as well as visuo-vestibular areas including posterior insula vestibular cortex (PIVC) and cingulate sulcus visual area (CSv), has been described as uniquely selective for parsing egomotion depth cues in humans. Individuals with Parkinson’s disease (PD) have known behavioral deficits in optic flow perception and visuospatial cognition compared to age- and education-matched control adults (MC). The present study used functional magnetic resonance imaging (fMRI) to investigate neural correlates related to impaired optic flow perception in PD. We conducted fMRI on 40 non-demented participants (23 PD and 17 MC) during passive viewing of simulated optic flow motion and random motion. We hypothesized that compared to the MC group, PD participants would show abnormal neural activity in regions comprising this optic flow network. MC participants showed robust activation across all regions in the optic flow network, consistent with studies in young adults, suggesting intact optic flow perception at the neural level in healthy aging. PD participants showed diminished activity compared to MC particularly within visual motion area MT+ and the visuo-vestibular region CSv. Further, activation in visuo-vestibular region CSv was associated with disease severity. These findings suggest that behavioral reports of impaired optic flow perception and visuospatial performance may be a result of impaired neural processing within visual motion and visuo-vestibular regions in PD.

    Visual and eye movement functions of the posterior parietal cortex

    Lesions of the posterior parietal area in humans produce interesting spatial-perceptual and spatial-behavioral deficits. Among the more important deficits observed are loss of spatial memories, problems representing spatial relations in models or drawings, disturbances in the spatial distribution of attention, and the inability to localize visual targets. Posterior parietal lesions in nonhuman primates also produce visual spatial deficits not unlike those found in humans. Mountcastle and his colleagues were the first to explore this area, using single cell recording techniques in behaving monkeys over 13 years ago. Subsequent work by Mountcastle, Lynch and colleagues, Hyvarinen and colleagues, Robinson, Goldberg & Stanton, and Sakata and colleagues during the period of the late 1970s and early 1980s provided an informational and conceptual foundation for exploration of this fascinating area of the brain. Four new directions of research that are presently being explored from this foundation are reviewed in this article. 1. The anatomical and functional organization of the inferior parietal lobule is presently being investigated with neuroanatomical tracing and single cell recording techniques. This area is now known to comprise at least four separate cortical fields. 2. Neural mechanisms for spatial constancy are being explored. In area 7a information about eye position is found to be integrated with visual inputs to produce representations of visual space that are head-centered (the meaning of a head-centered coordinate system is explained on p. 13). 3. The role of the posterior parietal cortex, and the pathways projecting into this region, in processing information about motion in the visual world is under investigation. Visual areas within the posterior parietal cortex may play a role in extracting higher level motion information including the perception of structure-from-motion. 4. A previously unexplored area within the intraparietal sulcus has been found whose cells hold a representation in memory of planned eye movements. Special experimental protocols have shown that these cells code the direction and amplitude of intended movements in motor coordinates and suggest that this area plays a role in motor planning.
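
    The head-centered coding result in point 2 can be stated concretely. The sketch below assumes the simplest possible combination rule, straight vector addition of retinal target position and eye position; the function name and the numbers are illustrative, not taken from the article.

```python
import numpy as np

# Hypothetical illustration of head-centered coding: a target's location
# relative to the head can be recovered by combining its retinal position
# with the current eye position. Area 7a neurons are thought to carry this
# combination via eye-position "gain fields"; shown here is only the
# resulting coordinate transform, not a neural implementation.

def head_centered_position(retinal_pos, eye_pos):
    """Target location in head-centered coordinates (degrees)."""
    return np.asarray(retinal_pos, dtype=float) + np.asarray(eye_pos, dtype=float)

# A stimulus 5 deg right of the fovea while the eyes look 10 deg left
# lies 5 deg left of the head's midline.
print(head_centered_position([5.0, 0.0], [-10.0, 0.0]))  # -> [-5.  0.]
```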

    Computing motion in the primate's visual system

    Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We will show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and is performed in two steps. In the first stage, local motion is computed, while in the second stage spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle, ‘the final optical flow should be as smooth as possible’ (except at isolated motion discontinuities), explains a large number of phenomena and links single-cell behavior with perception and computational theory.
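
    For readers unfamiliar with this class of algorithm, here is a minimal sketch of a Horn & Schunck-style relaxation, the classic gradient-based scheme that minimizes a data term plus a smoothness term of the kind described above. It is a generic textbook version, not the authors' two-stage network; `alpha` weights the smoothness prior, and borders are treated as periodic for brevity.

```python
import numpy as np

def horn_schunck(frame1, frame2, alpha=1.0, n_iter=300):
    """Dense flow (u, v) minimizing data error + alpha^2 * smoothness."""
    I1 = frame1.astype(float)
    I2 = frame2.astype(float)
    Ix = np.gradient(I1, axis=1)           # spatial intensity gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                           # temporal gradient
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # 4-neighbour mean approximates the local average flow.
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Jacobi update derived from the Euler-Lagrange equations.
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v

# Vertical grating translated 0.5 px rightward between frames.
x = np.arange(64)
X, _ = np.meshgrid(x, x)
f1 = np.sin(2 * np.pi * X / 16)
f2 = np.sin(2 * np.pi * (X - 0.5) / 16)
u, v = horn_schunck(f1, f2)
print(u.mean(), v.mean())  # u near +0.5, v near 0
```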

    Computing optical flow in the primate visual system

    Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We show how gradient models, a well-known class of motion algorithms, can be implemented within the magnocellular pathway of the primate's visual system. Our cooperative algorithm computes optical flow in two steps. In the first stage, assumed to be located in primary visual cortex, local motion is measured, while spatial integration occurs in the second stage, assumed to be located in the middle temporal area (MT). The final optical flow is extracted in this second stage using population coding, such that the velocity is represented by the vector sum of neurons coding for motion in different directions. Our theory, relating the single-cell to the perceptual level, accounts for a number of psychophysical and electrophysiological observations and illusions.
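
    The vector-sum readout mentioned here takes only a few lines to illustrate. In the sketch below, the bank of direction-tuned units and their von Mises-like tuning are invented for the example; the point is that the population vector of rate-weighted preferred directions recovers the stimulus direction.

```python
import numpy as np

# Illustrative population-vector decoder: velocity direction is read out
# as the vector sum of units' preferred directions weighted by firing rate.
n_units = 16
preferred = np.linspace(0.0, 2.0 * np.pi, n_units, endpoint=False)

def decode_velocity(rates):
    """Return (speed, direction) from the rate-weighted vector sum."""
    vx = np.sum(rates * np.cos(preferred))
    vy = np.sum(rates * np.sin(preferred))
    return np.hypot(vx, vy), np.arctan2(vy, vx)

true_dir = np.pi / 4                           # motion at 45 degrees
rates = np.exp(np.cos(preferred - true_dir))   # broad, von Mises-like tuning
speed, direction = decode_velocity(rates)
print(np.degrees(direction))                   # ~45.0
```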

    Neural coding of naturalistic motion stimuli

    We study a wide-field motion-sensitive neuron in the visual system of the blowfly Calliphora vicina. By rotating the fly on a stepper motor outdoors in a wooded area, along an angular motion trajectory representative of natural flight, we stimulate the fly's visual system with input that approaches the natural situation. The neural response is analyzed in the framework of information theory, using methods that are free from assumptions. We demonstrate that information about the motion trajectory increases as the light level increases over a natural range. This indicates that the fly's brain utilizes the increase in photon flux to extract more information from the photoreceptor array, suggesting that imprecision in neural signals is dominated by photon shot noise in the physical input, rather than by noise generated within the nervous system itself.
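
    The shot-noise point can be made concrete with a toy Poisson model: when responses inherit Poisson photon statistics, the information a count carries about the stimulus grows with mean light level, because relative fluctuations shrink as the square root of the flux. Everything below (the two-state stimulus, rates, modulation depth, and the plug-in estimator) is an illustrative assumption, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_info_bits(stimuli, counts):
    """Plug-in estimate of I(stimulus; count) in bits from paired samples."""
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    c_vals, c_idx = np.unique(counts, return_inverse=True)
    joint = np.zeros((len(s_vals), len(c_vals)))
    np.add.at(joint, (s_idx, c_idx), 1.0)     # empirical joint histogram
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)     # marginal over stimuli
    pc = joint.sum(axis=0, keepdims=True)     # marginal over counts
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pc)[nz]))

# Two motion "states" modulate the photon rate by 20%; information about
# the state rises with mean photon flux (less relative shot noise).
for mean_photons in (10, 100, 1000):
    s = rng.integers(0, 2, 20000)
    counts = rng.poisson(mean_photons * (1.0 + 0.2 * s))
    print(mean_photons, round(mutual_info_bits(s, counts), 3))
```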

    Multiscale sampling model for motion integration

    Biologically plausible strategies for visual scene integration across spatial and temporal domains continue to be a challenging topic. The fundamental question we address is whether classical problems in motion integration, such as the aperture problem, can be solved in a model that samples the visual scene at multiple spatial and temporal scales in parallel. We hypothesize that fast interareal connections that allow feedback of information between cortical layers are the key processes that disambiguate motion direction. We developed a neural model showing how the aperture problem can be solved using different spatial sampling scales between LGN, V1 layer 4, V1 layer 6, and area MT. Our results suggest that multiscale sampling, rather than feedback explicitly, is the key process that gives rise to end-stopped cells in V1 and enables area MT to solve the aperture problem without the need for calculating intersecting constraints or crafting intricate patterns of spatiotemporal receptive fields. Furthermore, the model explains why end-stopped cells no longer emerge in the absence of V1 layer 6 activity (Bolz & Gilbert, 1986), why V1 layer 4 cells are significantly more end-stopped than V1 layer 6 cells (Pack, Livingstone, Duffy, & Born, 2003), and how it is possible to have a solution to the aperture problem in area MT with no solution in V1 in the presence of driving feedback. In summary, while much research in the field focuses on how a laminar architecture can give rise to complicated spatiotemporal receptive fields to solve problems in the motion domain, we show that one can reframe motion integration as an emergent property of multiscale sampling achieved concurrently within laminae and across multiple visual areas. This work was supported in part by CELEST, a National Science Foundation Science of Learning Center; NSF SBE-0354378 and OMA-0835976; ONR N00014-11-1-0535; and AFOSR FA9550-12-1-0436.
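
    To make the aperture problem itself concrete: each local measurement of a moving edge yields one gradient constraint, Ix*u + Iy*v + It = 0, which fixes only the velocity component normal to the edge; pooling constraints with different orientations pins down the full velocity. The least-squares pooling in the sketch below is exactly the intersection-of-constraints computation that the model above argues MT does not need; it is shown only to state the ambiguity, with made-up orientations and velocity.

```python
import numpy as np

true_v = np.array([2.0, 1.0])     # assumed true image velocity (px/frame)

def edge_constraint(theta):
    """Gradient constraint from an edge whose gradient points along theta."""
    g = np.array([np.cos(theta), np.sin(theta)])
    It = -g @ true_v              # brightness constancy: g.v + It = 0
    return g, It

# One orientation leaves (u, v) underdetermined: any velocity with the same
# normal component satisfies the single equation. Several orientations
# together determine the velocity uniquely.
thetas = [0.2, 1.1, 2.0]
A = np.array([edge_constraint(t)[0] for t in thetas])
b = np.array([-edge_constraint(t)[1] for t in thetas])
print(np.linalg.lstsq(A, b, rcond=None)[0])   # ~ [2. 1.]
```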

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature-tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses. Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624).
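
    One ingredient of the model, sparse feature-tracking signals propagating across and capturing ambiguous aperture signals, can be caricatured in one dimension. The toy relaxation below is not the FORMOTION circuitry; it only shows high-confidence terminator signals (45 deg) diffusing inward and pulling a low-confidence, ambiguous interior estimate (90 deg) toward them. All parameters are invented.

```python
import numpy as np

# Toy 1D motion-capture caricature (illustrative; not the FORMOTION model).
n = 20
signal = np.full(n, 90.0)          # ambiguous aperture direction estimates
conf = np.full(n, 0.01)            # low confidence in the interior
signal[0] = signal[-1] = 45.0      # feature-tracking (terminator) signals
conf[0] = conf[-1] = 1.0           # trusted fully

estimate = signal.copy()
for _ in range(5000):
    # Each unit mixes its neighbours' average with its own local evidence,
    # weighted by confidence; confident signals dominate and spread.
    spread = 0.5 * (np.roll(estimate, 1) + np.roll(estimate, -1))
    estimate = conf * signal + (1.0 - conf) * spread

print(estimate.round(1))  # interior pulled from 90 deg toward 45 deg
```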