5,789 research outputs found

    Peripheral Processing Facilitates Optic Flow-Based Depth Perception

    Li J, Lindemann JP, Egelhaaf M. Peripheral Processing Facilitates Optic Flow-Based Depth Perception. Frontiers in Computational Neuroscience. 2016;10(10): 111. Flying insects, such as flies or bees, rely on consistent information about the depth structure of the environment when performing flight maneuvers in cluttered natural environments. These behaviors include avoiding collisions, approaching targets, and spatial navigation. Insects are thought to obtain depth information visually from the retinal image displacements ("optic flow") during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it is still an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially if the vast range of light intensities encountered in natural environments is taken into account. This question is addressed here by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input, as is accomplished under light-adapted conditions in the insect peripheral visual system.
The response characteristics of large monopolar cells (LMCs) resemble those of a band-pass filter, which strongly reduces the contrast dependency of EMDs, effectively enhancing the representation of the nearness of objects and, especially, of their contours. We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions.
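The correlation-type EMD referred to in this abstract is the classic Reichardt detector. A minimal sketch of one such detector is given below; the filter time constant, stimulus parameters, and function names are illustrative assumptions, not the model variants studied in the paper:

```python
import numpy as np

def reichardt_emd(left, right, tau=0.035, dt=0.001):
    """Correlation-type elementary motion detector (Reichardt model).

    `left` and `right` are luminance time series from two neighboring
    photoreceptors.  Each input is delayed by a first-order low-pass
    filter (time constant `tau`, an assumed value here) and multiplied
    with the undelayed signal of its neighbor; subtracting the two
    mirror-symmetric half-detectors yields a direction-selective output.
    """
    alpha = dt / (tau + dt)                  # discrete low-pass coefficient
    def lowpass(x):
        y = np.zeros_like(x, dtype=float)
        for i in range(1, len(x)):
            y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
        return y
    return lowpass(left) * right - lowpass(right) * left

# A sine grating drifting in the preferred direction reaches the left
# receptor first, so the time-averaged detector output is positive.
t = np.arange(0.0, 1.0, 0.001)
lag = 0.02                                   # inter-receptor travel time
left = np.sin(2 * np.pi * 5 * t)
right = np.sin(2 * np.pi * 5 * (t - lag))
assert reichardt_emd(left, right).mean() > 0
```

The sign of the time-averaged output flips with motion direction, which is the pattern- and contrast-dependence the abstract discusses: the raw output scales with stimulus contrast squared, which is why the peripheral preprocessing stages matter.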

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
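The paper's model is neural, but the underlying geometry can be illustrated with a standard least-squares focus-of-expansion (FoE) estimate. This is a simplified sketch under the pure-translation assumption, not the authors' model; the function names and the synthetic flow field are assumptions:

```python
import numpy as np

def estimate_heading(points, flows):
    """Least-squares focus-of-expansion estimate from optic flow.

    Under pure observer translation, every flow vector points radially
    away from the FoE, which marks the heading direction on the image.
    A flow vector (vx, vy) at image point (px, py) therefore constrains
    the FoE f to lie on the line through the point along the flow:
        vy * (fx - px) - vx * (fy - py) = 0.
    Stacking one such linear equation per vector and solving in the
    least-squares sense yields f, regardless of the unknown depths.
    """
    A = np.stack([flows[:, 1], -flows[:, 0]], axis=1)
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Synthetic radial flow field expanding from a true FoE at (2.0, -1.0);
# flow magnitude varies with (unknown) scene depth, as in a real scene.
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(200, 2))
true_foe = np.array([2.0, -1.0])
depths = rng.uniform(1, 5, size=(200, 1))
flows = (pts - true_foe) / depths
heading = estimate_heading(pts, flows)       # recovers roughly [2, -1]
```

The rotation sensitivity the abstract reports is visible in this geometry too: added rotational flow violates the radial-line constraint, so larger rotation rates pull the least-squares estimate away from the true heading.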

    Perceiving Collision Impacts in Alzheimer’s Disease: The Effect of Retinal Eccentricity on Optic Flow Deficits

    The present study explored whether the optic flow deficit in Alzheimer’s disease (AD) reported in the literature transfers to different types of optic flow, in particular, one that specifies collision impacts with upcoming surfaces, with a special focus on the effect of retinal eccentricity. Displays simulated observer movement over a ground plane toward obstacles lying in the observer’s path. Optical expansion was modulated by varying tau-dot. The visual field was masked either centrally (peripheral vision) or peripherally (central vision) using masks ranging from 10° to 30° in diameter in steps of 10°. Participants were asked to indicate whether their approach would result in collision or no collision with the obstacles. Results showed that AD patients’ sensitivity to tau-dot was severely compromised, not only for central vision but also for peripheral vision, compared to age- and education-matched elderly controls. The results demonstrate that AD patients’ optic flow deficit is not limited to radial optic flow but also includes the optical pattern engendered by tau-dot. Deterioration in the capacity to extract tau-dot to determine potential collisions, in conjunction with the inability to extract heading information from radial optic flow, would exacerbate AD patients’ difficulties in navigation and visuospatial orientation.
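Tau is the optical time-to-contact variable (distance over closing speed, available from relative optical expansion), and tau-dot is its time derivative. As a rough illustration of Lee's classic analysis, which this kind of display builds on (this is a generic sketch, not the study's stimulus code; names and the -0.5 margin usage are assumptions about the standard account):

```python
import numpy as np

def tau_dot_series(z, z_dot, dt):
    """Time-to-contact tau = -Z / Zdot for distance Z(t) and closing
    speed Zdot(t) < 0; tau-dot is estimated by finite differences."""
    tau = -z / z_dot
    return np.diff(tau) / dt

def predicts_collision(tau_dot, margin=-0.5):
    """Lee's tau-dot margin: maintaining tau-dot below -0.5 specifies
    contact at non-zero speed (a collision), while a tau-dot between
    -0.5 and 0 specifies a controlled stop short of the surface."""
    return tau_dot < margin

# Constant-velocity approach (no braking): tau falls linearly, so
# tau-dot is -1 throughout and a collision is specified.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
z = 10.0 - 5.0 * t                  # distance to the obstacle (m)
z_dot = np.full_like(t, -5.0)       # closing speed (m/s)
td = tau_dot_series(z, z_dot, dt)   # ~ -1 everywhere
assert predicts_collision(td.mean())
```

Varying tau-dot around the -0.5 margin, as the displays did, turns the collision/no-collision judgment into a sensitivity measure for this optical variable.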

    Spatial vision in insects is facilitated by shaping the dynamics of visual input through behavioral action

    Egelhaaf M, Boeddeker N, Kern R, Kurtz R, Lindemann JP. Spatial vision in insects is facilitated by shaping the dynamics of visual input through behavioral action. Frontiers in Neural Circuits. 2012;6:108. Insects such as flies or bees, with their miniature brains, are able to control highly aerobatic flight maneuvers and to solve spatial vision tasks, such as avoiding collisions with obstacles, landing on objects, or even localizing a previously learnt inconspicuous goal on the basis of environmental cues. With regard to solving such spatial tasks, these insects still outperform man-made autonomous flying systems. To accomplish their extraordinary performance, flies and bees have been shown to actively shape the dynamics of the image flow on their eyes ("optic flow") through characteristic behavioral actions. The neural processing of information about the spatial layout of the environment is greatly facilitated by segregating the rotational from the translational optic flow component through a saccadic flight and gaze strategy. This active vision strategy thus enables the nervous system to solve apparently complex spatial vision tasks in a particularly efficient and parsimonious way. The key idea of this review is that biological agents, such as flies or bees, acquire at least part of their strength as autonomous systems through active interactions with their environment and not by simply processing passively gained information about the world. These agent-environment interactions lead to adaptive behavior in surroundings of a wide range of complexity. Animals with even tiny brains, such as insects, are capable of performing extraordinarily well in their behavioral contexts by making optimal use of the closed action-perception loop.
Model simulations and robotic implementations show that the smart biological mechanisms of motion computation and visually guided flight control might help find technical solutions, for example, when designing micro air vehicles carrying a miniaturized, low-weight on-board processor.
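The payoff of the saccadic strategy described above is geometric: rotational flow is independent of distance, while translational flow scales with nearness. A minimal sketch for a panoramic eye on its equator (all names, the viewing geometry, and the assumption of known yaw rate and speed are illustrative, not the review's simulations):

```python
import numpy as np

def nearness_from_flow(flow, azimuth, yaw_rate, speed):
    """Recover relative nearness (1/distance) from panoramic optic flow.

    For an eye rotating at `yaw_rate` and translating at `speed` along
    azimuth 0, the angular flow measured at viewing direction `azimuth`
    on the eye equator is
        flow = yaw_rate + speed * sin(azimuth) / distance.
    Subtracting the depth-independent rotational term (behaviorally,
    confining rotations to saccades and analyzing the intersaccadic
    intervals) leaves a translational component proportional to
    nearness.  Directions near the motion axis (sin(azimuth) ~ 0)
    carry no depth information and are excluded here.
    """
    translational = flow - yaw_rate
    return translational / (speed * np.sin(azimuth))

# Synthetic example: objects at 2 m and 4 m seen at 90 and 30 degrees,
# with a residual yaw of 0.3 rad/s and a forward speed of 1 m/s.
az = np.array([np.pi / 2, np.pi / 6])
dist = np.array([2.0, 4.0])
flow = 0.3 + 1.0 * np.sin(az) / dist
nearness = nearness_from_flow(flow, az, yaw_rate=0.3, speed=1.0)
# nearness ~ 1/dist, i.e. roughly [0.5, 0.25]
```

Without the subtraction step, the rotational term swamps the depth-dependent signal, which is why segregating the two components behaviorally makes the downstream computation so parsimonious.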

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1°) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks; and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Neural mechanisms underlying specific visual tasks during self-motion

    Object movement detection during an observer's self-motion is critical for navigation. Given the many optical variables available, determining which of them are used would help reveal the strategies being employed. In this work, using functional magnetic resonance imaging (fMRI), we investigated the neural substrates underlying specific visual motion tasks, such as time to passage (TTP), depth parallax, and collision. Using a visual search paradigm implemented in MATLAB, we developed a psychophysical task to investigate how target characteristics (initial depth, initial eccentricity, and independent velocity), spatial attention, and heading estimation affect visual search, to better understand the mechanisms involved in object movement detection during self-motion. The fMRI analysis shows that: 1. The bilateral precentral sulcus (PreCS), postcentral sulcus (PostCS), and bilateral hMT are strongly activated during the TTP task. 2. Cortical regions along the dorsal visual processing pathway, including bilateral hMT, the superior parietal gyrus (SPG), PostCS, PreCS, and the superior frontal gyrus (SFG), play important roles in our depth perception test. 3. In the collision test, a similar activation pattern was found in normal controls and in stroke patients with visual deficits: the intraparietal sulcus (IPS), SPG, supplementary motor area (SMA), and premotor regions are highly activated. The psychophysical results in the visual search tasks indicate that targets located in the central visual field and targets placed closer to the observer are easier to detect; a looming distractor demands attention; the detrimental effect increases with increasing target eccentricity; and no preference was found among different heading directions in this test.
In summary, cortical regions along the visual motion processing pathway are highly involved in object movement detection during self-motion, and observers adopt flexible strategies when different optical cues are provided.

    Interruption of visually perceived forward motion in depth evokes a cortical activation shift from spatial to intentional motor regions

    Forward locomotion generates a radially expanding flow of visual motion which supports goal-directed walking. In stationary mode, wide-field visual presentation of optic flow stimuli evokes the illusion of forward self-motion. These effects illustrate an intimate relation between visual and motor processing. In the present fMRI study, we applied optic flow to identify distinct interfaces between circuitries implicated in vision and movement. The dorsal premotor cortex (PMd) was expected to contribute to wide-field forward motion flow (FFw), reflecting a pathway for externally triggered motor control. Medial prefrontal activation was expected to follow interrupted optic flow urging internally generated action. Data of 15 healthy subjects were analyzed with Statistical Parametric Mapping and confirmed this hypothesis. Right PMd activation was seen in FFw, together with activations of posterior parietal cortex, ventral V5, and the right fusiform gyrus. Conjunction analysis of the transition from wide to narrow forward flow and reversed wide-field flow revealed selective dorsal medial prefrontal activation. These findings point to equivalent visuomotor transformations in locomotion and goal-directed hand movement, in which parietal-premotor circuitry is crucially implicated. Possible implications of an activation shift from spatial to intentional motor regions for understanding freezing of gait in Parkinson's disease are discussed: impaired medial prefrontal function in Parkinson's disease may reflect an insufficient internal motor drive when visual support from optic flow is reduced at the entrance of a narrow corridor. (C) 2010 Elsevier B.V. All rights reserved.

    VECTION INDUCED BY ILLUSORY MINIATURIZATION OF MOVING PICTURE

    The effects of image miniaturization on visually induced self-motion perception (vection) were examined in a psychophysical experiment in which 11 observers participated. The original motion picture stimulus was filmed with a camera mounted on the front of a moving train. This kind of motion stimulus can be considered equivalent to the retinal flow we experience daily in natural visual environments, and is termed a "real-world stimulus". Saturation enhancement and defocus blur were applied to this original movie as two types of miniaturization transformation. The results of the psychophysical experiment revealed that the miniaturized movies can induce self-motion perception as strong as the original stimulus, although the naturalness of the image experienced under the miniaturized conditions was significantly reduced. The implications of using a real-world stimulus as a future vection inducer are discussed based on the results.
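The two transformations named in the abstract are standard "tilt-shift" miniaturization operations. The exact parameters used in the study are not given, so the following is only a generic sketch of the idea, with assumed function names, blur width, and band layout:

```python
import numpy as np

def boost_saturation(img, factor=1.5):
    """Scale each pixel's distance from its own gray value; a simple
    form of the saturation enhancement used for miniaturization.
    `img` is an H x W x 3 float array with values in [0, 1]."""
    gray = img.mean(axis=2, keepdims=True)
    return np.clip(gray + factor * (img - gray), 0.0, 1.0)

def tilt_shift_blur(img, kernel=9):
    """Defocus the top and bottom bands of the frame with a vertical
    box blur while keeping a sharp horizontal strip in the middle,
    mimicking the shallow depth of field of a close-up photograph
    of a miniature scene."""
    pad = kernel // 2
    padded = np.pad(img, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    blurred = np.mean(
        [padded[i:i + img.shape[0]] for i in range(kernel)], axis=0)
    h = img.shape[0]
    out = blurred.copy()
    out[h // 3: 2 * h // 3] = img[h // 3: 2 * h // 3]  # center stays sharp
    return out

frame = np.random.rand(90, 160, 3)          # stand-in for one video frame
mini = tilt_shift_blur(boost_saturation(frame))
```

Applied frame by frame to the train footage, these transforms change apparent scale while leaving the underlying retinal flow field intact, which is presumably why vection strength survived the manipulation.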

    Science of Facial Attractiveness
