
    Human Heading Perception Based on Form and Motion Combination

    This paper presents a study on human perception of heading based on the integration of motion and form visual cues. The authors examine how age influences this process. Because the visual stimuli are in general uncertain, or in some cases even conflicting, the combination process is estimated with the well-known Normalized Conjunctive Consensus fusion rule, as well as with the more efficient Dezert-Smarandache Theory (DSmT) of plausible and paradoxical reasoning, more precisely the probabilistic Proportional Conflict Redistribution rule no. 5 (PCR5) defined within it. The main goal is to assess how well these fusion rules model consistent and adequate predictions about the behavior of both individuals and age-contingent groups of individuals.
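    As a concrete illustration of the two rules named in this abstract, the following minimal Python sketch (not the authors' code) combines two "Bayesian" belief assignments over exclusive heading hypotheses with the Normalized Conjunctive Consensus rule and with PCR5. The frame {left, ahead, right} and the input masses are invented for the example.

        def normalized_conjunctive_consensus(m1, m2):
            """Conjunctive combination of singleton masses, renormalized by 1 - conflict."""
            fused = {h: m1[h] * m2[h] for h in m1}      # agreement terms
            conflict = 1.0 - sum(fused.values())         # total conflicting mass
            return {h: v / (1.0 - conflict) for h, v in fused.items()}

        def pcr5(m1, m2):
            """PCR5: keep the conjunctive terms, then redistribute each partial
            conflict m1(X)*m2(Y), X != Y, back to X and Y proportionally."""
            fused = {h: m1[h] * m2[h] for h in m1}
            for x in m1:
                for y in m2:
                    if x == y:
                        continue
                    if m1[x] + m2[y] > 0:
                        fused[x] += m1[x] ** 2 * m2[y] / (m1[x] + m2[y])
                    if m2[x] + m1[y] > 0:
                        fused[x] += m2[x] ** 2 * m1[y] / (m2[x] + m1[y])
            return fused

        # Example: a form cue and a motion cue that partly disagree about heading.
        form_cue   = {"left": 0.6, "ahead": 0.3, "right": 0.1}
        motion_cue = {"left": 0.2, "ahead": 0.3, "right": 0.5}
        print(normalized_conjunctive_consensus(form_cue, motion_cue))
        print(pcr5(form_cue, motion_cue))

    Both outputs sum to one; the difference lies in how the conflicting mass (e.g. form saying "left" while motion says "right") is handled, which is the distinction the abstract draws between the two rules.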

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
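    The attractor/repeller interaction described in this abstract can be sketched as a simple angular dynamical system. The Python toy below is written in the spirit of behavioral-dynamics steering models, not the paper's neural circuit; the function heading_acceleration and all gains, decay constants, and the obstacle layout are made-up parameters.

        import math

        def heading_acceleration(heading, heading_rate, goal_dir, obstacles,
                                 k_goal=4.0, k_obs=6.0, damping=3.0, decay=1.5):
            """Angular acceleration of the heading (rad/s^2).
            obstacles: list of (direction, distance) pairs in the same frame as heading."""
            # the goal attracts: a damped restoring force toward the goal direction
            acc = -damping * heading_rate - k_goal * (heading - goal_dir)
            # each obstacle repels: the push is strongest when the obstacle lies near
            # the current heading and close by, and decays with angle and distance
            for obs_dir, obs_dist in obstacles:
                delta = heading - obs_dir
                acc += k_obs * delta * math.exp(-abs(delta)) * math.exp(-decay * obs_dist)
            return acc

        # Euler-integrate a few seconds of steering toward a goal straight ahead,
        # with one obstacle slightly to the left and 2 m away.
        heading, rate, dt = 0.0, 0.0, 0.01
        for _ in range(500):
            rate += dt * heading_acceleration(heading, rate, goal_dir=0.0,
                                              obstacles=[(-0.15, 2.0)])
            heading += dt * rate
        print(f"settled heading: {heading:.3f} rad")   # a small turn away from the obstacle

    The heading settles where the goal's attraction and the obstacle's repulsion balance, which is the qualitative route-selection behavior the abstract attributes to the cortical heading representation.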

    Perception and reconstruction of two-dimensional, simulated ego-motion trajectories from optic flow.

    A veridical percept of ego-motion is normally derived from a combination of visual, vestibular, and proprioceptive signals. In a previous study, blindfolded subjects could accurately perceive passively travelled straight or curved trajectories, provided that the orientation of the head remained constant along the trajectory. When they were turned (whole-body, head-fixed) relative to the trajectory, errors occurred. We ask here whether vision allows for better path perception in similar tasks, to correct or complement vestibular perception. Seated, stationary subjects wore a head-mounted display showing optic flow stimuli which simulated linear or curvilinear 2D trajectories over a horizontal ground plane. The observer's orientation was either fixed in space, fixed relative to the path, or changed relative to both. After presentation, subjects reproduced the perceived movement with a model vehicle, whose position and orientation were recorded. They tended to correctly perceive ego-rotation (yaw), but they perceived their orientation as fixed relative to the trajectory or (unlike in the vestibular study) to space. This caused trajectory misperception when body rotation was wrongly attributed to a rotation of the path. Visual perception was very similar to vestibular perception.
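    The misattribution described here (body yaw read as a rotation of the path) can be illustrated with a small dead-reckoning sketch in Python. The scenario, speed, and yaw profile below are invented; the point is only to show how the same per-frame inputs yield a straight reconstructed path under the veridical interpretation and a curved one when the yaw is assigned to the path.

        import math

        dt, speed, yaw_rate = 0.05, 1.0, 0.3      # s, m/s, rad/s
        steps = 100                               # 5 s of simulated travel

        # Ground truth: a straight world path while the body yaws to the left.
        # Per frame, optic flow supplies the yaw rate and the direction of
        # translation expressed in the body frame.
        body_heading = 0.0
        x_true = y_true = 0.0
        x_err = y_err = 0.0
        path_dir_err = 0.0                        # perceived path tangent under misattribution

        for _ in range(steps):
            body_heading += yaw_rate * dt
            trans_in_body = -body_heading         # straight world path => counter-rotates in body frame

            # veridical reconstruction: travel direction = body heading + body-frame direction
            world_dir = body_heading + trans_in_body
            x_true += speed * dt * math.cos(world_dir)
            y_true += speed * dt * math.sin(world_dir)

            # misattribution: orientation assumed fixed relative to the path,
            # so the yaw is read as a turn of the path itself
            path_dir_err += yaw_rate * dt
            x_err += speed * dt * math.cos(path_dir_err)
            y_err += speed * dt * math.sin(path_dir_err)

        print(f"veridical endpoint:     ({x_true:.2f}, {y_true:.2f})")   # straight ahead
        print(f"misattributed endpoint: ({x_err:.2f}, {y_err:.2f})")     # curves to the left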

    A Neural Model of Motion Processing and Visual Navigation by Cortical Area MST

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually-guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals, and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves, and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates. Defense Research Projects Agency (N00014-92-J-4015); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0409, N00014-95-1-0657, N00014-91-J-4100, N0014-94-I-0597); Air Force Office of Scientific Research (F49620-92-J-0334)
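    The role of the subtractive extraretinal signal mentioned in this abstract can be shown with a toy computation that is far simpler than the MSTd model itself: simulate optic flow for a translation plus a pursuit rotation, then estimate heading as the focus of expansion before and after subtracting the rotational flow predicted from an assumed extraretinal eye-velocity signal. The scene geometry, pursuit rate, and the least-squares readout (estimate_foe) are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 400
        x = rng.uniform(-1, 1, n)            # image coordinates (focal length = 1)
        y = rng.uniform(-1, 1, n)
        depth = rng.uniform(2, 10, n)        # random depths of scene points

        heading = np.array([0.2, 0.0])       # true focus of expansion (x0, y0)
        omega_y = 0.05                       # pursuit rotation about the vertical axis

        # translational flow (radial expansion away from the focus of expansion)
        u = (x - heading[0]) / depth
        v = (y - heading[1]) / depth
        # add rotational flow produced by the pursuit eye movement
        u += -omega_y * (1 + x**2)
        v += -omega_y * x * y

        def estimate_foe(u, v, x, y):
            """Least-squares focus of expansion: each flow vector (u, v) at (x, y)
            should point away from the FOE, i.e. u*(y - y0) - v*(x - x0) = 0."""
            A = np.column_stack([v, -u])
            b = v * x - u * y
            foe, *_ = np.linalg.lstsq(A, b, rcond=None)
            return foe

        # without correction, the rotation biases the heading estimate;
        # subtracting the extraretinally predicted rotational flow restores it
        print("raw estimate:      ", estimate_foe(u, v, x, y))
        u_corr = u + omega_y * (1 + x**2)
        v_corr = v + omega_y * x * y
        print("corrected estimate:", estimate_foe(u_corr, v_corr, x, y))

    The corrected estimate recovers the true heading, while the raw estimate is biased by the pursuit rotation, which is the situation in which the abstract says extraretinal signals are needed.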

    Visuo-vestibular interaction in the reconstruction of travelled trajectories

    We recently published a study of the reconstruction of passively travelled trajectories from optic flow. Perception was prone to illusions in a number of conditions, and not always veridical in the others. Part of the illusory reconstructed trajectories could be explained by assuming that subjects base their reconstruction on the ego-motion percept built during the stimulus' initial moments. In the current paper, we test this hypothesis using a novel paradigm: if the final reconstruction is governed by the initial percept, providing additional, extra-retinal information that modifies the initial percept should predictably alter the final reconstruction. The extra-retinal stimulus was tuned to supplement the information that was under-represented or ambiguous in the optic flow: the subjects were physically displaced or rotated at the onset of the visual stimulus. A highly asymmetric velocity profile (high acceleration, very low deceleration) was used. Subjects were required to guide an input device (in the form of a model vehicle; we measured position and orientation) along the perceived trajectory. We show for the first time that a vestibular stimulus of short duration can influence the perception of a much longer-lasting visual stimulus. Perception of the ego-motion translation component in the visual stimulus was improved by a linear physical displacement; perception of the ego-motion rotation component, by a physical rotation. This led to a more veridical reconstruction in some conditions, but to a less veridical reconstruction in others.

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs which are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)

    Optic flow based perception of two-dimensional trajectories and the effects of a single landmark.

    It is well established that human observers can detect their heading direction on a very short time scale on the basis of optic flow (500 ms; Hooge et al., 2000). Can they also integrate these perceptions over time to reconstruct a 2D trajectory simulated by the optic flow stimulus? We investigated the visual perception and reconstruction of passively travelled two-dimensional trajectories from optic flow with and without a single landmark. Stimuli in which translation and yaw are unyoked can give rise to illusory percepts; using a structured visual environment instead of only dots can improve perception of these stimuli. Does the additional visual and/or extra-retinal information provided by a single landmark have a similar, beneficial effect? Here, seated, stationary subjects wore a head-mounted display showing optic flow stimuli that simulated various manoeuvres: linear or curvilinear 2D trajectories over a horizontal ground plane. The simulated orientation was either fixed in space, fixed relative to the path, or changed relative to both. Afterwards, subjects reproduced the perceived manoeuvre with a model vehicle, whose position and orientation we recorded. Yaw was perceived correctly. Perception of the travelled path was less accurate, but still good when the simulated orientation was fixed in space or relative to the trajectory. When the amount of yaw was not equal to the rotation of the path, or was in the opposite direction, subjects still perceived their orientation as fixed relative to the trajectory. This caused trajectory misperception because yaw was wrongly attributed to a rotation of the path. A single landmark could improve perception.

    Interactions between motion and form processing in the human visual system

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing, as in the Gestalt principle of common fate: texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depend on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by “motion streaks” influence motion processing; motion sensitivity, apparent direction, and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.
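    The aperture problem and the integration step mentioned in this abstract can be shown in a few lines of Python: a single oriented sensor reports only the flow component normal to its preferred orientation, but the normal components from two differently oriented edges jointly determine the true 2D velocity (the classic intersection-of-constraints solution). The edge orientations and the velocity below are illustrative numbers.

        import math

        def normal_component(velocity, edge_angle_deg):
            """What a local sensor behind an aperture reports: the speed along the
            edge's normal, plus that normal direction."""
            normal = (math.cos(math.radians(edge_angle_deg + 90)),
                      math.sin(math.radians(edge_angle_deg + 90)))
            speed = velocity[0] * normal[0] + velocity[1] * normal[1]
            return speed, normal

        def intersect_constraints(m1, m2):
            """Solve n1 . v = s1, n2 . v = s2 for the true velocity v."""
            (s1, n1), (s2, n2) = m1, m2
            det = n1[0] * n2[1] - n1[1] * n2[0]
            vx = (s1 * n2[1] - s2 * n1[1]) / det
            vy = (n1[0] * s2 - n2[0] * s1) / det
            return vx, vy

        true_velocity = (3.0, 1.0)                   # pattern moves right and up
        m1 = normal_component(true_velocity, 0)      # horizontal edge
        m2 = normal_component(true_velocity, 60)     # oblique edge
        print(intersect_constraints(m1, m2))         # recovers approximately (3.0, 1.0)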