
    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
    Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
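The goal-as-attractor, obstacle-as-repeller interaction described in the abstract can be caricatured as a one-dimensional heading dynamics. This is an illustrative sketch, not the model's cortical circuitry; the gains, the exponential falloff, and all parameter values here are assumptions:

```python
import math

def heading_update(phi, goal_dir, obstacle_dirs, dt=0.05,
                   k_goal=2.0, k_obs=1.5, decay=4.0):
    """One Euler step of a heading angle phi (radians).

    The goal pulls heading toward it (attractor); each obstacle pushes
    heading away, with influence falling off exponentially with angular
    distance (repeller).
    """
    dphi = -k_goal * (phi - goal_dir)  # attraction toward the goal
    for obs in obstacle_dirs:
        delta = phi - obs
        # repulsion: push away from the obstacle, weaker at larger angles
        dphi += k_obs * math.copysign(1.0, delta) * math.exp(-decay * abs(delta))
    return phi + dt * dphi

# Simulate: goal straight ahead (0 rad), one obstacle slightly left (-0.2 rad).
phi = 0.0
for _ in range(200):
    phi = heading_update(phi, goal_dir=0.0, obstacle_dirs=[-0.2])
# Heading settles slightly to the right of the goal, detouring around the obstacle.
```

The fixed point sits where goal attraction balances obstacle repulsion, which is how such dynamics produce smooth detour routes rather than explicit path planning.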

    A Neural Model of Motion Processing and Visual Navigation by Cortical Area MST

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually-guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals, and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves, and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, when retinal input alone is sufficient, and how heading judgments depend on scene layout and rotation rate.
    Defense Advanced Research Projects Agency (N00014-92-J-4015); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0409, N00014-95-1-0657, N00014-91-J-4100, N0014-94-I-0597); Air Force Office of Scientific Research (F49620-92-J-0334)
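The idea that heading can be read out from the most active of a population of spatially pooled motion-matching cells can be sketched in toy form. Everything below (a planar scene, cosine direction matching, a uniform grid of candidate cells) is a simplifying assumption for illustration, not the model's actual receptive-field structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random sample points on the image plane; pure forward translation
# produces flow radiating from a focus of expansion (FOE).
pts = rng.uniform(-1, 1, size=(500, 2))
foe_true = np.array([0.2, -0.1])
flow = pts - foe_true                       # radial flow away from the FOE

# Each model "cell" prefers one candidate FOE and pools, over space,
# the match between the local flow direction and its radial template.
candidates = np.stack(np.meshgrid(np.linspace(-1, 1, 41),
                                  np.linspace(-1, 1, 41)), -1).reshape(-1, 2)

def cell_response(c):
    template = pts - c                      # radial template centered on c
    num = (flow * template).sum(axis=1)
    den = (np.linalg.norm(flow, axis=1)
           * np.linalg.norm(template, axis=1) + 1e-9)
    return (num / den).sum()                # pooled cosine match

responses = np.array([cell_response(c) for c in candidates])
foe_est = candidates[responses.argmax()]    # heading = most active cell
```

The winning cell's preferred FOE recovers the true heading without any explicit template inversion, which is the flavor of readout the abstract describes.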

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction, and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
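The aperture problem mentioned in the abstract — a local detector can only measure the motion component normal to a contour — can be stated in a few lines (the particular velocity and edge orientation are arbitrary examples):

```python
import numpy as np

# Aperture problem: viewed through a small aperture, a moving edge only
# reveals the velocity component normal to its orientation.
true_velocity = np.array([1.0, 0.5])
edge_normal = np.array([0.0, 1.0])   # unit normal of a horizontal edge

# A local motion detector recovers only the projection onto the normal:
measured = (true_velocity @ edge_normal) * edge_normal
# The tangential component (1.0 here) is invisible locally, so the
# measured velocity is [0.0, 0.5], not [1.0, 0.5].
```

Resolving this ambiguity requires integrating local estimates across space, which is the role the feedback loops between model MT and MST play in the abstract's account.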

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
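Why rotation degrades naive heading estimation: under a small-field approximation, a rotation adds a quasi-uniform component to the flow field, shifting the flow singularity away from the true heading. A minimal sketch (the planar flow model, grid, and rotation magnitude are assumptions chosen for illustration):

```python
import numpy as np

# A grid of image positions and the expansion flow produced by pure
# translation toward a heading at the origin.
pts = np.mgrid[-1:1:21j, -1:1:21j].reshape(2, -1).T
heading = np.array([0.0, 0.0])
trans_flow = pts - heading                 # radial expansion about heading

# Small-field approximation: a slow rotation adds a near-uniform
# component to the retinal flow.
rot_flow = np.array([0.3, 0.0])
flow = trans_flow + rot_flow

# The zero-flow singularity no longer coincides with the heading:
singularity = pts[np.linalg.norm(flow, axis=1).argmin()]
# singularity is at (-0.3, 0.0), displaced from the true heading (0, 0).
```

A heading estimator that simply reads off the flow singularity would be biased by exactly this displacement, which is why the model compensates for rotation rather than ignoring it.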

    Psychophysical evidence for a radial motion bias in complex motion discrimination

    In a graded motion pattern task we measured observers’ ability to discriminate small changes in the global direction of complex motion patterns. Performance varied systematically as a function of the test motion (radial, circular, or spiral), with thresholds for radial motions significantly lower than for circular motions. Thresholds for spiral motions were intermediate. In all cases thresholds were lower than for direction discrimination using planar motions, and increased with removal of the radial speed gradient, consistent with the use of motion-pattern-specific mechanisms that integrate motion along complex trajectories. The radial motion bias and preference for speed gradients observed here are similar to the preference for expanding motions and speed gradients reported in cortical area MSTd, and may suggest the presence of comparable neural mechanisms in the human visual motion system.
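The radial–circular–spiral continuum used in graded motion pattern tasks can be parameterized by a single angular offset of each dot's direction from the radial direction. A hypothetical stimulus generator along these lines (dot count and positions are arbitrary, not the study's parameters):

```python
import numpy as np

def pattern_directions(pts, pattern_angle):
    """Dot motion directions for a graded motion pattern.

    pattern_angle rotates each dot's direction away from radial:
    0 -> expansion (radial), pi/2 -> counterclockwise rotation
    (circular); intermediate angles -> spirals.
    """
    radial = np.arctan2(pts[:, 1], pts[:, 0])   # direction away from center
    return radial + pattern_angle

pts = np.random.default_rng(1).uniform(-1, 1, (100, 2))
expansion = pattern_directions(pts, 0.0)
circular = pattern_directions(pts, np.pi / 2)
spiral = pattern_directions(pts, np.pi / 4)
```

Discrimination thresholds in such tasks correspond to detecting small changes in `pattern_angle`, so the radial bias means observers resolve smaller changes near 0 (expansion) than near pi/2 (rotation).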

    Neural models of inter-cortical networks in the primate visual system for navigation, attention, path perception, and static and kinetic figure-ground perception

    Vision provides the primary means by which many animals distinguish foreground objects from their background and coordinate locomotion through complex environments. The present thesis focuses on mechanisms within the visual system that afford figure-ground segregation and self-motion perception. These processes are modeled as emergent outcomes of dynamical interactions among neural populations in several brain areas. This dissertation specifies and simulates how border-ownership signals emerge in cortex, and how the medial superior temporal area (MSTd) represents path of travel and heading in the presence of independently moving objects (IMOs). Neurons in visual cortex that signal border-ownership, the perception that a border belongs to a figure and not its background, have been identified, but the underlying mechanisms have been unclear. A model is presented that demonstrates that inter-areal interactions across model visual areas V1-V2-V4 afford border-ownership signals similar to those reported in electrophysiology for visual displays containing figures defined by luminance contrast. Competition between model neurons with different receptive field sizes is crucial for reconciling the occlusion of one object by another. The model is extended to determine border-ownership when object borders are kinetically defined, and to detect the location and size of shapes despite the curvature of their boundary contours. Navigation in the real world requires humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature. In primates, MSTd has been implicated in heading perception. A model of V1, the medial temporal area (MT), and MSTd is developed herein that demonstrates how MSTd neurons can simultaneously encode path curvature and heading. Human judgments of heading are accurate in rigid environments, but are biased in the presence of IMOs. The model presented here explains the bias through recurrent connectivity in MSTd and avoids the use of differential motion detectors, which, although used in existing models to discount the motion of an IMO relative to its background, are not biologically plausible. Reported modulation of the MSTd population due to attention is explained through competitive dynamics between subpopulations responding to bottom-up and top-down signals.

    Eye velocity gain fields for visuo-motor coordinate transformations

    ‘Gain-field-like’ tuning behavior is characterized by a modulation of the neuronal response depending on a certain variable, without changing the actual receptive field characteristics in relation to another variable. Eye position gain fields were first observed in area 7a of the posterior parietal cortex (PPC), where visually responsive neurons are modulated by ocular position. Analysis of artificial neural networks has shown that this type of tuning function might comprise the neuronal substrate for coordinate transformations. In this work, neuronal activity in the dorsal medial superior temporal area (MSTd) has been analyzed with a focus on its involvement in oculomotor control. MSTd is part of the extrastriate visual cortex and located in the PPC. Lesion studies suggested a participation of this cortical area in the control of eye movements. Inactivation of MSTd severely impairs the optokinetic response (OKR), a reflex-like eye movement that compensates for motion of the whole visual scene. Using a novel, information-theory-based approach for neuronal data analysis, we were able to identify those visual and eye-movement-related signals that were most correlated with the mean rate of spiking activity in MSTd neurons during optokinetic stimulation. In a majority of neurons, firing rate was non-linearly related to a combination of retinal image velocity and eye velocity. The observed neuronal latency relative to these signals is in line with a system-level model of OKR in which an efference copy of the motor command signal is used to generate an internal estimate of the head-centered stimulus velocity. Tuning functions were obtained using a probabilistic approach. In most MSTd neurons these functions exhibited gain-field-like shapes, with eye velocity modulating the visual response in a multiplicative manner. Population analysis revealed a large diversity of tuning forms, including asymmetric and non-separable functions. The distribution of gain fields was almost identical to the predictions of a neural network model trained to perform the summation of image and eye velocity. These findings therefore strongly support the hypothesis of MSTd’s participation in the OKR control system by implementing the transformation from retinal image velocity to an estimate of stimulus velocity. In this sense, eye velocity gain fields constitute an intermediate step in transforming the eye-centered visual motion signal into a head-centered one.
    Another aspect addressed in this work was the comparison of the irregularity of MSTd spiking activity during the optokinetic response with the behavior during pure visual stimulation. The goal of this study was an evaluation of potential neuronal mechanisms underlying the observed gain field behavior. We found that both inter- and intra-trial variability decreased with increasing retinal image velocity, but increased with eye velocity. This observation argues against a symmetrical integration of driving and modulating inputs. Instead, we propose an architecture in which multiplicative gain modulation is achieved by a simultaneous increase of excitatory and inhibitory background synaptic input. A conductance-based single-compartment model neuron was able to simultaneously reproduce realistic gain modulation and the observed stimulus dependence of neural variability. In summary, this work leads to improved knowledge about MSTd’s role in visuomotor transformation by analyzing functional and mechanistic aspects of eye velocity gain fields at the systems, network, and neuronal levels.
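The multiplicative eye-velocity gain field described above can be sketched as a Gaussian velocity tuning curve scaled by a linear gain. All tuning parameters below are invented for illustration; the defining property the sketch demonstrates is that eye velocity changes response amplitude without shifting the preferred image velocity:

```python
import numpy as np

def mstd_rate(image_vel, eye_vel, pref_vel=10.0, sigma=8.0,
              gain_slope=0.05, baseline=5.0, peak=40.0):
    """Firing rate (spikes/s) with an eye-velocity gain field.

    A Gaussian tuning to retinal image velocity (deg/s) is scaled
    multiplicatively by a linear function of eye velocity, so eye
    velocity modulates amplitude but not the preferred image velocity.
    """
    tuning = np.exp(-0.5 * ((image_vel - pref_vel) / sigma) ** 2)
    gain = np.clip(1.0 + gain_slope * eye_vel, 0.0, None)  # keep gain >= 0
    return baseline + peak * gain * tuning

v = np.linspace(-30, 30, 121)
low = mstd_rate(v, eye_vel=-10.0)    # tuning curve at low gain
high = mstd_rate(v, eye_vel=10.0)    # same tuning, amplified
# Both curves peak at the same preferred image velocity; only the
# amplitude differs -- the signature of a multiplicative gain field.
```

Summing such gain-modulated responses across a population with diverse preferred velocities is one way a downstream readout could approximate image velocity plus eye velocity, i.e., head-centered stimulus velocity.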