
    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1°/s do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
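
    The core computation in this abstract, recovering heading from optic flow, can be illustrated outside the neural circuitry: under pure observer translation, heading corresponds to the focus of expansion (FoE) of the flow field. A minimal numpy sketch of that geometric readout (an illustration only, not the paper's retina-V1-MT+-MSTd circuit; the function name and synthetic data are assumptions):

```python
# Minimal sketch (not the paper's neural model): recover heading as the
# focus of expansion (FoE) of a purely translational optic flow field.
# Under pure translation every flow vector points away from the FoE, so the
# FoE is the least-squares intersection of the lines through each point
# along its flow vector.
import numpy as np

def estimate_foe(points, flows):
    """points: (N, 2) image positions; flows: (N, 2) flow vectors."""
    # n_i is perpendicular to the flow at point i; the FoE f satisfies
    # n_i . (f - p_i) = 0 for every i under noise-free pure translation.
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)
    A = n.T @ n                              # normal equations: A f = b
    b = n.T @ np.sum(n * points, axis=1)
    return np.linalg.solve(A, b)

# Synthetic expansion field with its FoE (heading point) at (0.3, -0.2).
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 2))
true_foe = np.array([0.3, -0.2])
flo = (pts - true_foe) + rng.normal(scale=0.01, size=pts.shape)  # noisy radial flow
print(estimate_foe(pts, flo))                # ~ [0.3, -0.2]
```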

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
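
    The attractor/repeller interaction described here can be caricatured as a one-dimensional dynamical system over heading, in the spirit of behavioral-dynamics steering models (e.g., Fajen & Warren): the goal direction attracts heading while obstacle directions repel it. A hedged sketch, not the paper's cortical model; all parameter names and values are assumptions:

```python
# Toy attractor/repeller steering dynamics over heading angle phi (radians):
# the goal direction pulls phi toward it, obstacles push phi away, with
# obstacle influence falling off with angular separation and distance.
import math

def heading_acceleration(phi, dphi, goal_dir, obstacles,
                         b=3.25, k_g=7.5, k_o=198.0, c1=6.5, c2=0.8):
    """obstacles: list of (direction, distance) pairs in radians/meters."""
    acc = -b * dphi - k_g * (phi - goal_dir)             # damping + goal attractor
    for psi_o, d_o in obstacles:
        acc += k_o * (phi - psi_o) * math.exp(-c1 * abs(phi - psi_o)) \
                                   * math.exp(-c2 * d_o)  # obstacle repeller
    return acc

# Euler-integrate heading with the goal straight ahead and an obstacle
# 0.2 rad to the left at 3 m: heading is deflected rightward, away from
# the obstacle, toward a compromise direction.
phi, dphi, dt = 0.0, 0.0, 0.01
for _ in range(600):
    dphi += heading_acceleration(phi, dphi, 0.0, [(-0.2, 3.0)]) * dt
    phi += dphi * dt
print(round(phi, 3))
```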

    A Neural Model of Motion Processing and Visual Navigation by Cortical Area MST

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually-guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals, and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves, and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates. Defense Advanced Research Projects Agency (N00014-92-J-4015); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0409, N00014-95-1-0657, N00014-91-J-4100, N00014-94-1-0597); Air Force Office of Scientific Research (F49620-92-J-0334)
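
    Two of the mechanisms named in this abstract, Gaussian spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals, can be sketched in a few lines. This toy pools local flow into MSTd-like cells and reads heading from the most active cell; the grid resolution, pooling width, and argmax readout are illustrative assumptions, not the model's actual circuitry:

```python
# Toy MSTd-like cells: each cell pools outward (radial) motion around its
# preferred center with a Gaussian spatial weight; an eye-rotation estimate
# can be subtracted from the flow before pooling (extraretinal signal).
import numpy as np

def mstd_responses(points, flows, centers, eye_rotation_flow=None, sigma=0.5):
    if eye_rotation_flow is not None:          # subtractive extraretinal signal
        flows = flows - eye_rotation_flow
    responses = []
    for c in centers:
        radial = points - c                    # outward direction at each point
        radial /= np.linalg.norm(radial, axis=1, keepdims=True) + 1e-9
        match = np.sum(flows * radial, axis=1) # alignment with expansion from c
        weight = np.exp(-np.sum((points - c) ** 2, axis=1) / (2 * sigma ** 2))
        responses.append(np.sum(weight * match))
    return np.array(responses)

# Heading read out from the most active cell on a coarse grid of centers.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, (300, 2))
flo = pts - np.array([0.4, 0.0])               # expansion with FoE at (0.4, 0)
grid = [np.array([x, y]) for x in np.linspace(-1, 1, 9)
                         for y in np.linspace(-1, 1, 9)]
r = mstd_responses(pts, flo, grid)
print(grid[int(np.argmax(r))])                 # a grid center near (0.4, 0)
```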

    Neural models of inter-cortical networks in the primate visual system for navigation, attention, path perception, and static and kinetic figure-ground perception

    Vision provides the primary means by which many animals distinguish foreground objects from their background and coordinate locomotion through complex environments. The present thesis focuses on mechanisms within the visual system that afford figure-ground segregation and self-motion perception. These processes are modeled as emergent outcomes of dynamical interactions among neural populations in several brain areas. This dissertation specifies and simulates how border-ownership signals emerge in cortex, and how the medial superior temporal area (MSTd) represents path of travel and heading, in the presence of independently moving objects (IMOs). Neurons in visual cortex that signal border-ownership, the perception that a border belongs to a figure and not its background, have been identified, but the underlying mechanisms have been unclear. A model is presented that demonstrates that inter-areal interactions across model visual areas V1-V2-V4 afford border-ownership signals similar to those reported in electrophysiology for visual displays containing figures defined by luminance contrast. Competition between model neurons with different receptive field sizes is crucial for reconciling the occlusion of one object by another. The model is extended to determine border-ownership when object borders are kinetically-defined, and to detect the location and size of shapes, despite the curvature of their boundary contours. Navigation in the real world requires humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature. In primates, MSTd has been implicated in heading perception. A model of V1, middle temporal area (MT), and MSTd is developed herein that demonstrates how MSTd neurons can simultaneously encode path curvature and heading. Human judgments of heading are accurate in rigid environments, but are biased in the presence of IMOs. The model presented here explains the bias through recurrent connectivity in MSTd and avoids the use of differential motion detectors which, although used in existing models to discount the motion of an IMO relative to its background, are not biologically plausible. Reported modulation of the MSTd population due to attention is explained through competitive dynamics between subpopulations responding to bottom-up and top-down signals.
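
    The competitive dynamics invoked in this abstract, both for border-ownership and for the attention-modulated MSTd subpopulations, can be illustrated with a minimal mutual-inhibition circuit: two cells preferring opposite owner sides receive contextual evidence and suppress each other until one wins. A caricature under stated assumptions, not the V1-V2-V4 model itself; inputs, weights, and time constants are invented for illustration:

```python
# Toy border-ownership competition: two cells at the same edge prefer
# opposite owner sides, receive contextual evidence, and mutually inhibit
# each other (leaky rectified dynamics) until one side wins.
import numpy as np

def compete(evidence_left, evidence_right, w_inh=1.2, dt=0.05, steps=400):
    a = np.zeros(2)                      # [left-owner, right-owner] activity
    for _ in range(steps):
        drive = np.array([evidence_left, evidence_right])
        da = -a + np.maximum(drive - w_inh * a[::-1], 0)  # leak + mutual inhibition
        a += dt * da
    return a

# A figure on the left of the edge: contextual evidence favors left
# ownership, so the left-preferring cell wins the competition (~[1, 0]).
print(compete(evidence_left=1.0, evidence_right=0.4))
```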

    Distributed Representation of Curvilinear Self-Motion in the Macaque Parietal Cortex

    Information about translations and rotations of the body is critical for complex self-motion perception during spatial navigation. However, little is known about the nature and function of their convergence in the cortex. We measured neural activity in multiple areas of the macaque parietal cortex in response to three different types of body motion applied through a motion platform: translation, rotation, and combined stimuli, i.e., curvilinear motion. We found a continuous representation of motion types in each area. In contrast to single-modality cells preferring either translation-only or rotation-only stimuli, convergent cells tend to be optimally tuned to curvilinear motion. A weighted summation model captured the data well, suggesting that translation and rotation signals are integrated subadditively in the cortex. Interestingly, variation in the activity of convergent cells parallels behavioral outputs reported in human psychophysical experiments. We conclude that the representation of curvilinear self-motion is widely distributed in the primate sensory cortex.
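
    The weighted summation model is straightforward to express: a convergent cell's curvilinear response is fit as a weighted sum of its translation-only and rotation-only responses, and subadditive integration shows up as fitted weights summing to less than one. A sketch on synthetic data; the weights and noise level are assumptions, not the paper's fits:

```python
# Fit r_curvilinear ~ w_t * r_translation + w_r * r_rotation by least
# squares; w_t + w_r < 1 indicates subadditive integration.
import numpy as np

rng = np.random.default_rng(2)
r_trans = rng.uniform(5, 30, 50)          # responses to translation alone
r_rot = rng.uniform(5, 30, 50)            # responses to rotation alone
# Simulated curvilinear responses with subadditive weights (sum < 1).
r_curve = 0.55 * r_trans + 0.35 * r_rot + rng.normal(0, 1.0, 50)

X = np.column_stack([r_trans, r_rot])
(w_t, w_r), *_ = np.linalg.lstsq(X, r_curve, rcond=None)
print(w_t, w_r, w_t + w_r < 1.0)          # recovers ~0.55, ~0.35, True
```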

    Neural representation of complex motion in the primate cortex

    This dissertation is concerned with how information about the environment is represented by neural activity in the primate brain. More specifically, it contains several studies that explore the representation of visual motion in the brains of humans and nonhuman primates through behavioral and physiological measures. The majority of this work is focused on the activity of individual neurons in the medial superior temporal area (MST), a high-level, extrastriate area of the primate visual cortex. The first two studies provide an extensive review of the scientific literature on area MST. The area's prominent role at the intersection of low-level, bottom-up sensory processing and high-level, top-down mechanisms is highlighted. Furthermore, a specific article on how information about self-motion and object motion can be decoded from a population of MSTd neurons is reviewed in more detail. The third study describes a published and annotated dataset of MST neurons' responses to a series of different motion stimuli. This dataset is analyzed using a variety of analysis approaches in the fourth study. Classical tuning curve approaches confirm that MST neurons have large but well-defined spatial receptive fields and are independently tuned for linear and spiral motion, as well as speed. We also confirm that the tuning for spiral motion is position invariant in a majority of MST neurons. A bias-free characterization of receptive field profiles based on a new stimulus that generates smooth, complex motion patterns turned out to be predictive of some of the tuning properties of MST neurons, but was generally less informative than similar approaches have been in earlier visual areas. The fifth study introduces a new motion stimulus that consists of hexagonal segments and presents an optimization algorithm for an adaptive online analysis of neurophysiological recordings. Preliminary physiological data and simulations show these tools to have strong potential for characterizing the response functions of MST neurons. The final study describes a behavioral experiment with human subjects that explores how different stimulus features, such as size and contrast, affect motion perception, and discusses what conclusions can be drawn from that about the representation of visual motion in the human brain. Together these studies highlight the visual motion processing pathway of the primate brain as an excellent model system for studying more complex relations between neural activity and external stimuli. Area MST in particular emerges as a gateway between perception, cognition, and action planning.
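
    The spiral-motion tuning mentioned here refers to stimuli parameterized by a single spiral angle that blends expansion and rotation. A short sketch of how such a flow field is generated; the field size, speed, and function name are illustrative assumptions:

```python
# "Spiral space" stimulus: each dot's velocity mixes an expansion component
# and a rotation component. spiral_angle = 0 gives pure expansion,
# pi/2 gives pure rotation, intermediate angles give spirals.
import numpy as np

def spiral_flow(points, spiral_angle, speed=1.0):
    """points: (N, 2) dot positions relative to the motion center."""
    radial = points / (np.linalg.norm(points, axis=1, keepdims=True) + 1e-9)
    rotational = np.stack([-radial[:, 1], radial[:, 0]], axis=1)  # 90 deg turn
    return speed * (np.cos(spiral_angle) * radial
                    + np.sin(spiral_angle) * rotational)

rng = np.random.default_rng(3)
dots = rng.uniform(-1, 1, (100, 2))
v = spiral_flow(dots, np.pi / 4)     # 45 deg: outward counter-clockwise spiral
print(np.linalg.norm(v, axis=1).mean())   # every dot moves at ~speed
```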

    Vestibular System and Self-Motion

    Detection of the state of self-motion, such as the instantaneous heading direction, the traveled trajectory, and traveled distance or time, is critical for efficient spatial navigation. Numerous psychophysical studies have indicated that the vestibular system, originating from the otolith organs and semicircular canals in our inner ears, provides robust signals for different aspects of self-motion perception. In addition, vestibular signals interact with other sensory signals such as visual optic flow to facilitate natural navigation. These behavioral results are consistent with recent findings in neurophysiological studies. In particular, vestibular activity in response to translation or rotation of the head/body in darkness has been revealed in a growing number of cortical regions, many of which are also sensitive to visual motion stimuli. The temporal dynamics of vestibular activity in the central nervous system vary widely, ranging from acceleration-dominant to velocity-dominant. Signals with different temporal dynamics may be decoded by higher-level areas for different functions. For example, the acceleration signals during translation of the body in the horizontal plane may be used by the brain to estimate heading directions. Although translation and rotation signals arise from independent peripheral organs, that is, the otoliths and canals, respectively, they frequently converge onto single neurons in the central nervous system, including both the brainstem and the cerebral cortex. The convergent neurons typically exhibit stronger responses during a combined curved motion trajectory, which may serve as the neural correlate for complex path perception. During spatial navigation, traveled distance or time may be encoded by different populations of neurons in multiple regions including the hippocampal-entorhinal system, posterior parietal cortex, and frontal cortex.
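
    The acceleration-versus-velocity dynamics described above can be illustrated with a leaky integrator: otolith afferents carry roughly acceleration-like signals, and leaky temporal integration downstream yields a more velocity-dominant response. A sketch with an assumed time constant, not a fit to the physiology:

```python
# Leaky integration turns an acceleration-dominant signal into a more
# velocity-dominant one: dv/dt = a(t) - v(t)/tau.
import numpy as np

dt, tau = 0.01, 0.5                      # 10 ms steps, 500 ms time constant
t = np.arange(0, 2, dt)
accel = np.where((t > 0.5) & (t < 1.0), 1.0, 0.0)  # brief forward acceleration

v_like = np.zeros_like(accel)            # leaky-integrated (velocity-like) signal
for i in range(1, len(t)):
    v_like[i] = v_like[i - 1] + dt * (accel[i] - v_like[i - 1] / tau)

# accel is nonzero only during the pulse; v_like builds up during it and
# decays afterwards, resembling velocity-dominant cortical profiles.
print(accel.max(), v_like.max().round(3))
```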

    The analysis of complex motion patterns in primate cortex

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1995. Includes bibliographical references. By Bard J. Geesaman.

    Integration of motion and form cues for the perception of self-motion in the human brain

    When moving around in the world, the human visual system uses both motion and form information to estimate the direction of self-motion (i.e., heading). However, little is known about the cortical areas in charge of this task. This brain-imaging study addressed this question by using visual stimuli consisting of randomly distributed dot pairs oriented toward one locus on a screen (the form-defined focus of expansion (FoE)) but moving away from a different locus (the motion-defined FoE) to simulate observer translation. We first fixed the motion-defined FoE location and shifted the form-defined FoE location. We then made the locations of the motion- and the form-defined FoEs either congruent (at the same location in the display) or incongruent (on opposite sides of the display). The motion- or form-defined FoE shift was the same in the two types of stimuli, but the perceived heading direction shifted for the congruent but not the incongruent stimuli. Participants (both sexes) made a task-irrelevant (contrast discrimination) judgment during scanning. Searchlight and region-of-interest-based multi-voxel pattern analysis (MVPA) revealed that early visual areas V1, V2, and V3 responded to either the motion- or the form-defined FoE shift. After V3, only the dorsal areas V3A and V3B/KO responded to such shifts. Furthermore, area V3B/KO showed significantly higher decoding accuracy for the congruent than for the incongruent stimuli. Our results provide direct evidence that area V3B/KO does not simply respond to motion and form cues but integrates these two cues for the perception of heading. Human survival relies on accurate perception of self-motion. The visual system uses both motion (optic flow) and form cues to perceive the direction of self-motion (heading). Although the human brain areas for processing optic flow and form structure are well identified, the areas responsible for integrating these two cues for the perception of self-motion remain unknown. We conducted fMRI experiments and used MVPA to find human brain areas that can decode the shift in heading specified by each cue alone and by the two cues combined. We found that motion and form information are first processed in the early visual areas and are then likely integrated in the higher dorsal area V3B/KO for the final estimation of heading.
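
    The decoding logic behind the MVPA result can be sketched with a linear classifier cross-validated on voxel patterns: above-chance accuracy for a condition contrast indicates the region carries information about it. Synthetic patterns stand in for the fMRI data here; the ROI size, effect size, and labels are assumptions:

```python
# Decode a stimulus condition (FoE shifted left vs. right) from voxel
# patterns with a cross-validated linear classifier, MVPA-style.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 100
labels = np.repeat([0, 1], n_trials // 2)        # FoE shifted left vs. right
signal = rng.normal(0, 1, n_voxels)              # a fixed voxel pattern per class
X = rng.normal(0, 1, (n_trials, n_voxels)) \
    + np.outer(np.where(labels == 0, -0.3, 0.3), signal)

acc = cross_val_score(LinearSVC(), X, labels, cv=5)
print(acc.mean())   # above-chance accuracy means the ROI carries FoE information
```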

    Making a stronger case for comparative research to investigate the behavioral and neurological bases of three-dimensional navigation

    The rich diversity of avian natural history provides exciting possibilities for comparative research aimed at understanding three-dimensional navigation. We propose some hypotheses relating differences in natural history to potential behavioral and neurological adaptations possessed by contrasting bird species. This comparative approach may offer unique insights into some of the important questions raised by Jeffery et al.