
    Filling-in the Forms: Surface and Boundary Interactions in Visual Cortex

    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-95-1-0657)

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses. Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624)
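
    The distinction between ambiguous and feature-tracking motion signals reflects the classic aperture problem: a detector viewing a straight contour through a small aperture measures only the velocity component normal to the contour, while features such as corners carry the true 2D velocity. A minimal sketch of that geometry, with made-up orientations and velocity rather than the model's equations:

```python
import numpy as np

# The aperture problem behind "ambiguous" motion signals: a detector on a
# straight contour measures only the normal component of velocity, so two
# differently oriented edges of the same object report different motions.
# Orientations and the velocity are made up for illustration.

v_true = np.array([3.0, 1.0])                      # true object velocity

def normal_component(v, theta):
    """Velocity component perpendicular to a contour oriented at theta."""
    n = np.array([np.sin(theta), -np.cos(theta)])  # unit normal to the contour
    return np.dot(v, n) * n

m1 = normal_component(v_true, np.deg2rad(30))      # edge 1's local measurement
m2 = normal_component(v_true, np.deg2rad(100))     # edge 2's local measurement

# Intersection of constraints: each measurement says n_i . v = |m_i|;
# combining the two constraint lines recovers the full velocity.
N = np.array([m1 / np.linalg.norm(m1), m2 / np.linalg.norm(m2)])
b = np.array([np.linalg.norm(m1), np.linalg.norm(m2)])
print(np.linalg.solve(N, b))                       # -> [3. 1.], the true velocity
```

    In the model's terms, the sparse feature-tracking signals supply the unambiguous estimate that the anisotropic MT-MST grouping then propagates across space to capture the ambiguous edge signals.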

    A topological solution to object segmentation and tracking

    The world is composed of objects, the ground, and the sky. Visual perception of objects requires solving two fundamental challenges: segmenting visual input into discrete units, and tracking the identities of these units despite appearance changes due to object deformation, changing perspective, and dynamic occlusion. Current computer vision methods for segmentation and tracking that approach human performance all require learning, raising the question: can objects be segmented and tracked without learning? Here, we show that the mathematical structure of light rays reflected from environment surfaces yields a natural representation of persistent surfaces, and that this surface representation provides a solution to both the segmentation and tracking problems. We describe how to generate this surface representation from continuous visual input, and demonstrate that our approach can segment and invariantly track objects in cluttered synthetic video despite severe appearance changes, without requiring learning. Comment: 21 pages, 6 main figures, 3 supplemental figures, and supplementary material containing mathematical proof
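
    As a toy illustration of learning-free tracking by persistence rather than appearance, the sketch below segments each frame into connected regions and carries identities forward by spatial overlap. It is a hypothetical stand-in for the paper's light-ray surface representation, not the authors' construction:

```python
import numpy as np
from scipy import ndimage

def segment(mask):
    """Connected components of a binary mask serve as candidate surfaces."""
    labels, n = ndimage.label(mask)
    return labels, n

def track(prev_labels, prev_ids, labels, n):
    """Each region inherits the id of the previous-frame region it overlaps
    most; regions with no overlap are treated as newly appeared surfaces."""
    ids = {}
    next_id = max(prev_ids.values(), default=-1) + 1
    for r in range(1, n + 1):
        overlap = np.bincount(prev_labels[labels == r])
        overlap[0] = 0                        # ignore background overlap
        if overlap.max() > 0:
            ids[r] = prev_ids[int(overlap.argmax())]
        else:
            ids[r] = next_id                  # a new surface appeared
            next_id += 1
    return ids

# Two frames of a deforming/translating object keep the same identity.
f0 = np.zeros((8, 8), bool); f0[1:4, 1:4] = True
f1 = np.zeros((8, 8), bool); f1[2:5, 2:5] = True
l0, n0 = segment(f0); ids0 = {r: r - 1 for r in range(1, n0 + 1)}
l1, n1 = segment(f1)
print(track(l0, ids0, l1, n1))                # {1: 0}: identity persists
```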

    From Computational Theory to Psychology and Neurophysiology -- a case study from vision

    This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643. The CNS needs to be understood at four nearly independent levels of description: (1) that at which the nature of a computation is expressed; (2) that at which the algorithms that implement a computation are characterised; (3) that at which an algorithm is committed to particular mechanisms; and (4) that at which the mechanisms are realised in hardware. In general, the nature of a computation is determined by the problem to be solved, the mechanisms that are used depend upon the available hardware, and the particular algorithms chosen depend on the problem and on the available mechanisms. Examples are given of theories at each level from current research in vision, and a brief review of the immediate prospects for the field is given. MIT Artificial Intelligence Laboratory; Department of Defense Advanced Research Projects Agency

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
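
    The attractor/repeller interaction can be caricatured with behavioral-dynamics-style equations of the kind introduced by Fajen and Warren, which models of this type implement neurally. All gains and decay constants below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

# Goal bearing attracts heading; obstacle bearings repel it, with repulsion
# fading for headings far from the obstacle and for distant obstacles.

def heading_rate(phi, psi_g, psi_o, d_o, k_g=2.0, k_o=3.0, c=2.5, d0=4.0):
    attract = -k_g * (phi - psi_g)                 # goal attracts heading
    repel = (k_o * (phi - psi_o)
             * np.exp(-c * abs(phi - psi_o))       # only nearby headings repelled
             * np.exp(-d_o / d0))                  # weaker for distant obstacles
    return attract + repel

pos, phi, speed, dt = np.array([0.0, 0.0]), 0.0, 1.0, 0.05
goal, obstacle = np.array([0.5, 10.0]), np.array([0.2, 5.0])

for _ in range(220):
    psi_g = np.arctan2(*(goal - pos))      # goal bearing (angle from the +y axis)
    psi_o = np.arctan2(*(obstacle - pos))  # obstacle bearing
    d_o = np.linalg.norm(obstacle - pos)
    phi += heading_rate(phi, psi_g, psi_o, d_o) * dt
    pos = pos + speed * dt * np.array([np.sin(phi), np.cos(phi)])

print(np.round(pos, 2))   # the walker ends near the goal after detouring
```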

    Acetylcholine neuromodulation in normal and abnormal learning and memory: vigilance control in waking, sleep, autism, amnesia, and Alzheimer's disease

    This article provides a unified mechanistic neural explanation of how learning, recognition, and cognition break down during Alzheimer's disease, medial temporal amnesia, and autism. It also clarifies why there are often sleep disturbances during these disorders. A key mechanism is how acetylcholine modulates vigilance control in cortical layer
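
    The vigilance mechanism referred to is the match criterion of Adaptive Resonance Theory (ART): an active category is accepted only if its prototype overlaps the input closely enough, and raising vigilance forces finer categories. A minimal sketch with illustrative vectors and thresholds, with acetylcholine cast, per the article, as the signal that raises or lowers rho:

```python
import numpy as np

# ART match criterion: a category is accepted only if the overlap between
# input and prototype, relative to the input's size, exceeds the vigilance
# rho. Vectors and thresholds here are illustrative.

def matches(inp, prototype, rho):
    overlap = np.minimum(inp, prototype).sum()   # fuzzy-AND overlap
    return overlap / inp.sum() >= rho

I = np.array([1.0, 1.0, 0.0, 1.0])               # input pattern
w = np.array([1.0, 0.0, 0.0, 1.0])               # learned category prototype

print(matches(I, w, rho=0.5))   # True: low vigilance tolerates a coarse match
print(matches(I, w, rho=0.8))   # False: raised vigilance triggers search/reset
```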

    How does binocular rivalry emerge from cortical mechanisms of 3-D vision?

    Under natural viewing conditions, a single depthful percept of the world is consciously seen. When dissimilar images are presented to corresponding regions of the two eyes, binocular rivalry may occur, during which the brain consciously perceives alternating percepts through time. How do the same brain mechanisms that generate a single depthful percept of the world also cause perceptual bistability, notably binocular rivalry? What properties of brain representations correspond to consciously seen percepts? A laminar cortical model of how cortical areas V1, V2, and V4 generate depthful percepts is developed to explain and quantitatively simulate binocular rivalry data. The model proposes how mechanisms of cortical development, perceptual grouping, and figure-ground perception lead to single and rivalrous percepts. Quantitative model simulations of perceptual grouping circuits demonstrate influences of contrast changes that are synchronized with switches in the dominant eye percept, the gamma distribution of dominant phase durations, piecemeal percepts, and the coexistence of eye-based and stimulus-based rivalry. The model as a whole also qualitatively explains data about the involvement of multiple brain regions in rivalry, the effects of object attention on switching between superimposed transparent surfaces, monocular rivalry, Marroquin patterns, the spread of suppression during binocular rivalry, binocular summation, fusion of dichoptically presented orthogonal gratings, general suppression during binocular rivalry, and pattern rivalry. These data explanations follow from model brain mechanisms that assure non-rivalrous conscious percepts.
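
    The alternations themselves can be caricatured by two populations coupled through reciprocal inhibition with slow adaptation: a deliberately simplified, Wilson-style stand-in for the paper's laminar V1-V2-V4 circuitry, with made-up parameters. With added noise, circuits like this also yield the gamma-like distributions of dominance durations the model simulates:

```python
import numpy as np

# Two eye-based populations: each excites itself toward its input, inhibits
# the other, and slowly adapts, so dominance alternates over seconds.

def f(x):                        # threshold-linear firing rate
    return max(x, 0.0)

tau, tau_h = 10.0, 900.0         # fast activity vs. slow adaptation (ms)
w, g, I = 3.0, 3.0, 1.0          # cross-inhibition, adaptation gain, input
a, b, ha, hb = 0.6, 0.4, 0.0, 0.0
dt, switches, dominant = 1.0, [], None

for t in range(20000):
    a += dt / tau * (-a + f(I - w * b - g * ha))
    b += dt / tau * (-b + f(I - w * a - g * hb))
    ha += dt / tau_h * (-ha + a)
    hb += dt / tau_h * (-hb + b)
    eye = 'L' if a > b else 'R'
    if eye != dominant:
        switches.append(t * dt)
        dominant = eye

print(np.diff(switches))         # alternating dominance durations (ms)
```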

    Temporal Dynamics of Binocular Disparity Processing with Corticogeniculate Interactions

    A neural model of binocular vision is developed to simulate psychophysical and neurobiological data concerning the dynamics of binocular disparity processing. The model shows how feedforward and feedback interactions among LGN ON and OFF cells and cortical simple, complex, and hypercomplex cells can simulate binocular summation, the Pulfrich effect, and the fusion of delayed anticorrelated stereograms. Model retinal ON and OFF cells are linked by an opponent process capable of generating antagonistic rebounds from OFF cells after offset of an ON cell input. Spatially displaced ON and OFF cells excite simple cells. Opposite-polarity simple cells compete before their half-wave rectified outputs excite complex cells. Complex cells binocularly match like-polarity simple cell outputs before pooling half-wave rectified signals from opposite polarities. Competitive feedback among complex cells sharpens disparity selectivity and normalizes cell activity. Slow inhibitory interneurons help to reset complex cells after input offset. The Pulfrich effect occurs because the delayed input from one eye fuses with the present input from the other eye to create a disparity. Binocular summation occurs for stimuli of brief duration or of low contrast because competitive normalization takes time, and cannot occur for very brief or weak stimuli. At brief SOAs, anticorrelated stereograms can be fused because the rebound mechanism ensures that the present image to one eye can fuse with the afterimage from a previous image to the other eye. Corticogeniculate feedback embodies a matching process that enhances the speed and temporal accuracy of complex cell disparity tuning. Model mechanisms interact to control the stable development of sharp disparity tuning. Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334, F49620-92-J-0225); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657, N00014-92-J-1015, N00014-91-J-4100)
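
    The Pulfrich account reduces to simple arithmetic: if one eye's signal is delayed by some interval, a horizontally moving target is matched against where it was that interval ago, which is geometrically equivalent to a disparity equal to velocity times delay. Illustrative numbers only; the 15 ms delay is an assumed, typical value, not a measurement from the paper:

```python
# Effective disparity created when one eye's input is delayed: the delayed
# eye sees a moving target where it was `delay` seconds earlier.

v = 10.0          # target velocity, deg/s
delay = 0.015     # assumed interocular neural delay from a dark filter, s
disparity_deg = v * delay
print(f"effective disparity: {disparity_deg * 60:.1f} arcmin")  # 9.0 arcmin
```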