    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model thereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions are made about microscopic computational differences between the parallel cortical streams V1 --> MT and V1 --> V2 --> MT, notably the magnocellular thick-stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast.
    Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions. Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)

    Perceptual Scale Expansion: An Efficient Angular Coding Strategy For Locomotor Space

    Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.
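The link between an expanded gaze-declination scale and explicit distance underestimation can be made concrete with a little trigonometry. The sketch below is illustrative only (the function name, the 1.6 m eye height, and the round 1.5 gain are assumptions based on the figures quoted in the abstract, not the authors' model):

```python
import math

def perceived_distance(true_distance_m, eye_height_m=1.6, gain=1.5):
    # Hypothetical illustration: ground distance judged from gaze
    # declination is d = h / tan(angle). If perceived declination is
    # expanded by ~1.5x near horizontal, judged distance shrinks.
    true_decl = math.atan2(eye_height_m, true_distance_m)  # radians
    perceived_decl = gain * true_decl                      # expanded scale
    return eye_height_m / math.tan(perceived_decl)

# A target 6 m away is judged to be noticeably nearer than 6 m,
# while with gain = 1.0 the true distance is recovered exactly.
```

With gain = 1.0 the function returns the veridical distance, so in this sketch the underestimation is driven entirely by the expanded angular scale, consistent with the calibration account above.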

    Von Bezold assimilation effect reverses in stereoscopic conditions

    Lightness contrast and lightness assimilation are opposite phenomena: in contrast, grey targets appear darker when bordering bright surfaces (inducers) rather than dark ones; in assimilation, the opposite occurs. The question is: which visual process favours the occurrence of one phenomenon over the other? Researchers have proposed three answers to this question. The first asserts that both phenomena are caused by peripheral processes; the second attributes their occurrence to central processes; and the third claims that contrast involves central processes, whilst assimilation involves peripheral ones. To test these hypotheses, an experiment was run on a computer system equipped with goggles for stereoscopic viewing. Observers were asked to evaluate the lightness of a grey target, and two variables were systematically manipulated: (i) the apparent distance of the inducers; and (ii) the brightness of the inducers. The retinal stimulation was kept constant throughout, so that the peripheral processes remained the same. The results show that the lightness of the target depends on both variables. As the retinal stimulation was kept constant, we conclude that central mechanisms are involved in both lightness contrast and lightness assimilation.

    The Intrinsic Constraint Model and Fechnerian Sensory Scaling


    Do we perceive a flattened world on the monitor screen?

    The current model of three-dimensional perception hypothesizes that the brain integrates depth cues in a statistically optimal fashion through a weighted linear combination, with weights proportional to the reliabilities obtained for each cue in isolation (Landy, Maloney, Johnston, & Young, 1995). Even though many investigations support such a theoretical framework, some recent empirical findings are at odds with this view (e.g., Domini, Caudek, & Tassinari, 2006). Failures of linear cue integration have been attributed to cue conflict and to unmodelled cues to flatness present in computer-generated displays. We describe two cue-combination experiments designed to test the integration of stereo and motion cues in the presence of consistent or conflicting blur and accommodation information (i.e., when flatness cues are either absent, with physical stimuli, or present, with computer-generated displays). In both conditions, we replicated the results of Domini et al. (2006): The amount of perceived depth increased as more cues were available, also producing an overestimation of depth in some conditions. These results can be explained by the Intrinsic Constraint model, but not by linear cue combination.
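The weighted linear combination referenced above (Landy et al., 1995) is compact enough to sketch directly. Function name and the numbers in the example are illustrative, not taken from the paper:

```python
def combine_cues(depth_estimates, variances):
    # Reliability-weighted linear cue combination: each cue's weight
    # is its reliability (inverse variance) normalized to sum to 1.
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    return sum(w * d for w, d in zip(weights, depth_estimates))

# e.g. stereo says 10 cm (variance 1), motion says 14 cm (variance 4):
# weights are 0.8 and 0.2, so the combined estimate is 10.8 cm
```

Because the weights sum to one, this rule always yields an estimate between the single-cue estimates; it therefore cannot, by itself, produce the increase of perceived depth with the number of available cues reported by Domini et al. (2006), which is the tension the experiments above exploit.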

    Perceived Texture Segregation in Chromatic Element-Arrangement Patterns: High Intensity Interference

    An element-arrangement pattern is composed of two types of elements that differ in the ways in which they are arranged in different regions of the pattern. We report experiments on the perceived segregation of chromatic element-arrangement patterns composed of equal-size red and blue squares as the luminances of the surround, the interspaces, and the background (surround plus interspaces) are varied. Perceived segregation was markedly reduced by increasing the luminance of the interspaces. Unlike achromatic element-arrangement patterns composed of squares differing in lightness (Beck, Graham, & Sutter, 1991), perceived segregation did not decrease when the luminance of the interspaces was below that of the squares. Perceived segregation was approximately constant for constant ratios of interspace luminance to square luminance and increased with the contrast ratio of the squares. Perceived segregation based on edge alignment was not disrupted by high-intensity interspaces. Stereoscopic cues that caused the squares composing the element-arrangement pattern to be seen in front of the interspaces did not greatly improve perceived segregation. One explanation of the results is in terms of inhibitory interactions among achromatic and chromatic cortical cells tuned to spatial frequency and orientation. Alternatively, the results may be explained in terms of how the luminance of the interspaces affects the grouping of the squares for encoding surface representations. Neither explanation accounts fully for the data, and both mechanisms may be involved. Air Force Office of Scientific Research (F49620-92-J-0334); Northeast Consortium for Engineering Education (A303-21-93); Office of Naval Research (N00014-91J-4100); CNPQ and NUTES/UFRJ, Brazil

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
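For context, the purely geometric baseline that such neural models go beyond is easy to state: under pure observer translation, heading corresponds to the focus of expansion (FoE) of the flow field, recoverable by least squares. The sketch below is this textbook construction, not the model described in the abstract, and the function name is invented:

```python
import numpy as np

def focus_of_expansion(points, flows):
    # Under pure translation, each optic-flow vector (u, v) at image
    # point (x, y) points radially away from the FoE (x0, y0), i.e. it
    # is parallel to (x - x0, y - y0). Setting the cross product to
    # zero gives one linear constraint per point:
    #     -v * x0 + u * y0 = u * y - v * x
    # Stack the constraints and solve in the least-squares sense.
    A = np.column_stack([-flows[:, 1], flows[:, 0]])
    b = flows[:, 0] * points[:, 1] - flows[:, 1] * points[:, 0]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (x0, y0) in image coordinates
```

This baseline breaks down exactly where the abstract's model is tested: added observer rotation shifts the flow singularity away from the heading point, which is why rotation rate is the interesting stressor in the simulations above.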

    Bi-stability in perceived slant when binocular disparity and monocular perspective specify different slants.

    We examined how much depth we perceive when viewing a depiction of a slanted plane in which binocular disparity and monocular perspective provide different slant information. We exposed observers to a grid stimulus in which the monocular- and binocular-specified grid orientations were varied independently across stimulus presentations. The grids were slanted about the vertical axis and observers estimated the slant relative to the frontal plane. We were particularly interested in the metrical aspects of perceived slant for a broad spectrum of possible combinations of disparity- and perspective-specified slants. We found that observers perceived only one grid orientation when the two specified orientations were similar. More interestingly, when the monocular- and binocular-specified orientations were rather different, observers experienced perceptual bi-stability (they were able to select either a perspective- or a disparity-dominated percept).

    Temporal Dynamics of Binocular Disparity Processing with Corticogeniculate Interactions

    A neural model is developed to probe how corticogeniculate feedback may contribute to the dynamics of binocular vision. Feedforward and feedback interactions among retinal, lateral geniculate, and cortical simple and complex cells are used to simulate psychophysical and neurobiological data concerning the dynamics of binocular disparity processing, including correct registration of disparity in response to dynamically changing stimuli, binocular summation of weak stimuli, and fusion of anticorrelated stimuli when they are delayed, but not when they are simultaneous. The model exploits dynamic rebounds between opponent ON and OFF cells that are due to imbalances in habituative transmitter gates. It shows how corticogeniculate feedback can carry out a top-down matching process that inhibits incorrect disparity responses and reduces persistence of previously correct responses to dynamically changing displays. Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334, F49620-92-J-0225); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-92-J-4015); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-0657)