
    Selective rendering for efficient ray traced stereoscopic images

    Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering frames of disparate image pairs, while in monocular environments, the viewer has to extract this depth information from a single image by examining details such as perspective and shadows. This paper investigates, via a number of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering for stereo imagery. We examined selectively rendering the image pairs by exploiting the fusing capability and depth perception underlying human stereo vision. In ray-tracing-based global illumination systems, a higher image resolution introduces more computation to the rendering process since many more rays need to be traced. We first investigated whether we could utilise the human binocular fusing ability and significantly reduce the resolution of one image of the pair and yet retain a high perceptual quality under stereo viewing conditions. Secondly, we evaluated subjects' performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well. Avoiding rendering these detailed cues saved significant computational time. In fact, it was possible to achieve better task performance in the stereo viewing condition with a combined rendering time for the image pair less than that required for the single monocular image. The outcome of this study suggests that we can produce more efficient stereo images for depth-related visual tasks by selective rendering and exploiting inherent features of human stereo vision.
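    The resolution/ray-count relationship the abstract mentions can be illustrated with a back-of-the-envelope calculation. This is a minimal sketch assuming one primary ray per pixel (real ray budgets also depend on secondary rays), not the paper's actual rendering pipeline; the function names are hypothetical:

```python
# Hypothetical sketch: primary-ray savings when one image of a stereo
# pair is rendered at reduced resolution (assumes 1 primary ray/pixel).

def rays_for(width, height, rays_per_pixel=1):
    """Primary-ray count for a single image."""
    return width * height * rays_per_pixel

def stereo_ray_budget(width, height, scale_second_eye=1.0):
    """Total primary rays for a stereo pair where the second eye's image
    is rendered at a reduced linear resolution (scale in (0, 1])."""
    full = rays_for(width, height)
    reduced = rays_for(int(width * scale_second_eye),
                       int(height * scale_second_eye))
    return full + reduced

full_pair = stereo_ray_budget(1024, 768, scale_second_eye=1.0)
mixed_pair = stereo_ray_budget(1024, 768, scale_second_eye=0.5)
print(mixed_pair / full_pair)  # 0.625: halving one eye's linear
                               # resolution saves 37.5% of primary rays
```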

    From Stereogram to Surface: How the Brain Sees the World in Depth

    When we look at a scene, how do we consciously see surfaces infused with lightness and color at the correct depths? Random Dot Stereograms (RDS) probe how binocular disparity between the two eyes can generate such conscious surface percepts. Dense RDS do so despite the fact that they include multiple false binocular matches. Sparse stereograms do so even across large contrast-free regions with no binocular matches. Stereograms that define occluding and occluded surfaces lead to surface percepts wherein partially occluded textured surfaces are completed behind occluding textured surfaces at a spatial scale much larger than that of the texture elements themselves. Earlier models suggest how the brain detects binocular disparity, but not how RDS generate conscious percepts of 3D surfaces. A neural model predicts how the layered circuits of visual cortex generate these 3D surface percepts using interactions between visual boundary and surface representations that obey complementary computational rules.
    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (EIA-01-30851, SBE-0354378); Office of Naval Research (N00014-01-1-0624)
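    The "multiple false binocular matches" problem in dense RDS can be sketched numerically: naive window matching on a 1-D binary dot pattern can find zero-cost matches at several disparities, not just the true one. This is an illustrative example with a hypothetical `match_costs` helper, not the article's neural model:

```python
# Hypothetical sketch of false matches in a dense random-dot pair:
# naive sum-of-absolute-differences window matching along one scanline.
import random

def match_costs(left, right, x, window, max_disp):
    """SAD matching cost at position x for each candidate disparity."""
    return [sum(abs(left[x + k] - right[x + k - d]) for k in range(window))
            for d in range(max_disp + 1)]

random.seed(0)
right = [random.randint(0, 1) for _ in range(64)]
true_disp = 3
left = [0] * true_disp + right[:-true_disp]  # left eye: pattern shifted by 3

costs = match_costs(left, right, x=10, window=5, max_disp=8)
zero_cost = [d for d, c in enumerate(costs) if c == 0]
print(zero_cost)  # contains the true disparity 3; with binary dots, other
                  # disparities can also match perfectly (false matches)
```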

    Bi-stability in perceived slant when binocular disparity and monocular perspective specify different slants.

    We examined how much depth we perceive when viewing a depiction of a slanted plane in which binocular disparity and monocular perspective provide different slant information. We exposed observers to a grid stimulus in which the monocular- and binocular-specified grid orientations were varied independently across stimulus presentations. The grids were slanted about the vertical axis and observers estimated the slant relative to the frontal plane. We were particularly interested in the metrical aspects of perceived slant for a broad spectrum of possible combinations of disparity- and perspective-specified slants. We found that observers perceived only one grid orientation when the two specified orientations were similar. More interestingly, when the monocular- and binocular-specified orientations were rather different, observers experienced perceptual bi-stability (they were able to select either a perspective- or a disparity-dominated percept).

    The perception of three-dimensionality across continuous surfaces

    The apparent three-dimensionality of a viewed surface presumably corresponds to several internal perceptual quantities, such as surface curvature, local surface orientation, and depth. These quantities are mathematically related for points within the silhouette bounds of a smooth, continuous surface. For instance, surface curvature is related to the rate of change of local surface orientation, and surface orientation is related to the local gradient of distance. It is not clear to what extent these 3D quantities are determined directly from image information rather than indirectly from mathematically related forms, by differentiation or by integration within boundary constraints. An open empirical question, for example, is to what extent surface curvature is perceived directly, and to what extent it is quantitative rather than qualitative. In addition to surface orientation and curvature, one derives an impression of depth, i.e., variations in apparent egocentric distance. A static orthographic image is essentially devoid of depth information, and any quantitative depth impression must be inferred from surface orientation and other sources. Such conversion of orientation to depth does appear to occur, and even to prevail over stereoscopic depth information under some circumstances.
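    The mathematical relations the abstract names (orientation from the local gradient of distance, curvature from the rate of change of orientation) can be sketched numerically. This is an illustrative example on a depth map, not part of the study; the function names are hypothetical:

```python
# Illustrative sketch: surface orientation from the local gradient of a
# depth map z(y, x), and a simple second-derivative curvature proxy.
import numpy as np

def surface_normals(z):
    """Unit normals of a depth map: n is proportional to (-dz/dx, -dz/dy, 1)."""
    dzdy, dzdx = np.gradient(z)
    n = np.stack([-dzdx, -dzdy, np.ones_like(z)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def curvature_proxy(z):
    """Laplacian of depth: rate of change of the orientation field."""
    dzdy, dzdx = np.gradient(z)
    return np.gradient(dzdx, axis=1) + np.gradient(dzdy, axis=0)

# A fronto-parallel plane: normals point straight at the viewer, zero curvature.
flat = np.zeros((8, 8))
print(np.allclose(surface_normals(flat)[4, 4], [0, 0, 1]))  # True
print(np.allclose(curvature_proxy(flat), 0))                # True
```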

    Perceptual learning without feedback and the stability of stereoscopic slant estimation

    Subjects were examined for practice effects in a stereoscopic slant estimation task involving surfaces that comprised a large portion of the visual field. In most subjects slant estimation was significantly affected by practice, but only when an isolated surface (an absolute disparity gradient) was present in the visual field. When a second, unslanted, surface was visible (providing a second disparity gradient and thereby also a relative disparity gradient), none of the subjects exhibited practice effects. Apparently, stereoscopic slant estimation is more robust or stable over time in the presence of a second surface than in its absence. In order to relate the practice effects, which occurred without feedback, to perceptual learning, results are interpreted within a cue interaction framework. In this paradigm the contribution of a cue depends on its reliability. It is suggested that normally absolute disparity gradients contribute relatively little to perceived slant and that subjects learn to increase this contribution by utilizing proprioceptive information. It is argued that, given the limited computational power of the brain, a relatively small contribution of absolute disparity gradients to perceived slant enhances the stability of stereoscopic slant perception.
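    The cue-interaction idea invoked here (a cue's contribution depends on its reliability) is commonly formalised as inverse-variance weighting of cue estimates. The following is an illustrative sketch with made-up numbers, not the authors' model:

```python
# Illustrative sketch: reliability-weighted cue combination, where each
# cue's weight is its inverse variance normalised across cues.

def combine_cues(estimates, variances):
    """Return the reliability-weighted average of cue estimates and
    the weights: w_i = (1/var_i) / sum_j (1/var_j)."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    return sum(w * e for w, e in zip(weights, estimates)), weights

# Made-up numbers: an unreliable absolute-disparity-gradient cue (high
# variance) contributes little; practice can be modelled as lowering its
# variance and thereby increasing its weight.
slant, weights = combine_cues(estimates=[30.0, 10.0], variances=[4.0, 1.0])
print(round(slant, 1), [round(w, 2) for w in weights])  # 14.0 [0.2, 0.8]
```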

    Looming motion primes the visuomotor system.

    A wealth of evidence now shows that human and animal observers display greater sensitivity to objects that move toward them than to objects that remain static or move away. Increased sensitivity in humans is often evidenced by reaction times that increase in rank order from looming, to receding, to static targets. However, it is not clear whether the processing advantage enjoyed by looming motion is mediated by the attention system or the motor system. The present study investigated this by first examining whether sensitivity is to looming motion per se or to certain monocular or binocular cues that constitute stereoscopic motion in depth. None of the cues accounted for the looming advantage. A perceptual measure was then used to examine performance with minimal involvement of the motor system. Results showed that looming and receding motion were equivalent in attracting attention, suggesting that the looming advantage is indeed mediated by the motor system. These findings suggest that although motion itself is sufficient for attentional capture, motion direction can prime motor responses. © 2013 American Psychological Association

    Neural Dynamics of 3-D Surface Perception: Figure-Ground Separation and Lightness Perception

    This article develops the FACADE theory of three-dimensional (3-D) vision to simulate data concerning how two-dimensional (2-D) pictures give rise to 3-D percepts of occluded and occluding surfaces. The theory suggests how geometrical and contrastive properties of an image can either cooperate or compete when forming the boundary and surface representations that subserve conscious visual percepts. Spatially long-range cooperation and short-range competition work together to separate boundaries of occluding figures from their occluded neighbors, thereby providing sensitivity to T-junctions without the need to assume that T-junction "detectors" exist. Both boundary and surface representations of occluded objects may be amodally completed, while the surface representations of unoccluded objects become visible through modal processes. Computer simulations include Bregman-Kanizsa figure-ground separation, Kanizsa stratification, and various lightness percepts, including the Munker-White, Benary cross, and checkerboard percepts.
    Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI 94-01659, IRI 97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657)