
    The Impact of 2-D and 3-D Grouping Cues on Depth From Binocular Disparity

    Stereopsis is a powerful source of information about the relative depth of objects in the world. Humans can see depth from binocular disparity in isolation, without any other depth cues. However, many stimulus properties can dramatically influence the depth we perceive. For example, there is an abundance of research showing that the configuration of a stimulus can affect the percept of depth, in some cases diminishing the amount of depth experienced. Much of the previous research has focused on discrimination thresholds; in one example, stereoacuity for a pair of vertical lines was markedly reduced when the lines were connected to form a rectangle apparently slanted in depth (e.g., McKee, 1983). The contribution of Gestalt figural grouping to this phenomenon has not been studied. This dissertation addresses the role that perceptual grouping plays in the recovery of suprathreshold depth from disparity. First, I measured the impact of perceptual closure on depth magnitude: observers estimated the separation in depth of a pair of vertical lines as the amount of perceptual closure was varied. In a series of experiments, I characterized the 2-D and 3-D properties that contribute to 3-D closure and to estimates of apparent depth. Estimates of perceived depth were highly correlated with the strength of subjective closure. Furthermore, I highlighted the perceptual consequences (both costs and benefits) of a new disparity-based grouping cue that interacts with perceived closure, which I call good stereoscopic continuation. This cue was shown to promote detection in a visual search task but to reduce depth percepts relative to isolated features. Taken together, the results reported here show that specific 2-D and 3-D grouping constraints are required to promote recovery of a 3-D object. As a consequence, quantitative depth is reduced, but the object is rapidly detected in a visual search task. I propose that these phenomena are the result of object-based disparity smoothing operations that enhance object cohesion.

    The visual perception of distance in action space.

    This work examines our perception of distance within action space (roughly 2 m to 30 m), an ability that is important for many actions. Two general problems are addressed: what information can be used to judge distance accurately, and how is it processed? The dissertation is in two parts. The first part considers the "what" question. Subjects' distance judgments were examined in real, altered, and virtual environments using perceptual tasks or actions to assess the role of a variety of intrinsic and environmental depth cues. The findings show that the perception of angular declination, or height in the visual field, is largely veridical, and that a target is visually located on the projection line from the observer's eyes to it. It is also shown that a continuous ground texture is essential for veridical space perception. Of multiple textural cues, linear perspective is a strong cue for representing the ground and hence judging distance, but compression is a relatively ineffective cue. In the second part, the sequential surface integration process (SSIP) hypothesis is proposed to explain the processing of depth information. The hypothesis asserts that an accurate representation of the ground surface is critical for veridical space perception and that a global ground representation is formed by an integrative process that samples and combines local information over space and time. Confirming this, the experiments found that information from an extended ground area is necessary for judging distance accurately, and that distance was underestimated when an observer's view was restricted to the local ground area around the target. The SSIP hypothesis also suggests that, to build an accurate ground representation, the integrative process might start from near space, where rich depth cues provide a reliable initial representation, and then progressively extend to distant areas. This is also confirmed by the finding that subjects could judge distance accurately by scanning local patches of the ground surface from near to far, but not in the reverse direction.

    Do we perceive a flattened world on the monitor screen

    The current model of three-dimensional perception hypothesizes that the brain integrates depth cues in a statistically optimal fashion through a weighted linear combination, with weights proportional to the reliabilities obtained for each cue in isolation (Landy, Maloney, Johnston, & Young, 1995). Even though many investigations support this theoretical framework, some recent empirical findings are at odds with it (e.g., Domini, Caudek, & Tassinari, 2006). Failures of linear cue integration have been attributed to cue conflict and to unmodelled cues to flatness present in computer-generated displays. We describe two cue-combination experiments designed to test the integration of stereo and motion cues in the presence of consistent or conflicting blur and accommodation information (i.e., when flatness cues are either absent, with physical stimuli, or present, with computer-generated displays). In both conditions, we replicated the results of Domini et al. (2006): the amount of perceived depth increased as more cues were available, also producing an over-estimation of depth in some conditions. These results can be explained by the Intrinsic Constraint model, but not by linear cue combination.
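The weighted linear scheme that this abstract tests can be sketched in a few lines. The cue values and variances below are hypothetical, chosen only to illustrate the inverse-variance weighting of Landy et al. (1995), not data from the experiments described:

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted linear cue combination.

    Each cue's weight is proportional to its reliability (inverse
    variance); weights are normalized to sum to 1. Returns the fused
    estimate and its variance, which is lower than any single cue's.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    fused = float(weights @ estimates)
    fused_variance = 1.0 / reliabilities.sum()
    return fused, fused_variance

# Hypothetical example: stereo signals 10 cm of depth (variance 1),
# motion signals 6 cm (variance 4). Stereo gets weight 0.8, motion 0.2.
depth, var = combine_cues([10.0, 6.0], [1.0, 4.0])
# -> fused depth 9.2 cm, fused variance 0.8
```

Note that the model predicts the fused percept always lies between the single-cue estimates; the over-estimation of depth reported above is exactly what this linear scheme cannot produce.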

    Perceptual Requirements for World-Locked Rendering in AR and VR

    Stereoscopic, head-tracked display systems can show users realistic, world-locked virtual objects and environments. However, discrepancies between the rendering pipeline and physical viewing conditions can lead to perceived instability in the rendered content, resulting in reduced realism, immersion, and, potentially, visually induced motion sickness. The requirements for perceptually stable world-locked rendering are unknown because of the challenge of constructing a wide-field-of-view, distortion-free display with highly accurate head- and eye-tracking. In this work, we introduce new hardware and software that meet these constraints and present a system capable of rendering virtual objects over real-world references without perceivable drift. The platform is used to study acceptable errors in render-camera position for world-locked rendering in augmented and virtual reality scenarios, where we find an order-of-magnitude difference in perceptual sensitivity between them. We conclude by comparing the study results with an analytic model that examines changes to apparent depth and visual heading in response to camera displacement errors. We identify visual heading as an important consideration for world-locked rendering, alongside depth errors from incorrect disparity.
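The abstract does not give the analytic model itself, but the first-order geometry behind its two error terms can be sketched as follows. The function names and the numbers are illustrative assumptions, not values from the paper:

```python
import math

def heading_error_deg(camera_offset_m, target_depth_m):
    """Angular error in a world-locked target's visual direction caused
    by a lateral displacement of the render camera: a nearby target
    shifts more, in angle, than a distant one."""
    return math.degrees(math.atan2(camera_offset_m, target_depth_m))

def apparent_depth_m(ipd_m, true_depth_m, baseline_error_m):
    """First-order apparent depth when the rendered stereo baseline
    differs from the viewer's interpupillary distance. Disparity scales
    as baseline/depth, so perceived depth scales as IPD/baseline."""
    rendered_baseline = ipd_m + baseline_error_m
    return true_depth_m * ipd_m / rendered_baseline

# Illustrative numbers: a 1 cm lateral camera error on a target 1 m away
# displaces it by about 0.57 degrees of visual heading.
heading = heading_error_deg(0.01, 1.0)

# A 7 mm baseline overestimate for a 63 mm IPD compresses a 2 m target
# to an apparent 1.8 m.
depth = apparent_depth_m(0.063, 2.0, 0.007)
```

The asymmetry the study reports between AR and VR is consistent with this geometry: in AR the heading error is directly visible against a real-world reference, whereas in VR no such reference exists.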

    Robust cue integration: a Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant.

    Most research on depth cue integration has focused on stimulus regimes in which stimuli contain the small cue conflicts that one might expect to normally arise from sensory noise. In these regimes, linear models for cue integration provide a good approximation to system performance. This article focuses on situations in which large cue conflicts can naturally occur in stimuli. We describe a Bayesian model for nonlinear cue integration that makes rational inferences about scenes across the entire range of possible cue conflicts. The model derives from the simple intuition that multiple properties of scenes or causal factors give rise to the image information associated with most cues. To make perceptual inferences about one property of a scene, an ideal observer must necessarily take into account the possible contribution of these other factors to the information provided by a cue. In the context of classical depth cues, large cue conflicts most commonly arise when one or another cue is generated by an object or scene that violates the strongest form of constraint that makes the cue informative. For example, when binocularly viewing a slanted trapezoid, the slant interpretation of the figure derived by assuming that the figure is rectangular may conflict greatly with the slant suggested by stereoscopic disparities. An optimal Bayesian estimator incorporates the possibility that different constraints might apply to objects in the world and robustly integrates cues with large conflicts by effectively switching between different internal models of the prior constraints underlying one or both cues. We performed two experiments to test the predictions of the model when applied to estimating surface slant from binocular disparities and the compression cue (the aspect ratio of figures in an image). The apparent weight that subjects gave to the compression cue decreased smoothly as a function of the conflict between the cues but did not shrink to zero; that is, subjects did not fully veto the compression cue at large cue conflicts. A Bayesian model that assumes a mixed prior distribution of figure shapes in the world, with a large proportion being very regular and a smaller proportion having random shapes, provides a good quantitative fit for subjects' performance. The best-fitting model parameters are consistent with the sensory noise to be expected in measurements of figure shape, further supporting the Bayesian model as an account of robust cue integration.
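The switching behavior described above can be sketched with a two-component mixture: the compression cue is informative only if the figure is regular, so the posterior probability of regularity, driven by the size of the conflict, gates how much that cue contributes. This is a minimal sketch under assumed Gaussian noise and hypothetical parameter values, not the paper's fitted model:

```python
import math

def robust_slant_estimate(stereo_slant, compression_slant,
                          stereo_sd=2.0, compression_sd=3.0,
                          p_regular=0.9, outlier_sd=30.0):
    """Mixture-prior Bayesian slant estimate (illustrative parameters).

    Hypothesis 'regular': the figure is rectangular, so the compression
    cue is valid and conflicts only through sensory noise.
    Hypothesis 'random': the figure has an arbitrary shape, so the
    compression cue is nearly uninformative (broad conflict density).
    """
    conflict = compression_slant - stereo_slant

    def gauss(x, sd):
        return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    # Likelihood of the observed conflict under each hypothesis.
    like_regular = gauss(conflict, math.hypot(stereo_sd, compression_sd))
    like_random = gauss(conflict, outlier_sd)

    # Posterior probability that the figure really is regular.
    post_regular = (p_regular * like_regular) / (
        p_regular * like_regular + (1 - p_regular) * like_random)

    # If regular: standard inverse-variance fusion; if random: stereo alone.
    w_stereo = compression_sd**2 / (stereo_sd**2 + compression_sd**2)
    fused_regular = w_stereo * stereo_slant + (1 - w_stereo) * compression_slant
    return post_regular * fused_regular + (1 - post_regular) * stereo_slant

# Small conflict: the compression cue pulls the estimate toward itself.
small = robust_slant_estimate(20.0, 22.0)
# Large conflict: the estimate relaxes smoothly toward stereo alone,
# without a hard veto of the compression cue.
large = robust_slant_estimate(20.0, 50.0)
```

Because `post_regular` falls off smoothly with conflict size, the effective weight on the compression cue declines gradually rather than dropping to zero, matching the behavior reported in the abstract.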

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on discussion of phenomena and determination of design principles.