5,853 research outputs found

    Driving experience of an indirect vision cockpit


    The effects of changing projection geometry on perception of 3D objects on and around tabletops

    Funding: Natural Sciences and Engineering Research Council of Canada; Networks of Centres of Excellence of Canada. Displaying 3D objects on horizontal displays can cause problems in the way that the virtual scene is presented on the 2D surface; inappropriate choices in how 3D is represented can lead to distorted images and incorrect object interpretations. We present four experiments that test 3D perception. We varied projection geometry in three ways: type of projection (perspective/parallel), separation between the observer’s point of view and the projection’s center (discrepancy), and the presence of motion parallax (with/without parallax). Projection geometry had strong effects that differed across tasks. Reducing discrepancy is desirable for orientation judgments, but not for object recognition or internal angle judgments. Using a fixed center of projection above the table reduces error and improves accuracy in most tasks. The results have far-reaching implications for the design of 3D views on tables, in particular for multi-user applications where projections that appear correct for one person will not be perceived correctly by another.
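
    To make the projection variables concrete, here is a minimal sketch (not the authors' code; the table size, viewpoints, and coordinates are illustrative assumptions) of how a point above a tabletop lands on the display under a perspective projection from a chosen center of projection versus a parallel projection:

    import numpy as np

    def perspective_project(point, cop):
        # Project a 3D point onto the z = 0 table plane through a center of
        # projection (cop) located above the table.
        p = np.asarray(point, dtype=float)
        c = np.asarray(cop, dtype=float)
        t = c[2] / (c[2] - p[2])      # where the ray from cop through p meets z = 0
        return c + t * (p - c)

    def parallel_project(point, direction=(0.0, 0.0, -1.0)):
        # Orthographic (parallel) projection onto z = 0: no center of projection.
        p = np.asarray(point, dtype=float)
        d = np.asarray(direction, dtype=float)
        return p + (-p[2] / d[2]) * d

    cube_corner = (0.10, 0.05, 0.08)   # a point 8 cm above the table surface (metres)
    observer = (0.60, 0.00, 0.50)      # assumed eye position beside the table
    fixed_cop = (0.00, 0.00, 0.50)     # fixed center of projection above the table centre

    print(perspective_project(cube_corner, observer))   # rendered for the observer's viewpoint
    print(perspective_project(cube_corner, fixed_cop))  # rendered from the neutral, centred viewpoint
    print(parallel_project(cube_corner))                # parallel projection ignores viewpoint
    # The difference between the first two images is the viewpoint "discrepancy"
    # the abstract varies.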

    Enhancing BIM Methodology with VR Technology

    Building information modeling (BIM) is defined as the process of generating, storing, managing, exchanging, and sharing building information. In the construction industry, the processes and technologies that support BIM are constantly evolving, making BIM even more attractive. A current topic that requires attention is the integration of BIM with virtual reality (VR), in which the user visualizes a virtual world and can interact with it. By adding VR, the BIM solution can better support retrieving and presenting information and can increase the efficiency of communication and problem solving in an interactive and collaborative project. The objective of this chapter is to report on the improvement of BIM uses through the interactive capabilities afforded by VR technology. A survey of the literature and of available software was carried out to support the study.
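
    As a concrete illustration of the data side of such an integration, the sketch below (our assumption, not a workflow prescribed by the chapter; the file name is a placeholder) reads building elements from an IFC model, the common BIM exchange format, as a first step toward populating a VR scene:

    import ifcopenshell  # open-source IFC toolkit; one of several possible choices

    model = ifcopenshell.open("building_model.ifc")  # hypothetical project file

    # List the walls and doors a VR walkthrough would need to display.
    for element in model.by_type("IfcWall") + model.by_type("IfcDoor"):
        print(element.GlobalId, element.Name)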

    Near-Field Depth Perception in Optical See-Through Augmented Reality

    Augmented reality (AR) is a very promising display technology with many compelling industrial applications. However, before it can be used in actual settings, its fidelity needs to be investigated from a user-centric viewpoint. More specifically, how distance to virtual objects is perceived in augmented reality is still an open question. To the best of our knowledge, there are only four previous studies that specifically studied distance perception in AR within reaching distances. Therefore, distance perception in augmented reality remains a largely understudied phenomenon. This document presents research on depth perception in augmented reality in the near visual field. The specific goal of this research is to empirically study various measurement techniques for depth perception, and to study various factors that affect depth perception in augmented reality, specifically eye accommodation, brightness, and participant age. This document discusses five experiments that have already been conducted. Experiment I aimed to determine whether there are inherent differences between the perception of virtual and real objects by comparing depth judgments using two complementary distance judgment protocols: perceptual matching and blind reaching. This experiment found that real objects are perceived more accurately than virtual objects and that matching is a more accurate distance measure than reaching. Experiment II compared the two distance judgment protocols in real-world and augmented reality environments with improved proprioceptive and visual feedback. This experiment found that reaching responses in the AR environment became more accurate with improved feedback. Experiment III studied the effect of different levels of accommodative demand (collimated, consistent, and midpoint) on distance judgments. This experiment found nearly accurate distance responses in the consistent and midpoint conditions, and a linear increase in error in the collimated condition. Experiment IV studied the effect of the brightness of the target object on depth judgments. This experiment found that distance responses were shifted towards the background for the dim AR target. Lastly, Experiment V studied the effect of participant age on depth judgments and found that older participants judged distance more accurately than younger participants. Taken together, these five experiments will help us understand how depth perception operates in augmented reality.
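
    As an illustration of how such judgments are typically scored (the distances below are invented, not data from these experiments), the signed error of a response, judged minus actual distance, can be compared across the two protocols:

    # Hypothetical near-field target distances and responses, in centimetres.
    actual_cm = [34, 38, 42, 46, 50]
    matching_cm = [33, 37, 42, 45, 49]   # made-up perceptual-matching responses
    reaching_cm = [31, 34, 39, 42, 46]   # made-up blind-reaching responses

    def mean_signed_error(judged, actual):
        # Average of (judged - actual); negative values indicate underestimation.
        return sum(j - a for j, a in zip(judged, actual)) / len(actual)

    print("matching error (cm):", mean_signed_error(matching_cm, actual_cm))
    print("reaching error (cm):", mean_signed_error(reaching_cm, actual_cm))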


    Stereoscopic human interfaces

    This article focuses on the use of stereoscopic video interfaces for telerobotics. Topics concerning human visual perception, binocular image capturing, and stereoscopic devices are described. There is a wide variety of video interfaces for telerobotic systems. Choosing the best video interface depends on the requirements of the telerobotic application. Simple monoscopic cameras are good enough for watching remote robot movements or for teleprogramming a sequence of commands. However, when operators seek precise robot guidance or wish to manipulate objects, a better perception of the remote environment must be achieved, for which more advanced visual interfaces are required. This implies a higher degree of telepresence and, therefore, requires choosing the most suitable visual interface. The aim of this article is to describe the two main aspects of using stereoscopic interfaces: the capture of binocular video images according to the disparity limits of human perception, and the proper selection of the visualization interface for stereoscopic images.
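
    The disparity-limit consideration can be sketched with a small-angle approximation (our illustration, not a formula or threshold taken from the article): the angular disparity of a point at distance z, relative to a convergence distance c, for an eye or camera separation e is roughly e * (1/z - 1/c) radians, and a value around one degree is a common rule-of-thumb comfort limit:

    import math

    def angular_disparity_deg(z_m, c_m, e_m=0.065):
        # Approximate angular disparity (degrees) of a point at z_m metres when
        # converging at c_m metres, with eye/camera separation e_m metres.
        return math.degrees(e_m * (1.0 / z_m - 1.0 / c_m))

    comfort_limit_deg = 1.0            # assumed rule-of-thumb fusion-comfort threshold

    for z in (0.5, 1.0, 2.0, 5.0):     # candidate object distances for a rig converged at 1 m
        d = angular_disparity_deg(z, c_m=1.0)
        print(f"object at {z} m: disparity {d:+.2f} deg, within limit: {abs(d) <= comfort_limit_deg}")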

    Peripheral visual cues and their effect on the perception of egocentric depth in virtual and augmented environments

    The underestimation of depth in virtual environments at medium-field distances is a well-studied phenomenon. However, the degree by which underestimation occurs varies widely from one study to the next, with some studies reporting as much as 68% underestimation of distance and others as little as 6% (Thompson et al. [38] and Jones et al. [14]). In particular, the study detailed in Jones et al. [14] found a surprisingly small underestimation effect in a virtual environment (VE) and no effect in an augmented environment (AE). These are highly unusual results when compared to the large body of existing work on virtual and augmented distance judgments [16, 31, 36–38, 40–43]. The series of experiments described in this document attempted to determine the cause of these unusual results. Specifically, Experiment I aimed to determine whether the experimental design was a factor and whether participants were improving their performance throughout the course of the experiment. Experiment II analyzed two possible sources of implicit feedback in the experimental procedures and identified visual information available in the lower periphery as a key source of feedback. Experiment III analyzed distance estimation when all peripheral visual information was eliminated. Experiment IV then illustrated that optical flow in a participant’s periphery is a key factor in facilitating improved depth judgments in both virtual and augmented environments. Experiment V attempted to further reduce cues in the periphery by removing a strongly contrasting white surveyor’s tape from the center of the hallway, and found that participants continued to adapt significantly even when given very sparse peripheral cues. The final experiment, Experiment VI, found that when participants’ views are restricted to the field of view of the screen area on the return walk, adaptation still occurs in both virtual and augmented environments.
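
    For reference, the underestimation percentages quoted above follow from judged versus actual distance (the numbers below are illustrative, not data from the cited studies):

    def percent_underestimation(judged_m, actual_m):
        # Underestimation as a percentage of the actual distance.
        return (actual_m - judged_m) / actual_m * 100.0

    # A 10 m target judged at 3.2 m vs. judged at 9.4 m:
    print(percent_underestimation(3.2, 10.0))   # 68% underestimation
    print(percent_underestimation(9.4, 10.0))   # about 6% underestimation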