    Virtual Distance Estimation in a CAVE

    Get PDF
    Past studies have shown consistent underestimation of distances in virtual reality, though the exact causes remain unclear. Many virtual distance cues have been investigated, but past work has failed to account for the possible addition of cues from the physical environment. We describe two studies that assess users’ performance and strategies when judging horizontal and vertical distances in a CAVE. Results indicate that users attempt to leverage cues from the physical environment when available and, if allowed, use a locomotion interface to move the virtual viewpoint to facilitate this. This work was carried out with FUI support in the framework of the Callisto project.

    Estimation of distances in virtual environments using size constancy

    No full text
    It is reported in the literature that distances from the observer are underestimated more in virtual environments (VEs) than in physical world conditions. On the other hand, estimation of size in VEs is quite accurate and follows a size-constancy law when rich cues are present. This study investigates how estimation of distance in a CAVE™ environment is affected by poor and rich cue conditions, subject experience, and environmental learning when the position of the objects is estimated using an experimental paradigm that exploits size constancy. A group of 18 healthy participants was asked to move a virtual sphere, controlled using the wand joystick, to the position where they thought a previously displayed virtual cube (stimulus) had appeared. Real-size physical models of the virtual objects were also presented to the participants as a reference of real physical distance during the trials. An accurate estimation of distance implied that the participants assessed the relative size of the sphere and cube correctly. The cube appeared at depths between 0.6 m and 3 m, measured along the depth direction of the CAVE. The task was carried out in two environments: a poor cue one with limited background cues, and a rich cue one with textured background surfaces. It was found that distances were underestimated in both poor and rich cue conditions, with greater underestimation in the poor cue environment. The analysis also indicated that factors such as subject experience and environmental learning were not influential. However, least-squares fitting of Stevens’ power law indicated a high degree of accuracy during the estimation of object locations. This accuracy was higher than in other studies that were not based on a size-estimation paradigm. Thus, as an indirect result, this study appears to show that accuracy when estimating egocentric distances may be increased by using an experimental method that provides information on the relative size of the objects used.
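
    The abstract mentions least-squares fitting of Stevens’ power law to the location estimates. As an illustrative aside (not the study’s code or data), a minimal fit of perceived versus physical distance could look like the sketch below; the data points and function names are invented placeholders.

```python
# Illustrative sketch: fitting Stevens' power law  d_perceived = k * d_physical ** n
# to egocentric distance judgments. The data values below are invented placeholders,
# not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def stevens_power_law(d_physical, k, n):
    """Perceived distance as a power function of physical distance."""
    return k * d_physical ** n

# Hypothetical (physical, perceived) distance pairs in metres, within the
# 0.6 m - 3 m range used in the study.
d_physical = np.array([0.6, 1.0, 1.5, 2.0, 2.5, 3.0])
d_perceived = np.array([0.55, 0.88, 1.30, 1.70, 2.05, 2.40])  # systematic underestimation

(k, n), _ = curve_fit(stevens_power_law, d_physical, d_perceived, p0=(1.0, 1.0))
print(f"k = {k:.3f}, exponent n = {n:.3f}")  # n < 1 indicates compressive underestimation
```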

    Cavlectometry: Towards Holistic Reconstruction of Large Mirror Objects

    Full text link
    We introduce a method based on the deflectometry principle for the reconstruction of specular objects exhibiting significant size and geometric complexity. A key feature of our approach is the deployment of a CAVE Automatic Virtual Environment (CAVE) as the pattern generator. To unfold the full power of this extraordinary experimental setup, an optical encoding scheme is developed which accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction, the background for estimation of the camera pose, necessary for calibrating the sensor system. Experiments suggest a significant gain of coverage in single measurements compared to previous methods. To facilitate research on specular surface reconstruction, we will make our data set publicly available.
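
    As an illustrative aside, the deflectometry principle underlying this work recovers a surface normal from the directions toward the camera and toward the decoded pattern point on the CAVE screens. The sketch below shows only that textbook half-vector relation; the coordinates and function are placeholders, not the paper’s pipeline.

```python
# Illustrative sketch of the deflectometry principle (not the paper's pipeline):
# for a hypothesised surface point, the normal is the half-vector between the
# direction toward the camera and the direction toward the decoded pattern point
# shown on a CAVE screen. All coordinates here are placeholders.
import numpy as np

def surface_normal(p_surface, p_camera, p_pattern):
    """Normal implied by the law of reflection at p_surface."""
    to_camera = p_camera - p_surface
    to_pattern = p_pattern - p_surface
    to_camera /= np.linalg.norm(to_camera)
    to_pattern /= np.linalg.norm(to_pattern)
    n = to_camera + to_pattern          # half-vector bisects incident and reflected rays
    return n / np.linalg.norm(n)

# Hypothetical geometry: surface point at the origin, camera above,
# decoded pattern point on a CAVE wall.
print(surface_normal(np.array([0.0, 0.0, 0.0]),
                     np.array([0.0, 1.0, 1.0]),
                     np.array([1.0, 1.0, 0.0])))
```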

    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Get PDF
    Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and greater communication possibilities among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, and a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
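
    The abstract does not spell out how the skeletal data from the multiple Kinect sensors are combined, so the sketch below is only an assumed, generic strategy: transform each sensor’s joint estimates into a common CAVE frame and take a confidence-weighted average.

```python
# Generic sketch of merging joint positions from multiple Kinect sensors into one
# skeleton. The paper's actual fusion scheme is not described in the abstract, so the
# confidence-weighted average below is an assumed, illustrative strategy only.
import numpy as np

def fuse_joint(observations):
    """observations: list of (position_in_cave_frame, tracking_confidence) tuples."""
    positions = np.array([p for p, _ in observations], dtype=float)
    weights = np.array([c for _, c in observations], dtype=float)
    if weights.sum() == 0.0:
        return None                      # joint not tracked by any sensor
    return (positions * weights[:, None]).sum(axis=0) / weights.sum()

# Hypothetical right-hand joint seen by two sensors, already registered to the CAVE frame.
print(fuse_joint([(np.array([0.42, 1.31, 0.90]), 0.8),
                  (np.array([0.45, 1.28, 0.93]), 0.5)]))
```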

    The Effects of Finger-Walking in Place (FWIP) on Spatial Knowledge Acquisition in Virtual Environments

    Get PDF
    Spatial knowledge, necessary for efficient navigation, comprises route knowledge (memory of landmarks along a route) and survey knowledge (an overall, map-like representation). Virtual environments (VEs) have been suggested as a powerful tool for understanding some issues associated with human navigation, such as spatial knowledge acquisition. The Finger-Walking-in-Place (FWIP) interaction technique is a locomotion technique for navigation tasks in immersive virtual environments (IVEs). FWIP was designed to map a human’s embodied ability, overlearned through natural walking for navigation, to a finger-based interaction technique. Its implementation on Lemur and iPhone/iPod Touch devices was evaluated in our previous studies. In this paper, we present a comparative study of the joystick’s flying technique versus FWIP. Our experimental results show that FWIP yields better performance than joystick flying for route knowledge acquisition in our maze navigation tasks.
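
    As an illustrative aside, a finger-walking-in-place style mapping advances the virtual viewpoint by one scaled step per detected finger step. The step length, scale, and heading handling below are assumptions for illustration, not the paper’s exact mapping.

```python
# Minimal sketch of a finger-walking-in-place style locomotion mapping: each detected
# finger "step" advances the virtual viewpoint along its current heading. The
# step-to-distance scale and heading handling are illustrative assumptions.
import numpy as np

def advance_viewpoint(viewpoint, heading_rad, step_length=0.7, scale=1.0):
    """Move the viewpoint forward by one virtual step along the current heading."""
    direction = np.array([np.cos(heading_rad), np.sin(heading_rad)])
    return viewpoint + scale * step_length * direction

pos = np.array([0.0, 0.0])
for _ in range(3):                       # three finger steps detected on the touch surface
    pos = advance_viewpoint(pos, heading_rad=np.pi / 2)
print(pos)                               # viewpoint moved roughly 2.1 m "north" in the maze
```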