Effects of Clutter on Egocentric Distance Perception in Virtual Reality
To assess the impact of clutter on egocentric distance perception, we
performed a mixed-design study with 60 participants in four different virtual
environments (VEs) with three levels of clutter. Additionally, we compared
indoor and outdoor VE characteristics and the field of view (FOV) of the
head-mounted display (HMD). Wearing a backpack computer and a wide-FOV HMD,
participants blind-walked towards three distinct targets at distances of
3 m, 4.5 m, and 6 m. The HMD's FOV was programmatically limited to
165°×110°, 110°×110°, or 45°×35°. The results showed that increased clutter in the
environment led to more precise distance judgment and less underestimation,
independent of the FOV. In comparison to outdoor VEs, indoor VEs showed more
accurate distance judgment. Additionally, participants made more accurate
judgments when viewing the VEs through wider FOVs.
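Underestimation in blind-walking studies like this one is typically quantified as the signed error between walked and actual target distance. A minimal sketch of that metric (the function name and the sample responses are illustrative, not data from the study):

```python
import numpy as np

def distance_judgment_error(walked_m, actual_m):
    """Signed percent error; negative values indicate underestimation."""
    walked = np.asarray(walked_m, dtype=float)
    actual = np.asarray(actual_m, dtype=float)
    return 100.0 * (walked - actual) / actual

# Hypothetical blind-walked responses to the three target distances:
print(distance_judgment_error([2.6, 3.9, 5.1], [3.0, 4.5, 6.0]))
# -> [-13.33, -13.33, -15.0], i.e. roughly 13-15% underestimation
```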
Blind Direct Walking Distance Judgment Research: A Best Practices Guide
Over the last 30 years, Virtual Reality (VR) research has shown that distance perception in VR is compressed compared to the real world. The full reason for this is still unknown. Although many experiments have been run to study the underlying causes of this compression, often with similar procedures, the experimental details either vary significantly between experiments or go unreported. This makes it difficult to accurately repeat or compare experiments and hinders new researchers trying to learn and follow current best practices. In this paper, we present a review of past research, focusing on details that typically go unreported. Using this review and the practices of our advisor as evidence, we suggest a standard to assist researchers in performing quality research on blind direct walking distance judgments in VR.
Direct and gestural interaction with relief: A 2.5D shape display
Actuated shape output provides novel opportunities for experiencing, creating, and manipulating 3D content in the physical world. While various shape displays have been proposed, a common approach utilizes an array of linear actuators to form 2.5D surfaces. By identifying a set of common interactions for viewing and manipulating content on shape displays, we argue why input modalities beyond direct touch are required. The combination of freehand gestures and direct touch provides additional degrees of freedom and resolves input ambiguities while keeping the locus of interaction on the shape output. To demonstrate the proposed combination of input modalities and explore applications for 2.5D shape displays, two example scenarios are implemented on a prototype system.
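To make the 2.5D output model concrete, here is a toy sketch that drives a pin array from a heightmap. The grid resolution, travel range, and nearest-neighbour resampling are assumptions for illustration, not the specifications of the prototype described above:

```python
import numpy as np

PINS_X, PINS_Y = 24, 24     # actuator grid resolution (assumed)
MAX_TRAVEL_MM = 50.0        # per-pin linear travel (assumed)

def heightmap_to_pins(heightmap):
    """Resample a normalized [0, 1] heightmap onto the pin grid and
    convert it to per-pin extension targets in millimetres."""
    h, w = heightmap.shape
    ys = np.linspace(0, h - 1, PINS_Y).astype(int)
    xs = np.linspace(0, w - 1, PINS_X).astype(int)
    sampled = heightmap[np.ix_(ys, xs)]
    return np.clip(sampled, 0.0, 1.0) * MAX_TRAVEL_MM

# Example: render a dome-shaped relief on the pin array.
yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
dome = np.clip(1.0 - (xx**2 + yy**2), 0.0, 1.0)
targets_mm = heightmap_to_pins(dome)
```

The clamp to a single travel value per pin is what makes the output 2.5D: each (x, y) location carries exactly one height, so overhangs cannot be represented.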
Purkinje images: Conveying different content for different luminance adaptations in a single image
Providing multiple meanings in a single piece of art has always been intriguing to both artists and observers. We present Purkinje images, which have different interpretations depending on the luminance adaptation of the observer. Finding such images is an optimization that minimizes the sum of the distance to one reference image in photopic conditions and the distance to another reference image in scotopic conditions. To model the shift of image perception between day and night vision, we decompose the input images into a Laplacian pyramid. Distances under different observation conditions in this representation are independent between pyramid levels and pixel positions and become matrix multiplications. The optimal pixel colour can be found by inverting a small, per-pixel linear system in real time on a GPU. Finally, two user studies analyze our results in terms of the recognition performance and fidelity with respect to the reference images.
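The per-pixel solve described above is ordinary least squares. A minimal sketch, assuming linear 3×3 models P and S that map a display colour to its perceived response under photopic and scotopic adaptation (the placeholder matrices below are not the paper's calibrated models):

```python
import numpy as np

def optimal_pixel(P, S, p_target, s_target):
    """Minimize |P x - p|^2 + |S x - s|^2 over the pixel colour x.
    The normal equations give x = (P^T P + S^T S)^{-1}(P^T p + S^T s),
    a small per-pixel linear system."""
    A = P.T @ P + S.T @ S
    b = P.T @ p_target + S.T @ s_target
    return np.linalg.solve(A, b)

P = np.eye(3)                    # placeholder photopic (cone) model
S = np.full((3, 3), 1.0 / 3.0)   # placeholder scotopic (rod pooling) model
x = optimal_pixel(P, S, np.array([0.8, 0.2, 0.2]),
                        np.array([0.5, 0.5, 0.5]))
```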
Peripheral visual cues and their effect on the perception of egocentric depth in virtual and augmented environments
The underestimation of depth in virtual environments at medium-field distances is a well-studied phenomenon. However, the degree to which underestimation occurs varies widely from one study to the next, with some studies reporting as much as 68% underestimation in distance and others as little as 6% (Thompson et al. [38] and Jones et al. [14]). In particular, the study detailed in Jones et al. [14] found a surprisingly small underestimation effect in a virtual environment (VE) and no effect in an augmented environment (AE). These are highly unusual results when compared to the large body of existing work in virtual and augmented distance judgments [16, 31, 36–38, 40–43]. The series of experiments described in this document attempted to determine the cause of these unusual results. Specifically, Experiment I aimed to determine if the experimental design was a factor and also to determine if participants were improving their performance throughout the course of the experiment. Experiment II analyzed two possible sources of implicit feedback in the experimental procedures and identified visual information available in the lower periphery as a key source of feedback. Experiment III analyzed distance estimation when all peripheral visual information was eliminated. Experiment IV then illustrated that optical flow in a participant's periphery is a key factor in facilitating improved depth judgments in both virtual and augmented environments. Experiment V attempted to further reduce cues in the periphery by removing a strongly contrasting white surveyor's tape from the center of the hallway, and found that participants continued to significantly adapt even when given very sparse peripheral cues. The final experiment, Experiment VI, found that when participants' views are restricted to the field of view of the screen area on the return walk, adaptation still occurs in both virtual and augmented environments.
Transfer of albedo and local depth variation to photo-textures
Acquisition of displacement and albedo maps for full building façades is a difficult problem, traditionally achieved through a labor-intensive artistic process. In this paper, we present a material appearance transfer method, Transfer by Analogy, designed to infer surface detail and diffuse reflectance for textured surfaces like those present in building façades. We begin by acquiring small exemplars (displacement and albedo maps) in accessible areas, where capture conditions can be controlled. We then transfer these properties to a complete photo-texture constructed from reference images captured under diffuse daylight illumination. Our approach allows super-resolution inference of albedo and displacement from information in the photo-texture. When transferring appearance from multiple exemplars to façades containing multiple materials, our approach also sidesteps the need for segmentation. We show how we use these methods to create relightable models with a high degree of texture detail, reproducing the visually rich self-shadowing effects that would normally be difficult to capture using simple consumer equipment.
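As a toy illustration of analogy-style transfer, the sketch below matches each target patch to its nearest exemplar patch by appearance and copies the exemplar's albedo. Brute-force search on single-channel images is an illustrative simplification, not the paper's method:

```python
import numpy as np

def transfer_albedo(exemplar_photo, exemplar_albedo, target_photo, k=5):
    """All inputs are 2D float arrays; k is the (odd) patch size."""
    half = k // 2
    pad = lambda im: np.pad(im, half, mode="reflect")
    ep, tp = pad(exemplar_photo), pad(target_photo)
    # Collect every exemplar patch as a flat feature vector.
    eh, ew = exemplar_photo.shape
    feats = np.stack([ep[i:i + k, j:j + k].ravel()
                      for i in range(eh) for j in range(ew)])
    albedo_vals = exemplar_albedo.ravel()
    out = np.empty_like(target_photo)
    th, tw = target_photo.shape
    for i in range(th):
        for j in range(tw):
            q = tp[i:i + k, j:j + k].ravel()
            # Copy albedo from the most similar exemplar patch.
            out[i, j] = albedo_vals[np.argmin(((feats - q) ** 2).sum(1))]
    return out

# Tiny synthetic example (real inputs would be captured maps/photos).
rng = np.random.default_rng(0)
ex_photo, ex_albedo = rng.random((16, 16)), rng.random((16, 16))
albedo = transfer_albedo(ex_photo, ex_albedo, rng.random((16, 16)))
```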
Exploring Users' Pointing Performance on Virtual and Physical Large Curved Displays
Large curved displays have emerged as a powerful platform for collaboration,
data visualization, and entertainment. These displays provide highly immersive
experiences, a wider field of view, and higher satisfaction levels. Yet, large
curved displays are not commonly available due to their high costs. With the
recent advancement of Head Mounted Displays (HMDs), large curved displays can
be simulated in Virtual Reality (VR) with minimal cost and space requirements.
However, to consider the virtual display as an alternative to the physical
display, it is necessary to uncover user performance differences (e.g.,
pointing speed and accuracy) between these two platforms. In this paper, we
explored users' pointing performance on both physical and virtual large curved
displays. Specifically, in two studies we investigated users' performance
across the two platforms for standard pointing factors such as target width
and target amplitude, as well as users' position relative to the screen. Results
from user studies reveal no significant difference in pointing performance
between the two platforms when users are located at the same position relative
to the screen. In addition, we observe users' pointing performance improves
when they are located at the center of a semi-circular display compared to
off-centered positions. We conclude by outlining design implications for
pointing on large curved virtual displays. These findings show that large
curved virtual displays are a viable alternative to physical displays for
pointing tasks. (Published in the 29th ACM Symposium on Virtual Reality
Software and Technology, VRST 2023.)
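Target width and amplitude are the classic Fitts's-law factors. The abstract does not name the analysis model, but pointing studies of this kind conventionally report an index of difficulty and throughput; a minimal sketch of the Shannon formulation:

```python
import math

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(amplitude / width + 1.0)

def throughput(amplitude, width, movement_time_s):
    """Throughput in bits/s for one pointing condition."""
    return index_of_difficulty(amplitude, width) / movement_time_s

# Example: a 60 cm target at 240 cm amplitude, selected in 0.9 s.
ID = index_of_difficulty(240, 60)    # log2(5) ~= 2.32 bits
print(ID, throughput(240, 60, 0.9))  # ~2.32 bits, ~2.58 bits/s
```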
Local Light Alignment for Multi-Scale Shape Depiction
Motivated by recent findings in the field of visual perception, we present a novel approach for enhancing shape depiction and the perception of surface details. We propose a shading-based technique that relies on locally adjusting the direction of light to account for the different components of materials. Our approach ensures congruence between shape and shading flows, effectively enhancing the perception of shape and details while impairing neither the lighting nor the appearance of materials. It is formulated in a general way, allowing multi-scale enhancement in real time on the GPU as well as use in global illumination contexts. We also provide artists with fine control over the enhancement at each scale.
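To make the core idea concrete, here is a toy sketch that tilts a light vector by the deviation between a smoothed and a detailed surface normal, so shading follows the fine-scale shape. The strength-weighted Rodrigues rotation is an illustrative choice, not the paper's exact formulation:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def adjusted_light(light, n_smooth, n_detail, strength=1.0):
    """Tilt `light` by the rotation taking the smoothed normal to the
    detailed normal, scaled by `strength` (0 = no adjustment)."""
    axis = np.cross(n_smooth, n_detail)
    s = np.linalg.norm(axis)
    if s < 1e-8:
        return light  # normals already aligned, nothing to do
    axis = axis / s
    angle = strength * np.arcsin(np.clip(s, -1.0, 1.0))
    # Rodrigues' rotation of the light vector about `axis` by `angle`.
    return (light * np.cos(angle)
            + np.cross(axis, light) * np.sin(angle)
            + axis * np.dot(axis, light) * (1.0 - np.cos(angle)))

l = normalize(np.array([0.3, 0.5, 0.8]))
n0 = normalize(np.array([0.0, 0.0, 1.0]))   # smoothed surface normal
n1 = normalize(np.array([0.2, 0.0, 0.98]))  # normal with fine detail
print(adjusted_light(l, n0, n1))
```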
Perceptual Requirements for World-Locked Rendering in AR and VR
Stereoscopic, head-tracked display systems can show users realistic,
world-locked virtual objects and environments. However, discrepancies between
the rendering pipeline and physical viewing conditions can lead to perceived
instability in the rendered content resulting in reduced realism, immersion,
and, potentially, visually-induced motion sickness. The requirements to achieve
perceptually stable world-locked rendering are unknown due to the challenge of
constructing a wide field of view, distortion-free display with highly accurate
head- and eye-tracking. In this work, we introduce new hardware and software,
built upon recently introduced hardware, and present a system capable of
rendering virtual objects over real-world references without perceivable drift
under these constraints. The platform is used to study acceptable errors in
render camera position for world-locked rendering in augmented and virtual
reality scenarios, where we find an order of magnitude difference in perceptual
sensitivity between them. We conclude by comparing study results with an
analytic model which examines changes to apparent depth and visual heading in
response to camera displacement errors. We identify visual heading as an
important consideration for world-locked rendering alongside depth errors from
incorrect disparity.
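The geometry behind such an analytic model can be sketched with elementary trigonometry; the functions and numbers below are illustrative assumptions rather than the paper's model:

```python
import math

def heading_error_deg(lateral_error_m, depth_m):
    """Apparent visual-heading shift of a point at `depth_m` when the
    render camera is laterally displaced by `lateral_error_m`."""
    return math.degrees(math.atan2(lateral_error_m, depth_m))

def disparity_error_deg(depth_error_m, depth_m, ipd_m=0.063):
    """Change in binocular disparity (vergence-angle difference, deg)
    when rendered depth is off by `depth_error_m`."""
    verg = lambda d: 2.0 * math.atan2(ipd_m / 2.0, d)
    return math.degrees(verg(depth_m) - verg(depth_m + depth_error_m))

# A 5 mm render-camera error against an object 1 m away:
print(heading_error_deg(0.005, 1.0))    # ~0.29 deg heading shift
print(disparity_error_deg(0.005, 1.0))  # ~0.018 deg disparity change
```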