    Controlled Interaction: Strategies For Using Virtual Reality To Study Perception

    Immersive virtual reality systems employing head-mounted displays offer great promise for the investigation of perception and action, but there are well-documented limitations to most virtual reality systems. In the present article, we suggest strategies for studying perception/action interactions that rely on both scale-invariant metrics (such as power function exponents) and careful consideration of the requirements of the interactions under investigation. New data concerning the effect of pincushion distortion on the perception of surface orientation are presented, as well as data documenting the perception of dynamic distortions associated with head movements with uncorrected optics. A review of several successful uses of virtual reality to study the interaction of perception and action emphasizes scale-free analysis strategies that can achieve theoretical goals while minimizing assumptions about the accuracy of virtual simulations.
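The appeal of power-function exponents as scale-invariant metrics can be illustrated with a short sketch: uniformly rescaling the stimulus axis (as a miscalibrated display might) shifts the intercept of a log-log fit but leaves the exponent untouched. The data and scaling factor below are hypothetical, not from the article.

```python
import numpy as np

def fit_power_exponent(x, y):
    """Fit y = a * x^b by linear regression in log-log space; return b."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return b

# Hypothetical magnitude estimates following y = 2 * x^0.8
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x ** 0.8

# A uniform rescaling of the stimulus axis (e.g., a display that shrinks
# all simulated distances by 30%) changes only the intercept, not the slope:
b_original = fit_power_exponent(x, y)
b_rescaled = fit_power_exponent(0.7 * x, y)

print(round(b_original, 3), round(b_rescaled, 3))  # exponents agree
```

This is why exponent-based analyses can survive uncertainty about the absolute calibration of a virtual simulation: any multiplicative error in simulated distance cancels out of the slope.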

    The effects of belongingness on the Simultaneous Lightness Contrast: A virtual reality study

    Simultaneous Lightness Contrast (SLC) is the phenomenon whereby a grey patch on a dark background appears lighter than an equal patch on a light background. Interestingly, the lightness difference between these patches undergoes substantial augmentation when the two backgrounds are patterned, thereby forming the articulated-SLC display. There are two main interpretations of these phenomena: the midlevel interpretation maintains that the visual system groups the luminance within a set of contiguous frameworks, whilst the high-level one claims that the visual system splits the luminance into separate overlapping layers corresponding to separate physical contributions. This research aimed to test these two interpretations by systematically manipulating the viewing distance and the horizontal distance between the backgrounds of both the articulated and plain SLC displays. An immersive 3D Virtual Reality system was employed to reproduce identical alignment and distances, as well as to isolate participants from interfering luminance. Results showed that reducing the viewing distance resulted in increased contrast in both the plain- and articulated-SLC displays, and that increasing the horizontal distance between the backgrounds resulted in decreased contrast in the articulated condition but increased contrast in the plain condition. These results suggest that a comprehensive lightness theory should combine the two interpretations.

    Fidelity metrics for virtual environment simulations based on spatial memory awareness states

    This paper describes a methodology based on human judgments of memory awareness states for assessing the simulation fidelity of a virtual environment (VE) in relation to its real scene counterpart. To demonstrate the distinction between task performance-based approaches and additional human evaluation of cognitive awareness states, a photorealistic VE was created. The resulting scenes, displayed on a head-mounted display (HMD) with or without head tracking or on a desktop monitor, were then compared to the real-world task situation they represented, investigating spatial memory after exposure. Participants described how they completed their spatial recollections by selecting one of four choices of awareness states after retrieval in an initial test and a retention test a week after exposure to the environment. These reflected the level of visual mental imagery involved during retrieval, the familiarity of the recollection, and also included guesses, even if informed. Experimental results revealed variations in the distribution of participants’ awareness states across conditions while, in certain cases, task performance failed to reveal any. Experimental conditions that incorporated head tracking were not associated with visually induced recollections. Generally, simulation of task performance does not necessarily lead to simulation of the awareness states involved when completing a memory task. The general premise of this research focuses on how tasks are achieved, rather than only on what is achieved. The extent to which judgments of human memory recall, memory awareness states, and presence in the physical and VE are similar provides a fidelity metric of the simulation in question.

    The Underestimation Of Egocentric Distance: Evidence From Frontal Matching Tasks

    There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (1.5 gain). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
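The divergent matching patterns described above can be sketched numerically: a constant compression leaves the exponent at 1 (only the gain drops below 1), whereas compression that grows with distance shows up as an exponent below 1. The gains and distances below are illustrative values, not the paper's data.

```python
import numpy as np

def exponent(physical, matched):
    """Slope of log(matched) vs log(physical): the power-function exponent."""
    b, _ = np.polyfit(np.log(physical), np.log(matched), 1)
    return b

d = np.linspace(2.0, 30.0, 40)  # hypothetical target distances (m)

# Constant compression of egocentric distance: gain < 1, exponent ~ 1
ego = 0.7 * d
# Exocentric matches that compress more with distance: exponent ~ 0.67
exo = 1.9 * d ** 0.67

print(round(exponent(d, ego), 2), round(exponent(d, exo), 2))  # near 1 and 0.67
```

The log-log slope cleanly separates the two patterns even though both sets of matches underestimate physical distance over most of the range.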

    Perceptual Scale Expansion: An Efficient Angular Coding Strategy For Locomotor Space

    Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.
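The geometric consequence of a 1.5 angular gain can be worked through directly: for a ground point at distance d and eye height h, the true gaze declination is atan(h/d); if perception expands that angle by 1.5, the distance consistent with the expanded angle is compressed. The eye height below is an assumed illustrative value; only the 1.5 gain comes from the abstract.

```python
import math

def perceived_distance(d, eye_height=1.6, gain=1.5):
    """Distance implied by an angularly expanded gaze declination.

    True declination to a ground point at distance d is gamma = atan(h/d);
    if perception expands it by `gain`, the distance consistent with the
    expanded angle is h / tan(gain * gamma). Illustrative model; the
    eye height is an assumption, the 1.5 gain is from the abstract.
    """
    gamma = math.atan2(eye_height, d)
    return eye_height / math.tan(gain * gamma)

# At larger distances the small-angle limit gives d' ~ d / 1.5, i.e. the
# familiar ~2/3 compression of reported egocentric distance:
for d in (2.0, 5.0, 10.0, 20.0):
    print(f"{d:5.1f} m -> perceived {perceived_distance(d):5.2f} m")
```

In the small-angle limit, gamma ≈ h/d, so h / tan(1.5 * gamma) ≈ d / 1.5, which is how a single angular gain can account for a roughly constant compression of explicitly reported distance while leaving calibrated action accurate.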

    Perceived Space in the HTC Vive

    Underperception of egocentric distance in virtual reality has been a persistent concern for almost 20 years. Modern head-mounted displays (HMDs) appear to have begun to ameliorate underperception. The current study examined several aspects of perceived space in the HTC Vive. Blind-walking distance judgments, verbal distance judgments, and size judgments were measured in two distinct virtual environments (VEs)—a high-quality replica of a real classroom and an empty grass field—as well as the real classroom upon which the classroom VE was modeled. A brief walking interaction was also examined as an intervention for improving anticipated underperception in the VEs. Results from the Vive were compared to existing data using two older HMDs (nVisor SX111 and ST50). Blind-walking judgments were more accurate in the Vive compared to the older displays, and did not differ substantially from the real world or across VEs. Size judgments were more accurate in the classroom VE than the grass VE and in the Vive compared to the older displays. Verbal judgments were significantly smaller in the classroom VE compared to the real classroom and did not significantly differ across VEs. Blind-walking and size judgments were more accurate after walking interaction, but verbal judgments were unaffected. The results indicate that underperception of distance in the HTC Vive is less than in older displays but has not yet been completely resolved. With more accurate space perception afforded by modern HMDs, alternative methods for improving judgments of perceived space—such as walking interaction—may no longer be necessary.

    The Effect of Anthropometric Properties of Self-Avatars on Action Capabilities in Virtual Reality

    The field of Virtual Reality (VR) has seen a steady exponential uptake in the last decade and is being continuously incorporated into areas of popular interest like healthcare, training, recreation and gaming. This steady upward trend and prolonged popularity has resulted in numerous extravagant virtual environments, some that aim to mimic real-life experiences like combat training, while others intend to provide unique experiences that may otherwise be difficult to recreate, like flying over ancient Egypt as a bird. These experiences often showcase highly realistic graphics, intuitive interactions and unique avatar embodiment scenarios with the help of various tracking sensors, high definition graphic displays, sound systems, etc. The literature suggests that estimates and affordance judgments in VR scenarios such as the ones described above are affected by the properties and the nature of the avatar embodied by the user. Therefore, to provide users with the finest experiences it is crucial to understand the interaction between the embodied self and the action capabilities afforded by it in the surrounding virtual environment. In a series of studies aimed at exploring the effect of gender-matched body-scaled self-avatars on the user's perception, we investigate the effect of self-avatars on the perception of size of objects in an immersive virtual environment (IVE) and how this perception affects the actions one can perform as compared to the real world. In the process, we make use of newer tracking technology and graphic displays to investigate the perceived differences between real world environments and their virtual counterparts to understand how the spatial properties of the environment and the embodied self-avatars affect affordances by means of passability judgments.
We describe techniques for the creation and mapping of VR environments onto their real world counterparts and the creation of gender-matched body-scaled self-avatars that provide real-time full-body tracking. The first two studies investigate how newer graphical displays and off-the-shelf tracking devices can be utilized to create salient gender-matched body-scaled self-avatars and their effect on the judgment of passability as a result of the embodied body schema. The study involves creating complex scripts that automate the process of mapping virtual worlds onto their real world counterparts within a 1 cm margin of error and the creation of self-avatars that match the height, limb proportions and shoulder width of the participant using tracking sensors. The experiment involves making judgments about the passability of an adjustable doorway in the real world and in a virtual to-scale replica of the real world environment. The results demonstrated that the perception of affordances in IVEs is comparable to the real world but the behavior leading to it differs in VR. Also, the body-scaled self-avatars generated provide salient information yielding performance similar to the real world. Several insights and guidelines related to creating veridical virtual environments and realistic self-avatars were gained from this effort. The third study investigates how the presence of body-scaled self-avatars affects the perception of size of virtual handheld objects and the influence of the person-plus-virtual-object system created by lifting the said virtual object on passability. This is crucial to understand as VR simulations now often utilize self-avatars that carry objects while maneuvering through the environment. How users interact with these handheld objects can influence what they do in critical scenarios where split-second decisions can change the outcome, like combat training, role-playing games, first-person shooters, thrill rides, physiotherapy, etc.
It has also been reported that the avatar itself can influence the perception of size of virtual objects, in turn influencing action capabilities. There is ample research on different interaction techniques to manipulate objects in a virtual world, but the question of how the objects affect our action capabilities upon interaction remains unanswered, especially when the haptic feedback associated with holding a real object is mismatched or missing. The study investigates this phenomenon by having participants interact with virtual objects of different sizes and make frontal and lateral passability judgments to an adjustable aperture similar to the first experiment. The results suggest that the presence of self-avatars significantly affects affordance judgments. Interestingly, frontal and lateral judgments in IVEs seem to be similar, unlike in the real world. Investigating the concept of embodied body schema and its influence on action capabilities further, the fourth study looks at how embodying self-avatars that may vary slightly from one's real world body affects performance and behavior in dynamic affordance scenarios. In this particular study, we change the eye height of the participants in the presence or absence of self-avatars that are either bigger, smaller or the same size as the participant. We then investigate how this change in eye height and anthropometric properties of the self-avatar affects their judgments when crossing streets with oncoming traffic in virtual reality. We also evaluate any changes in the perceived walking speed as a result of embodying altered self-avatars. The findings suggest that the presence of self-avatars results in safer crossing behavior; however, scaling the eye height or the avatar does not seem to affect the perceived walking speed. A detailed discussion of all the findings can be found in the manuscript.
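A toy sketch of the kind of body-scaled passability judgment these studies measure: an aperture affords passage when it exceeds the actor's effective width by some critical ratio. The 1.3 ratio is a commonly cited value from the affordance literature, and all widths here are hypothetical; none of these numbers come from the studies above.

```python
def is_passable(aperture_width, effective_width, critical_ratio=1.3):
    """Body-scaled passability: an aperture affords walking through when
    it exceeds the actor's effective width by a critical ratio (1.3 is a
    commonly cited value from affordance research; illustrative only)."""
    return aperture_width >= critical_ratio * effective_width

shoulders = 0.45       # m, avatar shoulder width (hypothetical)
held_object = 0.30     # m, extra width added by a held virtual object

print(is_passable(0.60, shoulders))                # body alone
print(is_passable(0.60, shoulders + held_object))  # person-plus-object system
```

The second call illustrates the person-plus-virtual-object idea: lifting an object widens the effective body envelope, so the same doorway can flip from passable to impassable.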

    Near-Field Depth Perception in See-Through Augmented Reality

    This research studied egocentric depth perception in an augmented reality (AR) environment. Specifically, it involved measuring depth perception in the near visual field by using quantitative methods to measure the depth relationships between real and virtual objects. This research involved two goals: first, engineering a depth perception measurement apparatus and related calibration and measuring techniques for collecting depth judgments, and second, testing its effectiveness by conducting an experiment. The experiment compared two complementary depth judgment protocols: perceptual matching (a closed-loop task) and blind reaching (an open-loop task). It also studied the effect of a highly salient occluding surface; this surface appeared behind, coincident with, and in front of virtual objects. Finally, the experiment studied the relationship between dark vergence and depth perception.

    Impact of model fidelity in factory layout assessment using immersive discrete event simulation

    Discrete Event Simulation (DES) can help speed up the layout design process. It offers further benefits when combined with Virtual Reality (VR). The latest technology, Immersive Virtual Reality (IVR), immerses users in virtual prototypes of their manufacturing plants to-be, potentially helping decision-making. This work seeks to evaluate the impact of visual fidelity, which refers to the degree to which objects in VR conform to the real world, using an IVR visualisation of the DES model of an actual shop floor. User studies are performed using scenarios populated with low- and high-fidelity models. Study participants carried out four tasks representative of layout decision-making. Limitations of existing IVR technology were found to cause motion sickness. The results indicate that, for the particular group of naïve modellers used, there is no significant difference in benefits between low and high fidelity, suggesting that low-fidelity VR models may be more cost-effective for this group.