2,049 research outputs found

    The Effect of Anthropometric Properties of Self-Avatars on Action Capabilities in Virtual Reality

    The field of Virtual Reality (VR) has seen steady, exponential uptake over the last decade and is continuously being incorporated into areas of popular interest such as healthcare, training, recreation, and gaming. This upward trend and prolonged popularity have resulted in numerous extravagant virtual environments: some aim to mimic real-life experiences like combat training, while others intend to provide unique experiences that may otherwise be difficult to recreate, like flying over ancient Egypt as a bird. These experiences often showcase highly realistic graphics, intuitive interactions, and unique avatar embodiment scenarios with the help of various tracking sensors, high-definition graphic displays, sound systems, etc. The literature suggests that estimates and affordance judgments in VR scenarios such as these are affected by the properties and nature of the avatar embodied by the user. Therefore, to provide users with the finest experiences, it is crucial to understand the interaction between the embodied self and the action capabilities it affords in the surrounding virtual environment. In a series of studies aimed at exploring the effect of gender-matched, body-scaled self-avatars on the user's perception, we investigate the effect of self-avatars on the perception of the size of objects in an immersive virtual environment (IVE) and how this perception affects the actions one can perform compared to the real world. In the process, we make use of newer tracking technology and graphic displays to investigate the perceived differences between real-world environments and their virtual counterparts, to understand how the spatial properties of the environment and the embodied self-avatars affect affordances by means of passability judgments.
We describe techniques for creating VR environments and mapping them onto their real-world counterparts, and for creating gender-matched, body-scaled self-avatars with real-time full-body tracking. The first two studies investigate how newer graphical displays and off-the-shelf tracking devices can be utilized to create salient gender-matched, body-scaled self-avatars, and their effect on judgments of passability as a result of the embodied body schema. The work involves complex scripts that automate the process of mapping virtual worlds onto their real-world counterparts within a 1 cm margin of error, and the creation of self-avatars that match the height, limb proportions, and shoulder width of the participant using tracking sensors. The experiment involves making judgments about the passability of an adjustable doorway in the real world and in a virtual to-scale replica of the real-world environment. The results demonstrated that the perception of affordances in IVEs is comparable to the real world, but the behavior leading to it differs in VR. Also, the body-scaled self-avatars generated provide salient information, yielding performance similar to the real world. Several insights and guidelines related to creating veridical virtual environments and realistic self-avatars emerged from this effort. The third study investigates how the presence of body-scaled self-avatars affects the perceived size of virtual handheld objects, and how the person-plus-virtual-object system created by lifting a virtual object influences passability. This is crucial to understand, as VR simulations now often utilize self-avatars that carry objects while maneuvering through the environment. How users interact with these handheld objects can influence what they do in critical scenarios where split-second decisions can change the outcome, such as combat training, role-playing games, first-person shooters, thrill rides, physiotherapy, etc.
It has also been reported that the avatar itself can influence the perceived size of virtual objects, in turn influencing action capabilities. There is ample research on interaction techniques for manipulating objects in a virtual world, but the question of how those objects affect our action capabilities upon interaction remains unanswered, especially when the haptic feedback associated with holding a real object is mismatched or missing. The study investigates this phenomenon by having participants interact with virtual objects of different sizes and make frontal and lateral passability judgments at an adjustable aperture, similar to the first experiment. The results suggest that the presence of self-avatars significantly affects affordance judgments. Interestingly, frontal and lateral judgments in IVEs appear to be similar, unlike in the real world. Investigating the concept of the embodied body schema and its influence on action capabilities further, the fourth study looks at how embodying self-avatars that vary slightly from one's real-world body affects performance and behavior in dynamic affordance scenarios. In this study, we change the eye height of participants in the presence or absence of self-avatars that are either bigger than, smaller than, or the same size as the participant. We then investigate how this change in eye height and in the anthropometric properties of the self-avatar affects their judgments when crossing streets with oncoming traffic in virtual reality. We also evaluate any changes in perceived walking speed as a result of embodying altered self-avatars. The findings suggest that the presence of self-avatars results in safer crossing behavior; however, scaling the eye height or the avatar does not seem to affect perceived walking speed. A detailed discussion of all the findings can be found in the manuscript.
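The avatar body-scaling described above can be sketched as a simple set of per-feature scale factors. This is an illustrative sketch, not the authors' implementation; the measurement names and default-avatar dimensions are assumptions for demonstration.

```python
# Hypothetical sketch: scale a default avatar rig so its height, limb
# proportions, and shoulder width match a tracked participant's measurements.
from dataclasses import dataclass

@dataclass
class Anthropometrics:
    height: float          # metres (e.g. derived from HMD eye height)
    shoulder_width: float  # metres (distance between shoulder trackers)
    arm_length: float      # metres (shoulder-to-wrist tracker distance)

def avatar_scale_factors(default: Anthropometrics,
                         measured: Anthropometrics) -> dict:
    """Per-feature scale factors to apply to the default avatar rig."""
    return {
        "height": measured.height / default.height,
        "shoulders": measured.shoulder_width / default.shoulder_width,
        "arms": measured.arm_length / default.arm_length,
    }

# Example: a 1.60 m participant embodying a 1.75 m default avatar.
default_avatar = Anthropometrics(height=1.75, shoulder_width=0.42, arm_length=0.60)
participant = Anthropometrics(height=1.60, shoulder_width=0.38, arm_length=0.55)
print(avatar_scale_factors(default_avatar, participant))
```

In practice each factor would be applied to the corresponding bones of the avatar skeleton before retargeting the tracked motion.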

    The Effects of Head-Centric Rest Frames on Egocentric Distance Perception in Virtual Reality

    It has been shown through several research investigations that users tend to underestimate distances in virtual reality (VR). Virtual objects that appear close to users wearing a head-mounted display (HMD) might be located at a farther distance in reality. This discrepancy between the actual distance and the distance observed by users in VR was found to hinder users from benefiting from the full immersive VR experience, and several efforts have been directed toward finding its causes and developing tools that mitigate it. One hypothesis that stands out in the field of spatial perception is the rest frame hypothesis (RFH), which states that visual frames of reference (RFs), defined as fixed reference points of view in a virtual environment (VE), contribute to minimizing sensory mismatch. RFs have been shown to promote better eye-gaze stability and focus, reduce VR sickness, and improve visual search, along with other benefits. However, their effect on distance perception in VEs has not been evaluated. To explore and better understand the potential effects that RFs can have on distance perception in VR, we used a blind walking task to examine the effect of three head-centric RFs (a mesh mask, a nose, and a hat) on egocentric distance estimation. We performed a mixed-design study where we compared the effect of each of our chosen RFs across different environmental conditions and target distances in different 3D environments. We found that at near and mid-field distances, certain RFs can improve the user's distance estimation accuracy and reduce distance underestimation. Additionally, we found that participants judged distance more accurately in cluttered environments compared to uncluttered environments.
Our findings show that the characteristics of the 3D environment are important in distance-estimation-dependent tasks in VR, and that the addition of head-centric RFs, a simple avatar-augmentation method, can lead to meaningful improvements in distance judgments, user experience, and task performance in VR.
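The blind-walking measure used in studies like this one is commonly summarized as the ratio of walked distance to target distance. The sketch below is an assumed analysis, not the authors' code; the trial values are hypothetical.

```python
# Hypothetical sketch: quantify distance underestimation in a blind-walking
# task as the ratio of walked distance to target distance
# (1.0 = veridical, < 1.0 = underestimation, i.e. distance compression).
def estimation_ratio(walked: float, target: float) -> float:
    if target <= 0:
        raise ValueError("target distance must be positive")
    return walked / target

# (target m, walked m) per trial -- made-up example values.
trials = [(3.0, 2.6), (5.0, 4.1), (7.0, 5.8)]
ratios = [estimation_ratio(walked, target) for target, walked in trials]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean estimation ratio: {mean_ratio:.2f}")  # < 1.0 indicates compression
```

A condition (e.g. a particular rest frame) that pushes this mean ratio closer to 1.0 would be read as reducing underestimation.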

    Up, down, near, far: an online vestibular contribution to distance judgement

    Whether a visual stimulus seems near or far away depends partly on its vertical elevation. Contrasting theories suggest that perception of distance could vary with elevation either because of memory of previous upward efforts in climbing to overcome gravity, or because of fear of falling associated with the downward direction. The vestibular system provides a fundamental signal for the downward direction of gravity, but the relation between this signal and depth perception remains unexplored. Here we report an experiment on vestibular contributions to depth perception, using virtual reality. We asked participants to judge the absolute distance of an object presented on a plane at different elevations during brief artificial vestibular inputs. Relative to distance estimates collected with the object at the level of the horizon, participants tended to overestimate distances when the object was presented above the level of the horizon and the head was tilted upward, and to underestimate them when the object was presented below the level of the horizon. Interestingly, adding artificial vestibular inputs strengthened these distance biases, showing that online multisensory signals, and not only stored information, contribute to such distance illusions. Our results support the gravity theory of depth perception, and show that vestibular signals make an online contribution to the perception of effort, and thus of distance.

    Improving everyday computing tasks with head-mounted displays

    The proliferation of consumer-affordable head-mounted displays (HMDs) has brought a rash of entertainment applications for this burgeoning technology, but relatively little research has been devoted to exploring its potential home and office productivity applications. Can the unique characteristics of HMDs be leveraged to improve users’ ability to perform everyday computing tasks? My work strives to explore this question. One significant obstacle to using HMDs for everyday tasks is the fact that the real world is occluded while wearing them. Physical keyboards remain the most performant devices for text input, yet using a physical keyboard is difficult when the user can’t see it. I developed a system for aiding users typing on physical keyboards while wearing HMDs and performed a user study demonstrating the efficacy of my system. Building on this foundation, I developed a window manager optimized for use with HMDs and conducted a user survey to gather feedback. This survey provided evidence that HMD-optimized window managers can provide advantages that are difficult or impossible to achieve with standard desktop monitors. Participants also provided suggestions for improvements and extensions to future versions of this window manager. I explored the issue of distance compression, wherein users tend to underestimate distances in virtual environments relative to the real world, which could be problematic for window managers or other productivity applications seeking to leverage the depth dimension through stereoscopy. I also investigated a mitigation technique for distance compression called minification. I conducted multiple user studies, providing evidence that minification makes users’ distance judgments in HMDs more accurate without causing detrimental perceptual side effects. This work also provided some valuable insight into the human perceptual system. 
Taken together, this work represents valuable steps toward leveraging HMDs for everyday home and office productivity applications. I developed functioning software for this purpose, demonstrated its efficacy through multiple user studies, and gathered feedback for future directions by having participants use this software in simulated productivity tasks.
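Minification, the mitigation technique investigated above, is commonly formulated as rendering with a geometric field of view wider than the display's physical field of view, which shrinks the imagery on screen. The sketch below shows one common formulation of that relationship; it is an assumption for illustration, not necessarily the dissertation's exact implementation.

```python
# Hedged sketch of minification: to minify imagery by a factor m < 1, render
# with a geometric FOV wider than the display FOV, related through the
# tangent of the half-angle.
import math

def geometric_fov(display_fov_deg: float, minification: float) -> float:
    """Geometric FOV (degrees) that minifies imagery by `minification`."""
    if not 0 < minification <= 1:
        raise ValueError("minification factor should be in (0, 1]")
    half = math.radians(display_fov_deg / 2)
    return math.degrees(2 * math.atan(math.tan(half) / minification))

# A 10% minification on a 100-degree display needs a wider rendering FOV.
print(f"{geometric_fov(100.0, 0.9):.1f} degrees")
```

With a minification factor of 1.0 the geometric and display FOVs coincide; factors below 1.0 widen the rendered view, which is the manipulation reported to make distance judgments more accurate.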

    Mutual Gaze Support in Videoconferencing Reviewed

    Videoconferencing allows geographically dispersed parties to communicate by simultaneous audio and video transmissions. It is used in a variety of application scenarios with a wide range of coordination needs and efforts, such as private chat, discussion meetings, and negotiation tasks. In particular, in scenarios requiring certain levels of trust and judgement, non-verbal communication cues are highly important for effective communication. Mutual gaze support plays a central role in those high-coordination-need scenarios but generally lacks adequate technical support from videoconferencing systems. In this paper, we review technical concepts and implementations for mutual gaze support in videoconferencing, classify them, evaluate them according to a defined set of criteria, and give recommendations for future developments. Our review gives decision makers, researchers, and developers a tool to systematically apply and further develop videoconferencing systems in serious settings requiring mutual gaze. This should lead to well-informed decisions regarding the use and development of this technology and to a more widespread exploitation of the benefits of videoconferencing in general. For example, if videoconferencing systems supported high-quality mutual gaze in an easy-to-set-up and easy-to-use way, we could hold more effective and efficient recruitment interviews, court hearings, or contract negotiations.

    Expanding the bounds of seated virtual workspaces

    Mixed Reality (MR), Augmented Reality (AR) and Virtual Reality (VR) headsets can improve upon existing physical multi-display environments by rendering large, ergonomic virtual display spaces whenever and wherever they are needed. However, given the physical and ergonomic limitations of neck movement, users may need assistance to view these display spaces comfortably. Through two studies, we developed new ways of minimising the physical effort and discomfort of viewing such display spaces. We first explored how the mapping between gaze angle and display position could be manipulated, helping users view wider display spaces than currently possible within an acceptable and comfortable range of neck movement. We then compared our implicit control of display position based on head orientation against explicit user control, finding significant benefits in terms of user preference, workload and comfort for implicit control. Our novel techniques create new opportunities for productive work by leveraging MR headsets to create interactive wide virtual workspaces with improved comfort and usability. These workspaces are flexible and can be used on-the-go, e.g., to improve remote working or make better use of commuter journeys.
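A manipulated mapping between gaze angle and display position can be sketched as an amplified, clamped head-rotation gain. This is a simplified assumption for illustration; the gain and comfort-range values are hypothetical, not taken from the studies above.

```python
# Hypothetical sketch: map physical head yaw to an amplified virtual view yaw,
# so a wide virtual workspace stays reachable within a comfortable range of
# neck rotation.
def virtual_yaw(physical_yaw_deg: float, gain: float = 2.0,
                max_physical_deg: float = 45.0) -> float:
    """Amplified virtual yaw, with physical yaw clamped to a comfort range."""
    clamped = max(-max_physical_deg, min(max_physical_deg, physical_yaw_deg))
    return gain * clamped

# Turning the head 30 degrees physically views content 60 degrees off-centre.
print(virtual_yaw(30.0))  # 60.0
```

With these example values, a ±45° comfortable neck rotation covers a ±90° virtual workspace; the trade-off is a mismatch between proprioceptive and visual rotation that must stay small enough to remain comfortable.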

    Apparent extended body motions in depth.


    Understanding Interactive Experiences: Perceived Interactivity and Presence With and Without Other Avatars in the Online Virtual World Second Life

    Interactivity research lacks consensus regarding the qualities and consequences of interactive experiences. Empirical proof is needed to substantiate the numerous interactivity theories and provide direction for new media technology developers. Specifically, there is a shortage of research on differences between user experiences of interactivity when technology enables communication versus when it does not. In addition, interactivity research is often confounded by the construct of presence. This study’s objectives included: 1) identifying qualities associated with interactive experiences; 2) disambiguating the constructs of interactivity and presence; and 3) developing a measure of perceived interactivity for Virtual World (VW) research. The experimental design measured perceived interactivity and presence following completion of a simple task in the online VW known as Second Life. It was hypothesized that both perceived interactivity and presence would be greater for subjects encountering avatars believed to be controlled by other people than for subjects encountering no other avatars in the VW. A total of 180 subjects from the University of Kentucky participated in a 2 by 4 factorial experiment. Perceived interactivity was measured by modifying McMillan and Hwang’s Measure of Perceived Interactivity for the VW context. Two essential qualities of interactive experiences were identified: responsiveness and engagement. These qualities are characteristic of unmediated, face-to-face (FTF) conversation, which was perceived as the most interactive communication context, above technologies routinely described as interactive. Decreased responsiveness of technology at a second study venue caused a significant decline in perceived interactivity, demonstrating the importance of a technology’s reaction speed and the control provided to the user.
Significant main effects for perceived interactivity due to encountering other avatars were confounded by interaction effects due to differences in technology responsiveness. Interactivity and presence appear to be separate psychological constructs which covary in the context of a new media experience. Implications and directions for future research are discussed.

    Distortion of depth perception in a virtual environment application

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (leaves 119-130). By Jonathan D. Pfautz.
