
    The Rocketbox Library and the Utility of Freely Available Rigged Avatars

    As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, here we discuss the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas, ranging from crowd simulation to neuroscience, psychology, and sociology, have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars. We cover the current main alternatives for face and body animation and introduce upcoming capture methods. The second part presents the scientific evidence of the utility of rigged avatars for embodiment, as well as for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption.

    Towards Perception-based Character Animation


    Combining motion matching and orientation prediction to animate avatars for consumer-grade VR devices

    The animation of user avatars plays a crucial role in conveying their pose, gestures, and relative distances to virtual objects or other users. Self-avatar animation in immersive VR helps improve the user experience and provides a Sense of Embodiment. However, consumer-grade VR devices typically include at most three trackers: one at the Head Mounted Display (HMD), and two at the handheld VR controllers. Since the problem of reconstructing the user pose from such sparse data is ill-defined, especially for the lower body, the approach adopted by most VR games consists of assuming the body orientation matches that of the HMD, and applying animation blending and time-warping from a reduced set of animations. Unfortunately, this approach produces noticeable mismatches between user and avatar movements. In this work we present a new approach to animate user avatars that is suitable for current mainstream VR devices. First, we use a neural network to estimate the user's body orientation based on the tracking information from the HMD and the hand controllers. Then we use this orientation together with the velocity and rotation of the HMD to build a feature vector that feeds a Motion Matching algorithm. We built a MoCap database with animations of VR users wearing an HMD and used it to test our approach on both self-avatars and other users’ avatars. Our results show that our system can provide a large variety of lower body animations while correctly matching the user orientation, which in turn allows us to represent not only forward movements but also stepping in any direction. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE project) and the Spanish Ministry of Science and Innovation (PID2021-122136OB-C21).
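    The pipeline described above, building a query feature vector from HMD tracking plus a predicted body orientation and matching it against a MoCap database, can be sketched as follows. This is a minimal illustration only: the feature layout, names, and nearest-neighbor matching are assumptions, as the abstract does not specify the exact features or distance metric.

    ```python
    import numpy as np

    def build_feature_vector(hmd_velocity, hmd_rotation, predicted_body_yaw):
        """Concatenate HMD velocity (3D), HMD rotation (quaternion), and the
        body yaw predicted by the neural network into one query vector.
        The exact feature composition here is illustrative."""
        return np.concatenate([hmd_velocity, hmd_rotation, [predicted_body_yaw]])

    def motion_matching(query, database_features, animation_clips):
        """Return the animation clip whose stored features are nearest to the
        query vector (simple Euclidean nearest neighbor)."""
        distances = np.linalg.norm(database_features - query, axis=1)
        return animation_clips[int(np.argmin(distances))]
    ```

    In practice a motion-matching system would search per-frame pose features with acceleration structures and blend between matched frames, but the core idea of a feature-space nearest-neighbor lookup is the same.
    
    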

    An ‘In the Wild’ Experiment on Presence and Embodiment using Consumer Virtual Reality Equipment

    Consumer virtual reality systems are now becoming widely available. We report on a study on presence and embodiment within virtual reality that was conducted ‘in the wild’, in that data was collected from devices owned by consumers in uncontrolled settings, not in a traditional laboratory setting. Users of Samsung Gear VR and Google Cardboard devices were invited by web pages and email invitation to download and run an app that presented a scenario where the participant would sit in a bar watching a singer. Each participant saw one of eight variations of the scenario: with or without a self-avatar; singer inviting the participant to tap along or not; singer looking at the participant or not. Despite the uncontrolled situation of the experiment, results from an in-app questionnaire showed tentative evidence that a self-avatar had a positive effect on self-reports of presence and embodiment, and that the singer inviting the participant to tap along had a negative effect on self-reports of embodiment. We discuss the limitations of the study and the platforms, and the potential for future open virtual reality experiments.

    Populating 3D Cities: a True Challenge

    In this paper, we describe how we can model crowds in real-time using dynamic meshes, static meshes and impostors. Techniques to introduce variety in crowds, including colors, shapes, textures, individual animation, individualized path-planning, and simple and complex accessories, are explained. We also present a hybrid architecture to handle the path planning of thousands of pedestrians in real time, while ensuring dynamic collision avoidance. Several behavioral aspects are presented, such as gaze control and group behaviour, as well as the specific technique of crowd patches.
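    The three rendering representations mentioned above (dynamic meshes, static meshes, impostors) are typically assigned per pedestrian by distance-based level of detail. A minimal sketch of such a selection policy is shown below; the distance thresholds are hypothetical, since the abstract gives no concrete numbers.

    ```python
    def select_representation(distance_to_camera, near=15.0, far=60.0):
        """Pick a crowd-character representation by camera distance.
        Thresholds (in meters) are illustrative assumptions."""
        if distance_to_camera < near:
            return "dynamic_mesh"  # fully skinned and animated skeleton
        elif distance_to_camera < far:
            return "static_mesh"   # pre-baked rigid poses, cheaper to render
        else:
            return "impostor"      # textured billboard, cheapest of all
    ```

    The hybrid approach keeps per-frame skinning cost bounded: only the small set of nearby characters pays for full skeletal animation, while thousands of distant pedestrians render as billboards.
    
    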


    The Effect of Anthropometric Properties of Self-Avatars on Action Capabilities in Virtual Reality

    The field of Virtual Reality (VR) has seen a steady exponential uptake in the last decade and is being continuously incorporated into areas of popular interest like healthcare, training, recreation and gaming. This steady upward trend and prolonged popularity has resulted in numerous extravagant virtual environments, some that aim to mimic real-life experiences like combat training, while others intend to provide unique experiences that may otherwise be difficult to recreate like flying over ancient Egypt as a bird. These experiences often showcase highly realistic graphics, intuitive interactions and unique avatar embodiment scenarios with the help of various tracking sensors, high definition graphic displays, sound systems, etc. The literature suggests that estimates and affordance judgments in VR scenarios such as the ones described above are affected by the properties and the nature of the avatar embodied by the user. Therefore, to provide users with the finest experiences it is crucial to understand the interaction between the embodied self and the action capabilities afforded by it in the surrounding virtual environment. In a series of studies aimed at exploring the effect of gender matched body-scaled self-avatars on the user's perception, we investigate the effect of self-avatars on the perception of size of objects in an immersive virtual environment (IVE) and how this perception affects the actions one can perform as compared to the real world. In the process, we make use of newer tracking technology and graphic displays to investigate the perceived differences between real world environments and their virtual counterparts to understand how the spatial properties of the environment and the embodied self-avatars affect affordances by means of passability judgments.
We describe techniques for the creation and mapping of VR environments onto their real world counterparts and the creation of gender matched body-scaled self-avatars that provide real time full-body tracking. The first two studies investigate how newer graphical displays and off-the-shelf tracking devices can be utilized to create salient gender matched body-scaled self-avatars and their effect on the judgment of passability as a result of the embodied body schema. The study involves creating complex scripts that automate the process of mapping virtual worlds onto their real world counterparts within a 1cm margin of error and the creation of self-avatars that match the height, limb proportions and shoulder width of the participant using tracking sensors. The experiment involves making judgments about the passability of an adjustable doorway in the real world and in a virtual to-scale replica of the real world environment. The results demonstrated that the perception of affordances in IVEs is comparable to the real world, but the behavior leading to it differs in VR. Also, the body-scaled self-avatars generated provide salient information, yielding performance similar to the real world. Several insights and guidelines related to creating veridical virtual environments and realistic self-avatars emerged from this effort. The third study investigates how the presence of body-scaled self-avatars affects the perception of size of virtual handheld objects and the influence of the person-plus-virtual-object system created by lifting the said virtual object on passability. This is crucial to understand as VR simulations now often utilize self-avatars that carry objects while maneuvering through the environment. How users interact with these handheld objects can influence what they do in critical scenarios where split-second decisions can change the outcome, like combat training, role-playing games, first person shooting, thrilling rides, physiotherapy, etc.
It has also been reported that the avatar itself can influence the perception of size of virtual objects, in turn influencing action capabilities. There is ample research on different interaction techniques to manipulate objects in a virtual world, but the question of how the objects affect our action capabilities upon interaction remains unanswered, especially when the haptic feedback associated with holding a real object is mismatched or missing. The study investigates this phenomenon by having participants interact with virtual objects of different sizes and make frontal and lateral passability judgments to an adjustable aperture, similar to the first experiment. The results suggest that the presence of self-avatars significantly affects affordance judgments. Interestingly, frontal and lateral judgments in IVEs seem to be similar, unlike in the real world. Investigating the concept of embodied body schema and its influence on action capabilities further, the fourth study looks at how embodying self-avatars that may vary slightly from one's real world body affects performance and behavior in dynamic affordance scenarios. In this particular study, we change the eye height of the participants in the presence or absence of self-avatars that are either bigger, smaller or the same size as the participant. We then investigate how this change in eye height and the anthropometric properties of the self-avatar affect their judgments when crossing streets with oncoming traffic in virtual reality. We also evaluate any changes in the perceived walking speed as a result of embodying altered self-avatars. The findings suggest that the presence of self-avatars results in safer crossing behavior; however, scaling the eye height or the avatar does not seem to affect the perceived walking speed. A detailed discussion of all the findings can be found in the manuscript.
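    The body-scaling step described in these studies, matching a template avatar to a participant's measured dimensions, can be illustrated with a simple uniform scaling by height ratio. This is a hypothetical sketch: the thesis also matches limb proportions and shoulder width separately, and the function and bone names below are illustrative, not from the source.

    ```python
    def scale_avatar(avatar_bone_lengths, avatar_height, participant_height):
        """Uniformly scale a template avatar's bone lengths so its total
        height matches the participant's measured height.
        avatar_bone_lengths: dict mapping bone name -> length in meters."""
        s = participant_height / avatar_height
        return {bone: length * s for bone, length in avatar_bone_lengths.items()}
    ```

    A full body-scaled self-avatar would additionally retarget individual limb segments from tracker measurements (e.g. shoulder width from controller positions in a calibration pose) rather than applying a single global factor.
    
    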