
    Timing and correction of stepping movements with a virtual reality avatar

    Research into the ability to coordinate one's movements with external cues has focussed on simple rhythmic auditory and visual stimuli, or on interpersonal coordination with another person. Coordinating movements with a virtual avatar has not been explored in the context of responses to temporal cues. To determine whether cueing movements with a virtual avatar is effective, people's ability to coordinate accurately with the stimuli needs to be investigated. Here we focus on temporal cues, as timing studies show that visual cues can be difficult to follow. Real stepping movements were mapped onto an avatar using motion capture data. Healthy participants were then motion captured whilst stepping in time with the avatar's movements, as viewed through a virtual reality headset. The timing of one of the avatar's step cycles was accelerated or decelerated by 15% to create a temporal perturbation, which participants needed to correct for in order to remain in time. Step-onset times of participants relative to the corresponding step onsets of the avatar were used to measure the timing errors (asynchronies) between them. Participants completed either a visual-only condition or an auditory-visual condition with footstep sounds included, at two stepping tempos (Fast: 400 ms interval; Slow: 800 ms interval). Participants' asynchronies exhibited slow drift in the Visual-Only condition, but became stable in the Auditory-Visual condition. Moreover, we observed a clear corrective response to the phase perturbation at both the fast and slow tempos in the auditory-visual condition. We conclude that an avatar's movements can be used to influence a person's own motion, but should be accompanied by auditory cues congruent with the movement to ensure a suitable level of entrainment is achieved. This approach has applications in physiotherapy, where virtual avatars present an opportunity to provide guidance that assists patients in adhering to prescribed exercises.
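
    As a rough illustration of the asynchrony measure described above, the sketch below computes signed step-onset asynchronies and builds an avatar step sequence with one cycle perturbed by 15%. It is a minimal sketch under stated assumptions: the function and variable names (asynchronies, perturbed_intervals) are illustrative choices, not taken from the study's analysis code.

```python
import numpy as np

def asynchronies(participant_onsets, avatar_onsets):
    """Signed timing errors in seconds: negative means the participant stepped early."""
    return np.asarray(participant_onsets) - np.asarray(avatar_onsets)

def perturbed_intervals(base_interval, n_steps, perturbed_step, factor):
    """Constant inter-step intervals with one cycle scaled by `factor` (e.g. 1.15 = +15%)."""
    intervals = np.full(n_steps, base_interval, dtype=float)
    intervals[perturbed_step] *= factor
    return intervals

# Illustrative example: slow tempo (800 ms) with step 10 decelerated by 15%.
avatar_onsets = np.cumsum(perturbed_intervals(0.8, 20, 10, 1.15))
participant_onsets = avatar_onsets + np.random.normal(0.02, 0.03, avatar_onsets.size)
print(asynchronies(participant_onsets, avatar_onsets))
```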

    Investigating Bodily Awareness in Adults and Children Using Virtual Reality

    In the Full Body Illusion, adults can embody a virtual body when provided with certain cues. By manipulating variables such as the movement and appearance of a virtual body, experimenters can identify the cues needed to build a stable sense of bodily awareness. Such paradigms are also used with children, to build a picture of how bodily awareness changes with development. Here, we used virtual reality and motion capture to provide participants with a first-person perspective of a virtual body to investigate bodily awareness in adults and children. In Experiment 1 we showed adults a virtual body for 5, 30, or 55 seconds, during which it moved synchronously or asynchronously with their movements, or remained static. After 5 seconds, participants reported experiencing embodiment even when movement was asynchronous. These ratings decreased with further exposure to asynchronous movement but remained high in the synchronous and no-movement conditions, suggesting that adults embody an avatar seen from a first-person perspective by default. In Experiment 2, adults and children viewed bodies which were either 50% or 100% of their own body size. Both groups perceived the virtual environment, rather than their own body, to have changed size, with the exception that children perceived their body to have grown in the ‘large’ body condition. Therefore, body-relative size perception is roughly adult-like from the age of five, with slightly more tolerance for body growth. In Experiment 3, we piloted the use of skin conductance as a measure of embodiment in a group of adults, in response to a ‘child-friendly threat’. Unlike self-reported embodiment, skin conductance did not differ between visuomotor synchrony conditions. Further work is needed to apply psychophysiological measures of embodiment to children. Overall, the work described here contributes to the understanding of bodily awareness across ages, as well as having practical applications in virtual reality design.
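
    Skin conductance responses to a threat are typically quantified as the peak rise in conductance within a short window after threat onset, relative to the level at onset. The sketch below follows that generic approach; the window length and function name are illustrative assumptions, not details reported by the authors.

```python
import numpy as np

def scr_amplitude(times, conductance, threat_onset, window=5.0):
    """Peak skin conductance rise (microsiemens) within `window` seconds of threat onset."""
    times = np.asarray(times)
    conductance = np.asarray(conductance)
    mask = (times >= threat_onset) & (times <= threat_onset + window)
    segment = conductance[mask]
    return segment.max() - segment[0]  # rise relative to the level at threat onset
```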

    My body until proven otherwise: Exploring the time course of the full body illusion

    Evidence from the Full Body Illusion (FBI) has shown that adults can embody full bodies which are not their own when those bodies move synchronously with their own body or are viewed from a first-person perspective. However, there is currently no consensus regarding the time course of the illusion. Here, for the first time, we examined the effect of visuomotor synchrony (synchronous/asynchronous/no movement) on the FBI over time. Surprisingly, we found evidence of embodiment of a virtual body after five seconds in all conditions. Embodiment decreased with increased exposure to asynchronous movement, but remained high in the synchronous and no-movement conditions. We suggest that embodiment of a body seen from a first-person perspective is felt by default, and that embodiment can then be lost in the face of contradictory cues. These results have significant implications for our understanding of how multisensory cues contribute to embodiment.

    Drift and ownership toward a distant virtual body.

    In body ownership illusions participants feel that a mannequin or virtual body (VB) is their own. Earlier results suggest that body ownership over a body seen from behind in extrapersonal space is possible when the surrogate body is visually stroked and tapped on its back, while spatially and temporally synchronous tactile stimulation is applied to the participant's back. This result has been disputed with the claim that it can be explained by self-recognition rather than somatic body ownership. We carried out an experiment with 30 participants in a between-groups design. All saw the back of a VB 1.2 m in front of them that moved in real time, driven by upper-body motion capture. All felt tactile stimulation on their back; for 15 of them this was spatially and temporally synchronous with the stimulation they saw on the back of the VB, and asynchronous for the other 15. After 3 min a revolving fan above the VB descended and stopped at the position of the VB's neck. A questionnaire assessed referral of touch to the VB, body ownership, the illusion of drifting forwards toward the VB, and of the VB drifting backwards. Heart rate deceleration (HRD) and the amount of head movement during the threat period were used to assess the response to the threat from the fan. Results showed that although referral of touch was significantly greater in the synchronous condition than in the asynchronous one, there were no other differences between the conditions. However, a further multivariate analysis revealed that in the visuotactile synchronous condition HRD and head movement increased with the illusion of forward drift and decreased with backwards drift. Body ownership contributed positively to these drift sensations. Our conclusion is that the setup results in a contradiction (somatic feelings associated with a distant body) that the brain attempts to resolve by generating drift illusions that would make the two bodies coincide.
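
    Heart rate deceleration is commonly computed as the drop in mean heart rate during the threat window relative to a preceding baseline. The sketch below assumes heartbeat (R-peak) times in seconds; the function names and window handling are illustrative, not the authors' analysis code.

```python
import numpy as np

def mean_hr(beat_times, t_start, t_end):
    """Mean heart rate (bpm) from beat times (s) whose intervals end in [t_start, t_end)."""
    beats = np.asarray(beat_times)
    ibis = np.diff(beats)            # inter-beat intervals (s)
    ends = beats[1:]                 # timestamp each interval at its ending beat
    mask = (ends >= t_start) & (ends < t_end)
    return 60.0 / ibis[mask].mean()

def heart_rate_deceleration(beat_times, baseline_window, threat_window):
    """Positive values indicate the heart slowed during the threat relative to baseline."""
    return mean_hr(beat_times, *baseline_window) - mean_hr(beat_times, *threat_window)
```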

    Real Virtuality: A Code of Ethical Conduct. Recommendations for Good Scientific Practice and the Consumers of VR-Technology

    The goal of this article is to present a first list of ethical concerns that may arise from research and personal use of virtual reality (VR) and related technology, and to offer concrete recommendations for minimizing those risks. Many of the recommendations call for focused research initiatives. In the first part of the article, we discuss the relevant evidence from psychology that motivates our concerns. In Section “Plasticity in the Human Mind,” we cover some of the main results suggesting that one’s environment can influence one’s psychological states, as well as recent work on inducing illusions of embodiment. Then, in Section “Illusions of Embodiment and Their Lasting Effect,” we go on to discuss recent evidence indicating that immersion in VR can have psychological effects that last after leaving the virtual environment. In the second part of the article, we turn to the risks and recommendations. We begin, in Section “The Research Ethics of VR,” with the research ethics of VR, covering six main topics: the limits of experimental environments, informed consent, clinical risks, dual-use, online research, and a general point about the limitations of a code of conduct for research. Then, in Section “Risks for Individuals and Society,” we turn to the risks of VR for the general public, covering four main topics: long-term immersion, neglect of the social and physical environment, risky content, and privacy. We offer concrete recommendations for each of these 10 topics, summarized in Table 1.

    Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter

    Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.
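
    A scripted presenter of this kind can be represented as a timed list of actions, each bundling speech, a gesture, and an optional slide region to point at. The sketch below is only a schematic data structure in that spirit; the class and field names are hypothetical and are not taken from the system described.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PresenterAction:
    """One scripted step: when it starts, what is said, and where the presenter points."""
    start_time: float                     # seconds from the start of the presentation
    speech: str = ""                      # text passed to the speech synthesiser
    gesture: str = "idle"                 # named posture or pointing animation
    target_region: Optional[str] = None   # id of the 2D display region to point at

script: List[PresenterAction] = [
    PresenterAction(0.0, speech="Welcome to the exhibition.", gesture="greet"),
    PresenterAction(4.5, speech="Notice the brushwork in this painting.",
                    gesture="point", target_region="painting_detail"),
]
```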

    Visual cues in musical synchronisation

    Although music performance is generally thought of as an auditory activity in the Western tradition, the presence of continuous visual information in live music contributes to the cohesiveness of music ensembles, which presents an interesting psychological phenomenon in which audio and visual cues are presumably integrated. In order to investigate how auditory and visual sensory information is combined in the basic process of synchronising movements with music, this thesis focuses on both musicians and nonmusicians as they respond to two sources of visual information common to ensembles: the conductor, and the ancillary movements (movements that do not directly create sound; e.g. body sway or head nods) of co-performers. These visual cues were hypothesised to improve the timing of intentional synchronous action (matching a musical pulse), as well as to increase the synchrony of emergent ancillary movements between participant and stimulus. The visual cues were tested in controlled renderings of ensemble music arrangements, and were derived from real, biological motion. All three experiments employed the same basic synchronisation task: participants drummed along to the pulse of tempo-changing music while observing various visual cues. For each experiment, participants’ drum timing and upper-body movements were recorded as they completed the synchronisation task. The analyses used to quantify drum timing and ancillary movements came from theoretical approaches to movement timing and entrainment: information processing and dynamical systems. Overall, this thesis shows that basic musical timing is a common ability that is facilitated by visual cues in certain contexts, and that emergent ancillary movements and intentional synchronous movements in combination may best explain musical timing and synchronisation.
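
    Entrainment analyses of this kind often express each drum onset as a phase within its surrounding beat interval and summarise synchronisation stability with the mean resultant length of those phases. The sketch below follows that standard dynamical-systems treatment; it is an illustrative assumption about the analysis, not the thesis's actual code.

```python
import numpy as np

def relative_phase(onsets, beat_times):
    """Phase (radians) of each drum onset within its surrounding beat interval."""
    onsets = np.asarray(onsets)
    beat_times = np.asarray(beat_times)
    idx = np.clip(np.searchsorted(beat_times, onsets) - 1, 0, len(beat_times) - 2)
    frac = (onsets - beat_times[idx]) / (beat_times[idx + 1] - beat_times[idx])
    return 2 * np.pi * frac

def synchronisation_strength(phases):
    """Mean resultant length R: 1 = perfectly phase-locked, 0 = no entrainment."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))
```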

    The Rocketbox Library and the Utility of Freely Available Rigged Avatars

    As part of the open sourcing of the Microsoft Rocketbox avatar library for research and academic purposes, here we discuss the importance of rigged avatars for the Virtual and Augmented Reality (VR, AR) research community. Avatars, virtual representations of humans, are widely used in VR applications. Furthermore, many research areas ranging from crowd simulation to neuroscience, psychology, and sociology have used avatars to investigate new theories or to demonstrate how they influence human performance and interactions. We divide this paper into two main parts: the first gives an overview of the different methods available to create and animate avatars. We cover the current main alternatives for face and body animation as well as introduce upcoming capture methods. The second part presents the scientific evidence of the utility of rigged avatars, both for embodiment and for applications such as crowd simulation and entertainment. All in all, this paper attempts to convey why rigged avatars will be key to the future of VR and its wide adoption.

    Visual Fidelity Effects on Expressive Self-avatar in Virtual Reality: First Impressions Matter

    Owning a virtual body inside Virtual Reality (VR) offers a unique experience where, typically, users are able to control their self-avatar's body via tracked VR controllers. However, controlling a self-avatar's facial movements is harder because the HMD obstructs face tracking. In this work we present (1) the technical pipeline for creating and rigging self-alike avatars, whose facial expressions can then be controlled by users wearing the VIVE Pro Eye and VIVE Facial Tracker, and (2) based on this setting, two within-group studies on the psychological impact of the appearance realism of self-avatars, covering both the level of photorealism and self-likeness. Participants were asked to practise their presentation, in front of a mirror, in the body of a realistic-looking avatar and a cartoon-like one, both animated with body and facial mocap data. In Study 1 we made two bespoke self-alike avatars for each participant and found that, although participants found the cartoon-like character more attractive, they reported higher Body Ownership with whichever avatar they had in the first trial. In Study 2 we used generic avatars with higher-fidelity facial animation, and found a similar “first trial effect”, in which participants reported the avatar from their first trial as less creepy. Our results also suggested that participants found the facial expressions easier to control with the cartoon-like character. Further, our eye-tracking data suggested that although participants mainly faced their avatar during their presentation, their eye-gaze was focused elsewhere half of the time.
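
    Driving a rigged face from an HMD-mounted tracker typically amounts to copying each tracked expression weight (0-1) onto a corresponding avatar blendshape every frame. The sketch below shows only that mapping idea; the dictionary keys, blendshape names, and the set_blendshape callback are hypothetical placeholders, not the actual VIVE Facial Tracker or rigging API.

```python
# Hypothetical mapping from tracked expression names to avatar blendshape names.
TRACKER_TO_AVATAR = {
    "Jaw_Open": "jawOpen",
    "Mouth_Smile_Left": "mouthSmileLeft",
    "Mouth_Smile_Right": "mouthSmileRight",
}

def drive_avatar_face(tracker_weights, set_blendshape):
    """Copy each tracked expression weight (0-1) onto the corresponding avatar blendshape."""
    for tracker_key, avatar_shape in TRACKER_TO_AVATAR.items():
        set_blendshape(avatar_shape, float(tracker_weights.get(tracker_key, 0.0)))
```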