3 research outputs found

    Report on experiment supported by the Visionair project: Effect of Visual and Auditory Feedback Modulation on Embodiment and Emotional State in VR

    Immersive virtual reality is an ideal medium for studying complex human behaviour: it simulates an environment close to the real world while allowing full control over its properties. In this project we modulate auditory and visual feedback in VR to investigate the impact of this modulation on participants' behaviour and emotional state. The auditory component of the feedback modulation consists of changing the sound frequency of the footsteps played back to the user. The visual component involves changing the gait motion pattern of the self-animated avatar. We hypothesize that a higher footstep sound frequency and/or shifting the avatar's gait towards a happier, more energetic walk will affect participants' motion trajectories when walking in place in front of a virtual mirror. We further hypothesize that the manipulation may also influence the reported degree of embodiment of the avatar in the virtual environment, the degree of the illusion of presence, participants' perception of their own body, and their emotional state. Potentially, this VR setup could serve as a mild positive-emotion induction technique, and as exercise encouragement for people who are otherwise reluctant to do sports.

    The Role of Avatar Fidelity and Sex on Self-Motion Recognition

    Avatars are important for games and immersive social media applications. Although avatars are not yet complete digital copies of the user, they often aim to represent a user in terms of appearance (color and shape) and motion. Previous studies have shown that humans can recognize their own motions in point-light displays. Here, we investigated whether recognition of self-motion depends on the avatar's fidelity and on the congruency of the avatar's sex with that of the participant. Participants performed different actions that were captured and subsequently remapped onto three different body representations: a point-light figure, a male virtual avatar, and a female virtual avatar. In the experiment, participants viewed the motions displayed on the three body representations and reported whether each motion was their own. Our results show no influence of body representation on self-motion recognition performance: participants were equally sensitive in recognizing their own motion on the point-light figure and on the virtual characters. In line with previous research, recognition performance depended on the action. Sensitivity was highest for uncommon actions, such as dancing and playing ping-pong, and was around chance level for running, suggesting that the degree of individuality in performing certain actions affects self-motion recognition performance. Our results show that people were able to recognize their own motions even when individual body-shape cues were completely eliminated and when the avatar's sex differed from their own. This suggests that people may rely more on kinematic information than on shape and sex cues when recognizing their own motion. This finding has important implications for avatar design in games and immersive social media applications.