Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments
Background
When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication, by mapping one's movements onto one's own 3D avatar in real time, so the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help in communicating the meaning of a word. Participants worked in pairs and played a communication game, where one person had to describe the meanings of words to the other.
Principal Findings
In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e. both describing and guessing avatars were self-animated, compared with both avatars in a static neutral pose). Participants "passed" (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation, compared with an avatar animated by their partner's real movements. In both experiments participants used significantly more hand gestures when they played the game in the real world.
Conclusions
Taken together, the studies show how (a) virtual reality can be used to systematically study the influence of body gestures; (b) it is important that nonverbal communication is bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) there are differences in the number of body gestures that participants use with and without the head-mounted display; we discuss possible explanations for this and ideas for future investigation.
Experimenting with subjectivity instantiated into gestures — "My soul at my fingertips": My Avatar and Me
Recalling a sentence by the famous French pianist and pedagogue Marie Jaëll-Trautmann, who said, "I will play with my soul at my fingertips", I reflected on the role of performing gestures as an objectifiable aspect of the performer's subjectivity. Music performance implies a union between soul/mind and body which instantiates the sound message as a felt experience. The energetic tension produced by the "soul" in the creative act of performing spreads first throughout the entire body and finally reaches the "fingertips". Focusing on the primary role of the body in performance as a potential vehicle for capturing subjectivity, I asked myself:
What would happen if I visualized my performing body by projecting it into a "body image", an avatar? Could I use this pragmatic avatar of myself to look at my subjectivity from an outsider's perspective? Could I identify my lived experience as instantiated in my gestures through the lens of self-objectification based on empirical data?
The assumption that the body is an interface for the performer's intentionality, in terms of sound production and expressivity, inspired me to investigate my subjectivity through a "performative experiment" combining self-reflection with objective measurements of my bodily expressions during a performance. My performative presentation aims to bring to life the dialectical relation between subjectivity and objectivity in piano performance practice by alternating my live piano performance with interactive reflections on my captured avatar, projected on a screen as an objectification of myself. From this specific angle, my presentation experiments with subjectivity by treating the creative act of performing, instantiated in gestures, as a tactile sensation expressing the soul's intentions. This approach connects gesture and intentionality in music performance and mediates between scientific understanding and artistic concern.
Toward "Pseudo-Haptic Avatars": Modifying the Visual Animation of Self-Avatar Can Simulate the Perception of Weight Lifting
In this paper we study how the visual animation of a self-avatar can be artificially modified in real time in order to generate different haptic perceptions. In our experimental setup, participants could watch their self-avatar in a virtual environment in mirror mode while performing a weight-lifting task. Users could map their gestures onto the self-animated avatar in real time using a Kinect. We introduce three kinds of modification of the visual animation of the self-avatar according to the effort delivered by the virtual avatar: 1) changes in the spatial mapping between the user's gestures and the avatar, 2) different motion profiles of the animation, and 3) changes in the posture of the avatar (upper-body inclination). The experimental task consisted of ordering four virtual dumbbells according to their virtual weight. The user had to lift each virtual dumbbell by means of a tangible stick, and the animation of the avatar was modulated according to the virtual weight of the dumbbell. The results showed that altering the spatial mapping delivered the best performance. Nevertheless, participants globally appreciated all the different visual effects. Our results pave the way to the exploitation of such novel techniques in various VR applications such as sport training, exercise games, or industrial training scenarios, in single-user or collaborative mode.
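The first modification described above, changing the spatial mapping between the user's gestures and the avatar, can be illustrated as an attenuation of the control-display ratio: the heavier the virtual dumbbell, the less the avatar's hand moves per unit of real hand motion. A minimal sketch, assuming a simple height mapping and a made-up tuning constant `k` (this is an illustration of the general pseudo-haptic idea, not the authors' implementation):

```python
def avatar_hand_height(real_hand_height, rest_height, virtual_weight_kg, k=0.08):
    """Map the user's real hand height to the avatar's hand height.

    Displacement from the rest pose is attenuated by a control-display
    ratio that decreases with virtual weight; k is an assumed tuning
    constant, not a value from the paper.
    """
    cd_ratio = 1.0 / (1.0 + k * virtual_weight_kg)  # 1.0 when weightless
    displacement = real_hand_height - rest_height
    return rest_height + cd_ratio * displacement

# A heavy virtual dumbbell makes the avatar lift noticeably less than the
# user's real motion, which users tend to perceive as greater effort.
light = avatar_hand_height(1.5, 1.0, 1.0)   # tracks the real hand closely
heavy = avatar_hand_height(1.5, 1.0, 10.0)  # lags well behind the real hand
```

The same scheme extends naturally to the other two modifications: the motion profile corresponds to making `cd_ratio` vary over the course of the lift, and the posture change adds a weight-dependent upper-body inclination on top of the hand mapping.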
Exploring heritage through time and space: Supporting community reflection on the Highland Clearances
On the two-hundredth anniversary of the Kildonan clearances, when people were forcibly removed from their homes, the Timespan Heritage Centre has created a programme of community-centred work aimed at challenging preconceptions and encouraging reflection on this important historical process. This paper explores the innovative ways in which virtual world technology has facilitated community engagement, enhanced visualisation and encouraged reflection as part of this programme. An installation in which users navigate through a reconstruction of the pre-clearance Caen township is controlled through natural gestures and presented on a 300-inch, six-megapixel screen. This environment allows users to experience the past in new ways. The platform has value as an effective way for an educator, artist or hobbyist to create large-scale virtual environments using off-the-shelf hardware and open-source software. The result is an exhibit that also serves as a platform for experimentation into innovative ways of community co-creation and co-curation.
Natural User Interface for Education in Virtual Environments
Education and self-improvement are key features of human behavior. However, learning in the physical world is not always desirable or achievable, which is how simulators came to be. There are domains where purely virtual simulators can be created, in contrast to physical ones. In this research we present a novel environment for learning that uses a natural user interface. We humans are not designed to operate and manipulate objects via keyboard, mouse or controller. The natural way of interaction and communication is achieved through our actuators (hands and feet) and our sensors (hearing, vision, touch, smell and taste). That is why it makes more sense to use sensors that can track our skeletal movements, estimate our pose, and interpret our gestures. After acquiring and processing this natural input, a system can analyze those gestures and translate them into movement signals.
Generation of multi-modal dialogue for a net environment
In this paper an architecture and a special-purpose markup language for simulated affective face-to-face communication are presented. In systems based on this architecture, users will be able to watch embodied conversational agents interact with each other in virtual locations on the internet. The markup language, the Rich Representation Language (RRL), has been designed to provide an integrated representation of speech, gesture, posture and facial animation.
Refining personal and social presence in virtual meetings
Virtual worlds show promise for conducting meetings and conferences without the need for physical travel. Current experience suggests that the major limitation to the more widespread adoption and acceptance of virtual conferences is the failure of existing environments to provide a sense of immersion and engagement, or of "being there". These limitations are largely related to the appearance and control of avatars, and to the absence of means to convey non-verbal cues of facial expression and body language. This paper reports on a study involving the use of a mass-market motion sensor (Kinect™) and the mapping of participant action in the real world to avatar behaviour in the virtual world. This is coupled with full-motion video representation of participants' faces on their avatars to resolve both identity and facial expression issues. The outcomes of a small-group trial meeting based on this technology show a very positive reaction from participants, and the potential for further exploration of these concepts.