
    Visual Fidelity Effects on Expressive Self-avatar in Virtual Reality: First Impressions Matter

    Owning a virtual body inside Virtual Reality (VR) offers a unique experience where, typically, users control their self-avatar’s body via tracked VR controllers. However, controlling a self-avatar’s facial movements is harder because the HMD obstructs face tracking. In this work we present (1) the technical pipeline for creating and rigging self-alike avatars, whose facial expressions can then be controlled by users wearing the VIVE Pro Eye and VIVE Facial Tracker, and (2) based on this setup, two within-group studies on the psychological impact of the appearance realism of self-avatars, covering both the level of photorealism and self-likeness. Participants were asked to practise a presentation, in front of a mirror, embodied in a realistic-looking avatar and in a cartoon-like one, both animated with body and facial mocap data. In study 1 we made two bespoke self-alike avatars for each participant and found that although participants found the cartoon-like character more attractive, they reported higher Body Ownership with whichever avatar they had in the first trial. In study 2 we used generic avatars with higher-fidelity facial animation and found a similar “first trial effect”: participants reported the avatar from their first trial as less creepy. Our results also suggested that participants found the facial expressions easier to control with the cartoon-like character. Further, our eye-tracking data suggested that although participants mainly faced their avatar during their presentation, their eye gaze was focused elsewhere half of the time.
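The pipeline described above maps per-frame tracker outputs onto the rig of a self-avatar. A minimal sketch of that idea, with hypothetical blendshape names and a simple exponential smoothing step to reduce tracker jitter (none of these identifiers come from the paper or the VIVE SDK):

```python
# Hypothetical sketch: driving an avatar's facial blendshapes from
# per-frame tracker expression weights, with exponential smoothing.
from dataclasses import dataclass, field

@dataclass
class BlendshapeRig:
    """Holds current blendshape weights (0.0-1.0) for a rigged avatar face."""
    weights: dict = field(default_factory=dict)
    alpha: float = 0.5  # smoothing factor: higher = more responsive, less smooth

    def apply_frame(self, tracked: dict) -> dict:
        """Blend a new frame of tracker weights into the current pose."""
        for name, value in tracked.items():
            value = min(max(value, 0.0), 1.0)          # clamp to valid range
            prev = self.weights.get(name, 0.0)
            self.weights[name] = prev + self.alpha * (value - prev)
        return dict(self.weights)

rig = BlendshapeRig()
frame1 = rig.apply_frame({"jaw_open": 1.0, "smile_left": 0.4})
frame2 = rig.apply_frame({"jaw_open": 1.0, "smile_left": 0.4})
print(frame2["jaw_open"])  # 0.75 after two smoothing steps toward 1.0
```

The smoothing factor trades responsiveness against jitter; a real pipeline would tune it per expression channel.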

    Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification

    Through avatar embodiment in Virtual Reality (VR) we can achieve the illusion that an avatar is substituting our body: the avatar moves as we move and we see it from a first-person perspective. However, self-identification, the process of identifying a representation as being oneself, poses new challenges because a key determinant is that we see and have agency over our own face. Providing control over the face is hard with current HMD technologies because face tracking is either cumbersome or error-prone. However, limited animation is easily achieved based on speaking. We investigate the level of avatar enfacement, that is, believing that a picture of a face is one's own face, with three levels of facial animation: (i) one in which the facial expressions of the avatars are static, (ii) one in which we implement lip-sync motion, and (iii) one in which the avatar presents lip-sync plus additional facial animations, with blinks, designed by a professional animator. We measure self-identification using a face-morphing tool that morphs from the face of the participant to the face of a gender-matched avatar. We find that self-identification with avatars can be increased through pre-baked animations, even when these are not photorealistic and do not look like the participant.
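The face-morphing measure runs a parameter from the participant's face to the avatar's face and records where the participant stops identifying the image as themselves. A minimal sketch, assuming a simple linear pixel blend (the paper's actual morphing tool is not specified here):

```python
# Sketch of a face morph as a linear pixel blend. t = 0.0 yields the
# participant's face, t = 1.0 the avatar's face; the self-identification
# point is the t at which the participant no longer says "that is me".

def morph(face_a, face_b, t):
    """Linearly blend two equally sized grayscale images (flat lists of floats)."""
    assert len(face_a) == len(face_b) and 0.0 <= t <= 1.0
    return [(1.0 - t) * a + t * b for a, b in zip(face_a, face_b)]

participant = [0.0, 0.2, 0.8]   # toy 3-pixel "image"
avatar = [1.0, 0.6, 0.0]
mid = morph(participant, avatar, 0.5)  # halfway morph
```

A real tool would morph landmark geometry as well as pixel values, but the measurement logic (sweep t, record the crossover point) is the same.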

    Virtual faces as a tool to study emotion recognition deficits in schizophrenia

    Studies investigating emotion recognition in patients with schizophrenia have predominantly presented photographs of facial expressions. Better control and higher flexibility of emotion displays could be afforded by virtual reality (VR). VR allows the manipulation of facial expressions and can simulate social interactions in a controlled and yet more naturalistic environment. However, to our knowledge, no study has systematically investigated whether patients with schizophrenia show the same emotion recognition deficits when emotions are expressed by virtual as compared to natural faces. Twenty schizophrenia patients and 20 controls rated pictures of natural and virtual faces with respect to the basic emotion expressed (happiness, sadness, anger, fear, disgust, and neutrality). Consistent with our hypothesis, the results revealed that emotion recognition impairments also emerged for emotions expressed by virtual characters. As virtual, in contrast to natural, expressions contain only major emotional features, schizophrenia patients already seem to be impaired in the recognition of basic emotional features. This finding has practical implications, as it supports the use of virtual emotional expressions for psychiatric research: the ease of changing facial features, animating avatar faces, and creating therapeutic simulations makes validated artificial expressions well suited to studying and treating emotion recognition deficits in schizophrenia.

    Virtual Avatar for Emotion Recognition in Patients with Schizophrenia: A Pilot Study

    Persons who suffer from schizophrenia have difficulties recognizing emotions in others’ facial expressions, which affects their capabilities for social interaction and hinders their social integration. Photographic images have traditionally been used to explore emotion recognition impairments in schizophrenia patients, but they lack the dynamism that is inherent to facial expressiveness. To overcome these inconveniences, over the last years different authors have proposed the use of virtual avatars. In this work, we present the results of a pilot study that explored the possibilities of using a realistic-looking avatar for the assessment of emotion recognition deficits in patients who suffer from schizophrenia. In the study, 20 subjects with schizophrenia of long evolution and 20 control subjects were invited to recognize a set of facial expressions of emotions shown both by the virtual avatar and by static images. Our results show that schizophrenic patients exhibit deficits in emotion recognition from facial expressions regardless of the type of stimulus (avatar or images), and that those deficits are related to the psychopathology. Finally, some improvements in recognition rates (RRs) for the patient group when using the avatar were observed for the sadness and surprise expressions, and patients even outperformed the control group in recognizing the happiness expression. This leads us to conclude that, apart from the dynamism of the shown expression, the RRs for schizophrenia patients when employing animated avatars may depend on other factors which need to be further explored.

    Funding: Junta de Castilla y León (Programa de apoyo a proyectos de investigación, Ref. VA036U14 and Ref. VA013A12-2); Ministerio de Economía, Industria y Competitividad (Grant DPI2014-56500-R).
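The per-emotion recognition rates (RRs) compared between groups above reduce to a simple count of correct responses per shown emotion. A hedged sketch, with illustrative trial data rather than the study's actual responses:

```python
# Illustrative sketch: per-emotion recognition rates (RRs) from a list of
# (shown_emotion, chosen_emotion) trial pairs. Labels and data are made up.
from collections import defaultdict

def recognition_rates(trials):
    """Return {emotion: fraction of trials correctly recognized}."""
    shown = defaultdict(int)
    correct = defaultdict(int)
    for truth, response in trials:
        shown[truth] += 1
        if truth == response:
            correct[truth] += 1
    return {emotion: correct[emotion] / shown[emotion] for emotion in shown}

trials = [("happiness", "happiness"), ("happiness", "surprise"),
          ("sadness", "sadness"), ("sadness", "sadness")]
print(recognition_rates(trials))  # {'happiness': 0.5, 'sadness': 1.0}
```

Computing RRs separately per group and per stimulus type (avatar vs. static image) then yields the comparisons reported in the study.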

    Preface: Facial and Bodily Expressions for Control and Adaptation of Games


    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show significant improvements in enabling much greater flexibility in creating realistic reenacted output videos.

    Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1

    CGAMES'2009


    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence, and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.