2,128 research outputs found

    CGAMES'2009

    Photorealistic retrieval of occluded facial information using a performance-driven face model

    Facial occlusions can cause both human observers and computer algorithms to fail in a variety of important tasks such as facial action analysis and expression classification. This is because the missing information is not reconstructed accurately enough for the purpose of the task at hand. Most current computer methods used to tackle this problem implement complex three-dimensional polygonal face models that are generally time-consuming to produce and unsuitable for photorealistic reconstruction of missing facial features and behaviour. In this thesis, an image-based approach is adopted to solve the occlusion problem. A dynamic computer model of the face is used to retrieve the occluded facial information from the driver faces. The model consists of a set of orthogonal basis actions obtained by applying principal component analysis (PCA) to image changes and motion fields extracted from a sequence of natural facial motion (Cowe 2003). Examples of occlusion-affected facial behaviour can then be projected onto the model to compute coefficients of the basis actions and thus produce photorealistic performance-driven animations. Visual inspection shows that the PCA face model recovers aspects of expressions in those areas occluded in the driver sequence, but the expression is generally muted. To further investigate this finding, a database of test sequences affected by a considerable set of artificial and natural occlusions is created, and a number of suitable metrics are developed to measure the accuracy of the reconstructions. Regions of the face that are most important for performance-driven mimicry, and that seem to carry the best information about global facial configurations, are revealed using Bubbles, in effect identifying the facial areas that are most sensitive to occlusions. Recovery of occluded facial information is enhanced by applying an appropriate scaling factor to the respective coefficients of the basis actions obtained by PCA. This method improves the reconstruction of the facial actions emanating from the occluded areas of the face. However, because PCA produces bases that encode composite, correlated actions, such an enhancement also tends to affect actions in non-occluded areas of the face. To avoid this, more localised controls for facial actions are produced using independent component analysis (ICA). Simple projection of the data onto an ICA model is not viable due to the non-orthogonality of the extracted bases, so occlusion-affected mimicry is first generated using the PCA model and then enhanced by manipulating the independent components subsequently extracted from the mimicry. This combination of methods yields significant improvements and results in photorealistic reconstructions of occluded facial actions.
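    As a rough illustration of the project-and-rescale idea described above (a minimal sketch under assumed data shapes, not the thesis's actual implementation), the snippet below learns a PCA basis from unoccluded frames, projects occlusion-affected driver frames onto it, boosts a few illustrative coefficients, and resynthesises frames; all array sizes and component indices are hypothetical.

```python
# Hypothetical sketch of PCA-based performance-driven reconstruction:
# learn basis "actions" from unoccluded frames, project occluded driver
# frames to get coefficients, rescale selected coefficients, resynthesise.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_pixels = 64 * 64                                   # flattened frame size (toy value)
training_frames = rng.normal(size=(200, n_pixels))   # stand-in for unoccluded sequence
driver_frames = rng.normal(size=(50, n_pixels))      # stand-in for occlusion-affected sequence

# Learn orthogonal basis actions from natural facial motion.
pca = PCA(n_components=20)
pca.fit(training_frames)

# Project driver frames onto the basis to get per-frame coefficients.
coeffs = pca.transform(driver_frames)

# Boost coefficients of components assumed to carry the occluded actions
# (the indices here are purely illustrative).
boost = np.ones(coeffs.shape[1])
boost[[3, 7]] = 1.5
coeffs_enhanced = coeffs * boost

# Resynthesise frames from the rescaled coefficients.
reconstruction = pca.inverse_transform(coeffs_enhanced)
print(reconstruction.shape)                          # (50, 4096)
```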

    Actor & Avatar: A Scientific and Artistic Catalog

    What kind of relationship do we have with artificial beings (avatars, puppets, robots, etc.)? What does it mean to mirror ourselves in them, to perform them or to play trial identity games with them? Actor & Avatar addresses these questions from artistic and scholarly angles. Contributions on the making of "technical others" and philosophical reflections on artificial alterity are flanked by neuroscientific studies on different ways of perceiving living persons and artificial counterparts. The contributors have achieved a successful artistic-scientific collaboration with extensive visual material.

    An Examination of a Theory of Embodied Social Presence in Virtual Worlds

    In this article, we discuss and empirically examine the importance of embodiment, context, and spatial proximity as they pertain to collaborative interaction and task completion in virtual environments. Specifically, we introduce the embodied social presence (ESP) theory as a framework to account for a higher level of perceptual engagement that users experience as they engage in activity-based social interaction in virtual environments. The ESP theory builds on the analysis of reflection data from Second Life users to explain the process by which perceptions of ESP are realized. We proceed to describe implications of ESP for collaboration and other organizational functions.

    Attention and Social Cognition in Virtual Reality: The effect of engagement mode and character eye-gaze

    Technical developments in virtual humans are manifest in modern character design. Specifically, eye gaze is a significant aspect of such design. There is also a need to consider the contribution of participant control of engagement. In the current study, we manipulated participants’ engagement with an interactive virtual reality narrative called Coffee without Words. Participants sat over coffee opposite a character in a virtual café, where they waited for their bus to be repaired. We manipulated character eye-contact with the participant. For half the participants in each condition, the character made no eye-contact for the duration of the story. For the other half, the character responded to participant eye-gaze by making and holding eye contact in return. To explore how participant engagement interacted with this manipulation, half the participants in each condition were instructed to appraise their experience as an artefact (i.e., drawing attention to technical features), while the other half were introduced to the fictional character, the narrative, and the setting as though they were real. This study allowed us to explore the contributions of character features (interactivity through eye-gaze) and cognition (attention/engagement) to the participants’ perception of realism, feelings of presence, time duration, and the extent to which they engaged with the character and represented their mental states (Theory of Mind). Importantly, it does so using a highly controlled yet ecologically valid virtual experience.
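    The gaze-contingent manipulation described above can be pictured as a simple per-frame rule; the sketch below is a hypothetical illustration (the class and field names are ours, not the study's implementation), in which a responsive character makes and holds eye contact once the participant looks at it, while the control character never does.

```python
# Hypothetical per-frame logic for the two eye-contact conditions:
# responsive characters make and hold eye contact when gazed at,
# control characters never make eye contact.
from dataclasses import dataclass

@dataclass
class Character:
    responsive: bool               # True = eye-contact condition, False = control
    holding_eye_contact: bool = False

    def update(self, participant_looking_at_face: bool) -> str:
        if not self.responsive:
            self.holding_eye_contact = False
        elif participant_looking_at_face:
            self.holding_eye_contact = True   # make and hold eye contact
        # Once made, eye contact is held even if the participant glances away.
        return "look_at_participant" if self.holding_eye_contact else "look_away"

# Example: the character starts averted, then responds once gazed at.
character = Character(responsive=True)
print(character.update(False))  # look_away
print(character.update(True))   # look_at_participant
print(character.update(False))  # look_at_participant (held)
```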

    Investigating VTubing as a Reconstruction of Streamer Self-Presentation: Identity, Performance, and Gender

    VTubers, or Virtual YouTubers, are live streamers who create streaming content using animated 2D or 3D virtual avatars. In recent years, there has been a significant increase in the number of VTuber creators and viewers across the globe. This practice has drawn research attention to topics such as viewers' engagement behaviors and perceptions; however, because animated avatars offer more identity and performance flexibility than traditional live streaming, where one uses one's own body, little research has focused on how this flexibility influences how creators present themselves. This research seeks to fill this gap by presenting results from a qualitative study of 16 Chinese-speaking VTubers' streaming practices. The data revealed that the virtual avatars used while live streaming afforded creators opportunities to present themselves in inflated ways and resulted in inclusive interactions with viewers. The results also unveiled the inflated, and often sexualized, gender expressions of VTubers situated in misogynistic environments. The socio-technical facets of VTubing were found to potentially reduce sexual harassment and sexism, whilst also raising self-objectification concerns.
    Comment: Under review at ACM CSCW after a Major Revision.

    Virtual Aesthetics and Ethical Communication: Towards Virtuous Reality Design

    This thesis argues that ethics can and should be applied to Second Life avatar design and behavior. Second Life is a unique virtual reality due to its connection to the physical world primarily through financial devices. Users buy and sell virtual and physical goods over these networks; the avatar, it is argued, is the primary instrument for persuasion in these contexts. Avatars facilitate a virtual aesthetic that is primarily 'natural.' By creating aesthetic avatars, the developers of Second Life enable audiences to affectively associate with other 'residents.' Not only is the avatar designed for aesthetic appeal, but it enables users to move, act, and interact in an online environment--to vicariously experience the emotions that accompany those actions. In the real world, individuals' actions have ethical consequences. Behavior in Second Life, it is argued, should be subject to ethics as determined by democratic communities of users.

    Investigating How Speech And Animation Realism Influence The Perceived Personality Of Virtual Characters And Agents

    The portrayed personality of virtual characters and agents is understood to influence how we perceive and engage with digital applications. Understanding how the features of speech and animation drive portrayed personality allows us to intentionally design characters to be more personalized and engaging. In this study, we use performance capture data of unscripted conversations from a variety of actors to explore the perceptual outcomes associated with the modalities of speech and motion. Specifically, we contrast full performance-driven characters with those portrayed by generated gestures and synthesized speech, analysing how the features of each influence portrayed personality according to the Big Five personality traits. We find that processing speech and motion can have mixed effects on such traits, with our results highlighting motion as the dominant modality for portraying extraversion and speech as dominant for communicating agreeableness and emotional stability. Our results can support the Extended Reality (XR) community in the development of virtual characters, social agents, and 3D User Interface (3DUI) agents portraying a range of targeted personalities.

    Machinima And Video-based Soft Skills Training

    Multimedia training methods have traditionally relied heavily on video-based technologies, and significant research has shown these to be very effective training tools. However, video production is time- and resource-intensive. Machinima (pronounced 'muh-sheen-eh-mah') technologies are based on video gaming technology, which can be manipulated into unique scenarios for entertainment or for training and practice applications. Machinima is the conversion of these scenarios into video vignettes that tell a story. These vignettes can be interconnected with branching points in much the same way that education videos are interconnected as vignettes between decision points. This study addressed the effectiveness of machinima-based soft-skills education using avatar actors versus traditional video-based teaching using human actors. This research also investigated the difference between presence reactions when using avatar-actor-produced video vignettes as compared to human-actor-produced video vignettes. Results indicated that the difference in training and/or practice effectiveness is not statistically significant for presence, interactivity, quality, and the skill of assertiveness. The skill of active listening presented mixed results, indicating the need for careful attention to detail in situations where body language and facial expressions are critical to communication. This study demonstrates that a significant opportunity exists for the exploitation of avatar actors in video-based instruction.
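    The interconnection of vignettes through decision points described above can be illustrated with a small, hypothetical data structure; in the sketch below (file names, prompts, and option labels are purely illustrative), video vignettes are linked at branching points and one path is walked through them.

```python
# Hypothetical branching-vignette structure: each vignette is a clip,
# optionally followed by a decision point whose choices lead to other vignettes.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Vignette:
    video: str                                   # path to the rendered clip
    prompt: str | None = None                    # decision point shown after the clip
    branches: dict[str, str] = field(default_factory=dict)  # choice -> next vignette id

vignettes = {
    "intro": Vignette("intro.mp4", "How do you respond to the client?",
                      {"assertive": "assertive_reply", "passive": "passive_reply"}),
    "assertive_reply": Vignette("assertive.mp4"),
    "passive_reply": Vignette("passive.mp4"),
}

def play(vignette_id: str, choose) -> None:
    """Walk the branching structure, asking `choose` at each decision point."""
    node = vignettes[vignette_id]
    print(f"Playing {node.video}")
    if node.branches:
        play(node.branches[choose(node.prompt, list(node.branches))], choose)

# Example run that always picks the first available option.
play("intro", lambda prompt, options: options[0])
```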