3 research outputs found

    Enhanced waters 2D muscle model for facial expression generation

    In this paper we present an improved Waters facial model used as an avatar for the work published in (Kumar and Vanualailai, 2016), which described a Facial Animation System driven by the Facial Action Coding System (FACS) in a low-bandwidth video-streaming setting. FACS defines 32 single Action Units (AUs), each generated by an underlying muscle action, which interact in different ways to create facial expressions. Because each FACS AU describes an atomic facial distortion in terms of facial muscles, a face model that allows AU mappings to be applied directly to the respective muscles is desirable. For this task we chose the Waters anatomy-based face model for its simplicity and its implementation of pseudo muscles. However, the Waters face model is limited in its ability to create realistic expressions, mainly due to the lack of a function to represent sheet muscles, an unrealistic jaw rotation function, and an improper implementation of sphincter muscles. Therefore, in this work we enhance the Waters facial model by improving its UI, adding sheet muscles, providing an alternative implementation of the jaw rotation function, presenting a new sphincter muscle model that can be used around the eyes, and changing the operation of the sphincter muscle used around the mouth.
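    The pseudo muscles mentioned above follow Waters' classic linear muscle formulation: each muscle pulls nearby mesh vertices toward its head (attachment) point, with cosine falloff in both angle and distance. The sketch below is a minimal illustration of that general idea, not the enhanced model this paper proposes; all parameter names and falloff choices are illustrative assumptions.

    ```python
    import numpy as np

    def linear_muscle(points, head, tail, contraction,
                      influence_angle=np.pi / 4, r_start=0.5, r_finish=1.0):
        """Waters-style linear muscle (illustrative sketch).

        Pulls mesh vertices toward the muscle head, attenuated by a cosine
        falloff in the angle off the muscle axis and in the distance from
        the head. Parameter names/values are assumptions for illustration.
        """
        axis = tail - head
        axis = axis / np.linalg.norm(axis)
        moved = points.copy()
        for i, p in enumerate(points):
            d = p - head
            dist = np.linalg.norm(d)
            if dist == 0 or dist > r_finish:
                continue  # outside the muscle's zone of influence
            direction = d / dist
            angle = np.arccos(np.clip(direction @ axis, -1.0, 1.0))
            if angle > influence_angle:
                continue
            # angular falloff: 1 on the muscle axis, 0 at the cone edge
            angular = np.cos(angle * (np.pi / 2) / influence_angle)
            # radial falloff: peaks at r_start, fades to 0 at head and r_finish
            if dist <= r_start:
                radial = np.cos((1 - dist / r_start) * np.pi / 2)
            else:
                radial = np.cos((dist - r_start) / (r_finish - r_start) * np.pi / 2)
            # displace the vertex toward the muscle head
            moved[i] = p - contraction * angular * radial * direction
        return moved
    ```

    A sheet muscle, by contrast, applies a uniform pull across a rectangular region rather than a cone, which is one of the gaps in the original model this paper addresses.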

    Highly accurate and fully automatic 3D head pose estimation and eye gaze estimation using RGB-D sensors and 3D morphable models

    The research presented in the paper was funded by grant F506-FSA of the Auto21 Networks of Centers of Excellence Program of Canada. This work addresses the problem of automatic head pose estimation and its application in 3D gaze estimation using low-quality RGB-D sensors, without any subject cooperation or manual intervention. Previous works on 3D head pose estimation using RGB-D sensors require either an offline step for supervised learning or 3D head model construction, which may require manual intervention or subject cooperation for complete head model reconstruction. In this paper, we propose a 3D pose estimator based on low-quality depth data that is not limited by any of the aforementioned steps. Instead, the proposed technique relies on modeling the subject's face in 3D rather than the complete head, which, in turn, relaxes all of the constraints in the previous works. The proposed method is robust, highly accurate, and fully automatic. Moreover, it does not need any offline step. Unlike some of the previous works, the method uses only depth data for pose estimation. The experimental results on the Biwi head pose database confirm the efficiency of our algorithm in handling large pose variations and partial occlusion. We also evaluated the performance of our algorithm on the IDIAP database for 3D head pose and eye gaze estimation.
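    Depth-based pose estimators of this kind ultimately reduce to recovering a rigid transform (rotation R, translation t) that aligns a 3D face model with the observed depth points. The snippet below is a generic Kabsch/Procrustes alignment step that such pipelines commonly build on; it is a sketch of that building block, not the specific algorithm proposed in the paper.

    ```python
    import numpy as np

    def rigid_pose(model_pts, observed_pts):
        """Estimate the rigid transform (R, t) mapping model_pts onto
        observed_pts via the Kabsch algorithm. Both inputs are (N, 3)
        arrays of corresponding 3D points (correspondences assumed known
        here; a full pipeline would establish them, e.g. via ICP).
        """
        mu_m = model_pts.mean(axis=0)
        mu_o = observed_pts.mean(axis=0)
        # cross-covariance of the centered point sets
        H = (model_pts - mu_m).T @ (observed_pts - mu_o)
        U, _, Vt = np.linalg.svd(H)
        # correct for a possible reflection so R is a proper rotation
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_o - R @ mu_m
        return R, t
    ```

    The recovered rotation matrix can then be decomposed into the yaw, pitch, and roll angles typically reported against benchmarks such as the Biwi database.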

    Realtime Performance-Based Facial Avatars for Immersive Gameplay

    This paper discusses how realtime face tracking and performance-based animation can facilitate new ways of interaction for computer gaming and other online applications. We identify a number of requirements for realtime face capture and animation systems, and show how recent progress in tracking algorithms advances towards satisfying these requirements. Performance-based animation has the potential to significantly enrich the emotional experience of in-game communication through live avatars that are controlled directly by the facial expressions of a recorded user. We briefly outline other potential use scenarios of realtime face tracking systems and conclude with a discussion of future research challenges.