
    Enhanced Waters 2D muscle model for facial expression generation

    In this paper we present an improved Waters facial model used as an avatar for the work published in (Kumar and Vanualailai, 2016), which described a facial animation system driven by the Facial Action Coding System (FACS) in a low-bandwidth video-streaming setting. FACS defines 32 single Action Units (AUs), each generated by an underlying muscle action, which interact in different ways to create facial expressions. Because FACS AUs describe atomic facial distortions in terms of facial muscles, a face model that allows AU mappings to be applied directly to the respective muscles is desirable. For this task we chose the Waters anatomy-based face model because of its simplicity and its implementation of pseudo-muscles. However, the Waters face model is limited in its ability to create realistic expressions, mainly because it lacks a function to represent sheet muscles, has an unrealistic jaw rotation function, and implements sphincter muscles improperly. In this work we therefore enhance the Waters facial model by improving its UI, adding sheet muscles, providing an alternative implementation of the jaw rotation function, presenting a new sphincter muscle model that can be used around the eyes, and changing the operation of the sphincter muscle used around the mouth.
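    The Waters-style linear muscle the abstract builds on can be sketched as a vertex displacement toward the muscle's attachment point, attenuated by angular and radial falloff. The function below is a minimal illustrative form, not the paper's enhanced model; the falloff shapes and `theta_max` default are assumptions.

    ```python
    import numpy as np

    def linear_muscle_displace(p, head, tail, contraction, theta_max=np.pi / 4):
        """Pull vertex p toward the muscle head (attachment), Waters-style.

        Illustrative sketch: the displacement points from p toward the head
        and is scaled by an angular falloff (the vertex must lie within
        theta_max of the muscle axis) and a radial falloff that fades to
        zero at the insertion end.
        """
        axis = tail - head          # muscle axis, head -> tail (insertion)
        v = p - head
        r = np.linalg.norm(v)
        length = np.linalg.norm(axis)
        if r == 0 or r > length:
            return p                # outside the zone of influence
        cos_angle = np.dot(v, axis) / (r * length)
        if cos_angle < np.cos(theta_max):
            return p                # outside the angular sector
        radial = np.cos((r / length) * (np.pi / 2))  # 1 at head, 0 at tail
        return p - contraction * cos_angle * radial * (v / r)
    ```

    A vertex halfway along the axis moves toward the head; vertices beyond the insertion point are untouched, which is what makes the zone-of-influence sectors composable across several muscles.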

    Towards predicting biomechanical consequences of jaw reconstruction

    Abstract — We are developing dynamic computer models of surgical jaw reconstructions in order to determine the effect of altered musculoskeletal structure on the biomechanics of mastication. We aim to predict post-reconstruction deficits in jaw motion and force production. To support these research goals we have extended our biomechanics simulation toolkit, ArtiSynth [1], with new methods relevant to surgical planning. The principal features of ArtiSynth include simulation of constrained rigid bodies, volume-preserving finite-element methods for deformable bodies, contact between bodies, and muscle models. We are adding model-editing capabilities and muscle-activation optimization to facilitate progress on post-surgical simulation. Our software and research directions focus on upper-airway and craniofacial anatomy; however, the toolset and methodology are applicable to other musculoskeletal systems.
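    The muscle-activation optimization mentioned above can be sketched as a regularized nonnegative least-squares problem: find activations a ≥ 0 whose combined effect (through a moment-arm-like matrix A) best matches a target force. This is a toy stand-in, not ArtiSynth's actual solver or API; the matrix A and the ridge regularizer are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def activations_for_target(A, f_target, reg=1e-3):
        """Solve min_{a >= 0} ||A a - f_target||^2 + reg * ||a||^2.

        Implemented as plain NNLS on an augmented system: stacking
        sqrt(reg) * I under A adds the ridge term, which also breaks
        ties among redundant muscles (a common trick for muscle
        redundancy, not ArtiSynth's specific formulation).
        """
        m, n = A.shape
        A_aug = np.vstack([A, np.sqrt(reg) * np.eye(n)])
        f_aug = np.concatenate([f_target, np.zeros(n)])
        a, _residual = nnls(A_aug, f_aug)
        return a
    ```

    The nonnegativity constraint is what encodes the physiology: muscles can only pull, so an unreachable target component simply leaves the corresponding activation at zero.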

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometric proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show significant improvements in enabling much greater flexibility in creating realistic reenacted output videos.
    Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1

    Accurate and Interpretable Solution of the Inverse Rig for Realistic Blendshape Models with Quadratic Corrective Terms

    We propose a new model-based algorithm for solving the inverse rig problem in facial animation retargeting that exhibits a more accurate fit and a sparser, more interpretable weight vector than the state of the art (SOTA). The proposed method targets a specific subdomain of human face animation: highly realistic blendshape models used in the production of movies and video games. We formulate an optimization problem that takes into account all the requirements of the targeted models. Our objective goes beyond a linear blendshape model and employs the quadratic corrective terms necessary for correctly fitting fine details of the mesh. We show that the solution to the proposed problem yields highly accurate mesh reconstruction even when general-purpose solvers, such as SQP, are used. The results obtained using SQP are highly accurate in the mesh space but do not exhibit favorable weight sparsity or smoothness; for this reason, we further propose a novel algorithm relying on a majorization-minimization (MM) technique. The algorithm is specifically suited to the proposed objective, yielding a high-accuracy mesh fit while respecting the constraints and producing a sparse and smooth set of weights that is easy for artists to manipulate and interpret. Benchmarked against SOTA approaches, our algorithm shows overall superior results, yielding a smooth animation reconstruction with a relative improvement of up to 45 percent in root-mean-squared mesh error while keeping the weight cardinality comparable to the benchmark methods. The paper also provides a comprehensive set of evaluation metrics covering different aspects of the solution, including mesh accuracy, weight sparsity, smoothness of the animation curves, and the appearance of the produced animation, which human experts evaluated.
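    A blendshape model with quadratic corrective terms, and the general-purpose SQP baseline the abstract mentions, can be sketched on a toy rig. Everything below (the tiny matrices, the `pairs`/`C` representation of pairwise correctives, the L1 regularizer) is illustrative; only the model form — rest shape plus linear terms plus pairwise products — follows the abstract.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def mesh(w, b0, B, pairs, C):
        """Quadratic blendshape model: rest shape b0, linear deltas B,
        plus one corrective delta C[k] per interacting pair (i, j)."""
        out = b0 + B @ w
        for k, (i, j) in enumerate(pairs):
            out += w[i] * w[j] * C[k]
        return out

    def fit_weights(target, b0, B, pairs, C, reg=1e-2):
        """Fit weights with a general-purpose SQP solver (SLSQP).

        The L1 sparsity penalty equals reg * sum(w) here because the
        bounds already force w >= 0, keeping the objective smooth."""
        n = B.shape[1]
        obj = lambda w: np.sum((mesh(w, b0, B, pairs, C) - target) ** 2) \
                        + reg * np.sum(w)
        res = minimize(obj, x0=np.zeros(n), method="SLSQP",
                       bounds=[(0.0, 1.0)] * n)
        return res.x
    ```

    As the abstract notes, an off-the-shelf solver like this recovers the mesh accurately but tends to produce dense, less smooth weight vectors — the motivation for the dedicated MM algorithm.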

    Psychophysical investigation of facial expressions using computer animated faces

    The human face is capable of producing a large variety of facial expressions that supply important information for communication. As shown in previous studies using unmanipulated video sequences, movements of single regions such as the mouth, eyes, and eyebrows, as well as rigid head motion, play a decisive role in the recognition of conversational facial expressions. Here, flexible yet realistic computer-animated faces were used to investigate the spatiotemporal coaction of facial movements systematically. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer-animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of them, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar is in general a useful tool for the investigation of facial expressions, although improvements must be made to reach higher recognition accuracy for certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge, the perceptual quality of computer animations can be improved in order to reach a higher level of realism and effectiveness.

    High-fidelity Interpretable Inverse Rig: An Accurate and Sparse Solution Optimizing the Quartic Blendshape Model

    We propose a method to fit arbitrarily accurate blendshape rig models by solving the inverse rig problem in realistic human face animation. The method considers blendshape models with different levels of added corrections and solves the regularized least-squares problem using coordinate descent, i.e., by iteratively estimating blendshape weights. Besides making the optimization easier to solve, this approach ensures that mutually exclusive controllers will not be activated simultaneously, and it improves the goodness of fit after each iteration. We show experimentally that the proposed method yields solutions with mesh error comparable to or lower than that of the state-of-the-art approaches while significantly reducing the cardinality of the weight vector (by over 20 percent), hence giving a high-fidelity reconstruction of the reference expression that is easier to manipulate manually in post-production. Python scripts for the algorithm will be publicly available upon acceptance of the paper.
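    The coordinate-descent idea — update one blendshape weight at a time against the current residual, with box constraints keeping each weight in [0, 1] — can be sketched on a plain linear model. This is a simplification under stated assumptions: the paper's quartic model adds corrective levels, and the closed-form soft-threshold update below is the standard lasso coordinate step, not the paper's exact update.

    ```python
    import numpy as np

    def coord_descent_fit(B, target, lam=1e-2, iters=100):
        """Fit w to minimize ||B w - target||^2 + lam * sum(w), w in [0, 1],
        by cyclic coordinate descent (linear-model sketch only)."""
        m, n = B.shape
        w = np.zeros(n)
        col_sq = np.sum(B * B, axis=0)          # ||B[:, i]||^2 per column
        for _ in range(iters):
            for i in range(n):
                # residual with coordinate i removed
                r = target - B @ w + B[:, i] * w[i]
                # 1-D minimizer with L1 shrinkage, clipped to the box
                w[i] = np.clip((B[:, i] @ r - lam / 2) / col_sq[i], 0.0, 1.0)
        return w
    ```

    Because each one-dimensional step has a closed-form solution, every update can only decrease the objective, which mirrors the abstract's claim that the fit improves after each iteration; raising `lam` drives more weights exactly to zero, reducing cardinality.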

    Performance Driven Facial Animation with Blendshapes
