
    Senescence: An Aging based Character Simulation Framework

    The 'Senescence' framework is a character simulation plug-in for Maya that can be used for rigging and skinning muscle-deformer-based humanoid characters with support for aging. The framework was developed using Python, the Maya Embedded Language and PyQt. The main targeted users for this framework are Character Technical Directors, Technical Artists, Riggers and Animators in the production pipelines of visual effects studios. The characters simulated using 'Senescence' were studied using a survey to understand how well the intended age was perceived by the audience. The survey results could not reject one of our null hypotheses, meaning that the difference between the simulated age groups of the character was not perceived well by the participants. However, there is a difference in the perception of simulated age in the character between an Animator and a Non-Animator. Therefore, the difference in the simulated character's age was perceived by an untrained audience, but the audience was unable to relate it to a specific age group.
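    As a loose illustration of how such an aging control might hook into Maya, the sketch below scales the influence of tagged muscle deformers with an age parameter via maya.cmds; the attribute names, tagging convention and falloff are assumptions for illustration, not details of the Senescence framework.

```python
# A minimal sketch (not the published framework) of an "age" parameter
# attenuating muscle-deformer influence in Maya. It assumes the rig tags
# its muscle deformers with a "_senescence_muscle" suffix and relies on
# the standard .envelope attribute that Maya deformers expose.
import maya.cmds as cmds

def apply_age(age, min_age=20.0, max_age=80.0):
    """Weaken muscle deformer envelopes as the character ages."""
    # Normalise age to [0, 1]; older characters get weaker muscle influence.
    t = (age - min_age) / (max_age - min_age)
    t = max(0.0, min(1.0, t))
    strength = 1.0 - 0.5 * t  # assumed linear loss of muscle tone

    for node in cmds.ls("*_senescence_muscle", type="geometryFilter") or []:
        cmds.setAttr(node + ".envelope", strength)

apply_age(65.0)
```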

    FacEMOTE: Qualitative Parametric Modifiers for Facial Animations

    We propose a control mechanism for facial expressions that applies a few carefully chosen parametric modifications to pre-existing expression data streams. This approach applies to any facial animation resource expressed in the general MPEG-4 form, whether taken from a library of preset facial expressions, captured from live performance, or created entirely by hand. The MPEG-4 Facial Animation Parameters (FAPs) represent a facial expression as a set of parameterized muscle actions, given as intensities of individual muscle movements over time. Our system varies expressions by changing the intensities and scope of sets of MPEG-4 FAPs, creating variations in “expressiveness” across the face model rather than simply scaling, interpolating, or blending facial mesh node positions. The parameters are adapted from the Effort parameters of Laban Movement Analysis (LMA); we developed a mapping from their values onto sets of FAPs. The FacEMOTE parameters thus perturb a base expression to create a wide range of expressions. Such an approach could allow real-time face animations to change underlying speech or facial expression shapes dynamically according to current agent affect or user interaction needs.
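    To make the modulation idea concrete, here is a toy sketch that scales per-frame FAP intensities by Effort-derived gains; the Effort-to-FAP table and FAP index groupings are invented for illustration and are not the paper's learned mapping.

```python
# Toy sketch of FacEMOTE-style control: scale per-frame MPEG-4 FAP
# intensities by multipliers derived from LMA Effort settings.
EFFORT_TO_GAIN = {
    # (effort dimension, pole) -> gain applied to a set of FAP indices
    ("weight", "strong"): {"gain": 1.4, "faps": [3, 4, 5]},   # jaw/lip FAPs (assumed)
    ("time", "sustained"): {"gain": 0.7, "faps": [19, 20]},   # eyelid FAPs (assumed)
}

def modulate(fap_stream, effort_settings):
    """fap_stream: list of frames, each a dict {fap_index: intensity}."""
    out = []
    for frame in fap_stream:
        frame = dict(frame)  # leave the base expression stream untouched
        for key in effort_settings:
            rule = EFFORT_TO_GAIN.get(key)
            if rule:
                for idx in rule["faps"]:
                    if idx in frame:
                        frame[idx] *= rule["gain"]
        out.append(frame)
    return out

base = [{3: 0.2, 19: 0.5}, {3: 0.3, 19: 0.4}]
print(modulate(base, [("weight", "strong")]))
```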

    Facial image morphing by self-organizing feature maps

    We propose a new facial image morphing algorithm based on the Kohonen self-organizing feature map (SOM) algorithm to generate a smooth 2D transformation that reflects anchor point correspondences. Using only a 2D face image and a small number of anchor points, we show that the proposed morphing algorithm provides a powerful mechanism for processing facial expressions.
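    The sketch below illustrates the general flavor of the idea: sparse anchor-point displacements are spread into a smooth 2D warp field by a SOM-style neighborhood update. The grid layout and learning schedule are assumptions, not the paper's algorithm.

```python
# Rough sketch of spreading sparse anchor displacements into a smooth
# warp field with a Kohonen-style neighborhood update.
import numpy as np

def som_warp(anchors_src, anchors_dst, grid=32, iters=2000):
    # One SOM node per grid cell; its weight is a 2D displacement vector.
    disp = np.zeros((grid, grid, 2))
    ys, xs = np.mgrid[0:grid, 0:grid]
    for t in range(iters):
        lr = 0.5 * (1.0 - t / iters)                   # decaying learning rate
        sigma = grid / 4.0 * (1.0 - t / iters) + 1.0   # shrinking neighborhood
        i = np.random.randint(len(anchors_src))
        sx, sy = anchors_src[i]                        # in [0, 1] image coords
        target = anchors_dst[i] - anchors_src[i]
        # Best-matching unit is the grid cell containing the anchor.
        bx, by = int(sx * (grid - 1)), int(sy * (grid - 1))
        h = np.exp(-((xs - bx) ** 2 + (ys - by) ** 2) / (2 * sigma ** 2))
        disp += lr * h[..., None] * (target - disp)
    return disp  # sample bilinearly to warp any pixel

src = np.array([(0.3, 0.4), (0.7, 0.4), (0.5, 0.7)])   # e.g. eyes, mouth
dst = np.array([(0.3, 0.38), (0.7, 0.38), (0.5, 0.75)])
field = som_warp(src, dst)
```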

    Enhanced Waters 2D muscle model for facial expression generation

    In this paper we present an improved Waters facial model, used as an avatar for work published in (Kumar and Vanualailai, 2016), which described a facial animation system driven by the Facial Action Coding System (FACS) in a low-bandwidth video streaming setting. FACS defines 32 single Action Units (AUs), each generated by an underlying muscle action, which interact in different ways to create facial expressions. Because FACS AUs describe atomic facial distortions in terms of facial muscles, a face model that allows AU mappings to be applied directly to the respective muscles is desirable. Hence for this task we chose the Waters anatomy-based face model due to its simplicity and its implementation of pseudo-muscles. However, the Waters face model is limited in its ability to create realistic expressions, mainly because of the lack of a function to represent sheet muscles, an unrealistic jaw rotation function, and an improper implementation of sphincter muscles. Therefore, in this work we enhance the Waters facial model by improving its UI, adding sheet muscles, providing an alternative implementation of the jaw rotation function, presenting a new sphincter muscle model that can be used around the eyes, and changing the operation of the sphincter muscle used around the mouth.
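    For context, the sketch below implements the classic Waters linear (vector) muscle displacement that this work builds on, following common descriptions of the model; the parameter names and falloff details are the textbook form, not the paper's enhanced version.

```python
# Compact sketch of Waters' linear (vector) muscle: a skin vertex inside
# the muscle's cone of influence is pulled toward the bone attachment,
# scaled by angular and radial falloff.
import numpy as np

def linear_muscle(p, tail, head, contraction, Rs, Rf, angle_max=np.pi / 3):
    """Displace skin vertex p toward the muscle tail (bone attachment)."""
    m = head - tail
    d = p - tail
    dist = np.linalg.norm(d)
    if dist == 0 or dist > Rf:
        return p  # outside the zone of influence
    cos_a = np.dot(d, m) / (dist * np.linalg.norm(m))
    if cos_a < np.cos(angle_max):
        return p  # outside the angular cone of influence
    # Radial falloff: full strength inside Rs, fading to zero at Rf.
    r = 1.0 if dist <= Rs else np.cos((dist - Rs) / (Rf - Rs) * np.pi / 2)
    return p + contraction * cos_a * r * (tail - p) / dist

v = linear_muscle(np.array([0.2, 0.5, 0.0]),
                  tail=np.array([0.0, 0.0, 0.0]),
                  head=np.array([0.0, 1.0, 0.0]),
                  contraction=0.3, Rs=0.4, Rf=1.0)
print(v)
```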

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to directly manipulate it and see immediate results. Two unique methods for generating real-time, vivid, and animated tears have been developed and implemented. One method generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, a kind of tear that seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden CG and increase the realism of facial expressions. A new method to automatically set the bones on facial/head models to speed up the rigging process of a human face is also developed. To accomplish this, vertices that describe the face/head, as well as relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face with multi-density, the mean value of the vertices in a group is measured. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
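    The displacement-vector transfer step lends itself to a very small sketch: once source and target share topology and vertex correspondence, per-vertex expression deltas are simply carried over. Everything beyond that arithmetic (the correspondence computation, the constraints) is assumed to have been done already.

```python
# Minimal sketch of displacement-vector expression transfer between two
# meshes that share topology and vertex order.
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral, scale=1.0):
    """All inputs are (N, 3) vertex arrays in corresponding order."""
    displacement = src_expr - src_neutral   # how the source face moved
    return tgt_neutral + scale * displacement

# Example: a 3-vertex "mesh" whose middle vertex rises in the expression.
src_n = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
src_e = np.array([[0., 0., 0.], [1., .2, 0.], [2., 0., 0.]])
tgt_n = np.array([[0., 0., 1.], [1., 0., 1.], [2., 0., 1.]])
print(transfer_expression(src_n, src_e, tgt_n))
```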

    A multimedia testbed for facial animation control

    This paper presents an open testbed for controlling facial animation. The adopted control means can act at different levels of abstraction (specification). These means of control can be associated with different interactive devices and media, thereby allowing greater flexibility and freedom to the animator. The possibility of integrating and mixing control means provides a general platform where a user can experiment with his choice of control method. Experiments with input accessories such as the keyboard of a music synthesizer and gestures from the DataGlove are illustrated.
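    A minimal sketch of the mixing idea, with invented device and parameter names: each control means is normalized to one abstract interface so that inputs from different devices can be combined.

```python
# Toy sketch: heterogeneous control sources normalised to one interface
# so their facial-parameter deltas can be mixed. Names are illustrative.
class ControlSource:
    def poll(self):
        """Return a dict of facial-parameter deltas for this tick."""
        raise NotImplementedError

class KeyboardSource(ControlSource):
    def __init__(self, key_map):
        self.key_map = key_map           # e.g. MIDI key -> parameter name
    def poll(self):
        return {}                        # read device events here

def mix(sources):
    """Merge parameter deltas from all active control sources."""
    merged = {}
    for s in sources:
        for param, delta in s.poll().items():
            merged[param] = merged.get(param, 0.0) + delta
    return merged

print(mix([KeyboardSource({60: "jaw_open"})]))
```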

    A physically-based muscle and skin model for facial animation

    Facial animation is a popular area of research that has been around for over thirty years, but even on this long time scale, automatically creating realistic facial expressions is still an unsolved goal. This work furthers the state of the art in computer facial animation by introducing a new muscle and skin model and a method of easily transferring a full muscle and bone animation setup from one head mesh to another with very little user input. The developed muscle model allows muscles of any shape to be accurately simulated, preserving volume during contraction and interacting with surrounding muscles and skin in a lifelike manner. The muscles can drive a rigid body model of a jaw, giving realistic physically-based movement to all areas of the face. The skin model has multiple layers, mimicking the natural structure of skin; it connects onto the muscle model and is deformed realistically by the movements of the muscles and underlying bones. The skin smoothly transfers underlying movements into skin surface movements and propagates forces smoothly across the face. Once a head model has been set up with muscles and bones, moving this muscle and bone set to another head is a simple matter using the developed techniques. The developed software employs principles from forensic reconstruction, using specific landmarks on the head to map the bones and muscles to the new head model; once the muscles and skull have been transferred, they provide animation capabilities on the new mesh within minutes.
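    One plausible reading of the landmark-based transfer is a least-squares affine fit between corresponding head landmarks, used to carry muscle and bone attachment points across; the sketch below shows that standard fit, and is not claimed to be the thesis' exact mapping.

```python
# Sketch: solve a least-squares affine transform from source-head
# landmarks to target-head landmarks, then apply it to attachment points.
import numpy as np

def fit_affine(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 3) landmark arrays, N >= 4, same order."""
    ones = np.ones((len(src_pts), 1))
    A = np.hstack([src_pts, ones])                   # (N, 4)
    X, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)  # (4, 3) transform
    return X

def apply_affine(X, pts):
    ones = np.ones((len(pts), 1))
    return np.hstack([pts, ones]) @ X

landmarks_src = np.random.rand(8, 3)          # e.g. nasion, gonion, ...
landmarks_dst = landmarks_src * 1.1 + 0.05    # a slightly larger head
X = fit_affine(landmarks_src, landmarks_dst)
muscle_attachments = np.random.rand(5, 3)
print(apply_affine(X, muscle_attachments))
```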

    Physically-based forehead animation including wrinkles

    Physically-based animation techniques enable more realistic and accurate animation to be created. We present a fully physically-based approach for efficiently producing realistic-looking animations of facial movement, including animation of expressive wrinkles. This involves simulation of detailed voxel-based models using a graphics-processing-unit-based total Lagrangian explicit dynamic finite element solver with an anatomical muscle contraction model, and advanced boundary conditions that can model the sliding of soft tissue over the skull. The flexibility of our approach enables detailed animations of gross and fine-scale soft-tissue movement to be easily produced with different muscle structures and material parameters, for example, to animate differently aged skin. Although we focus on the forehead, our approach can be used to animate any multi-layered soft body.
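    At the core of a total Lagrangian explicit dynamic (TLED) solver is an explicit central-difference time step over lumped nodal masses; the sketch below shows that update with the element-level force computation abstracted away, and all constants are illustrative.

```python
# Bare-bones central-difference update of the kind used in TLED solvers,
# with mass-proportional damping. Element forces are stood in by a spring.
import numpy as np

def step(u, u_prev, f_int, f_ext, m, dt, damping=0.1):
    """One explicit step for lumped-mass nodal displacements.
    u, u_prev: current/previous displacements; m: lumped nodal masses."""
    # M a = f_ext - f_int - C v, with C = damping * M (mass-proportional)
    a = (f_ext - f_int) / m
    v = (u - u_prev) / dt
    return 2.0 * u - u_prev + dt * dt * (a - damping * v)

n = 4                                   # a tiny 1-D toy "mesh"
u = np.zeros(n); u_prev = np.zeros(n)
m = np.full(n, 1e-3)                    # lumped masses
for _ in range(100):
    f_int = 50.0 * u                    # stand-in for TLED element forces
    f_ext = np.array([0., 0., 0., 1e-3])
    u, u_prev = step(u, u_prev, f_int, f_ext, m, dt=1e-3), u
print(u)
```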

    Exporting Vector Muscles for Facial Animation

    In this paper we introduce a method of exporting vector muscles from one 3D face to another for facial animation. Starting from a 3D face with an extended version of Waters' linear muscle system, we transfer the linear muscles to a target 3D face. We also transfer the region division, which is used to increase the performance of the muscles as well as to control the animation. The human involvement is as simple as selecting the faces that show the most natural facial expressions in the animator's view. The method allows the transfer of the animation to a new 3D model within a short time, and the transferred muscles can then be used to create new animations.
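    One common way to realize such a transfer is to express each muscle endpoint in barycentric coordinates of its enclosing source triangle and evaluate those coordinates on the corresponding target triangle; the sketch below does exactly that, and may differ from the paper's method.

```python
# Sketch: carry a muscle endpoint across faces via barycentric coordinates
# of corresponding triangles.
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of p with respect to triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def transfer_point(p, tri_src, tri_dst):
    u, v, w = barycentric(p, *tri_src)
    return u * tri_dst[0] + v * tri_dst[1] + w * tri_dst[2]

tri_s = [np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])]
tri_d = [np.array([0., 0., 1.]), np.array([2., 0., 1.]), np.array([0., 2., 1.])]
print(transfer_point(np.array([0.25, 0.25, 0.0]), tri_s, tri_d))
```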