
    Enhanced Waters 2D muscle model for facial expression generation

    In this paper we present an improved Waters facial model used as an avatar for the work published in (Kumar and Vanualailai, 2016), which described a Facial Animation System driven by the Facial Action Coding System (FACS) in a low-bandwidth video streaming setting. FACS defines 32 single Action Units (AUs), each generated by an underlying muscle action, which interact in different ways to create facial expressions. Because a FACS AU describes an atomic facial distortion in terms of facial muscles, a face model that allows AU mappings to be applied directly to the respective muscles is desirable. Hence, for this task we chose the Waters anatomy-based face model for its simplicity and its implementation of pseudo muscles. However, the Waters face model is limited in its ability to create realistic expressions, mainly owing to the lack of a function to represent sheet muscles, an unrealistic jaw rotation function, and an improper implementation of the sphincter muscles. We therefore enhance the Waters facial model by improving its UI, adding sheet muscles, providing an alternative implementation of the jaw rotation function, presenting a new sphincter muscle model that can be used around the eyes, and changing the operation of the sphincter muscle used around the mouth.
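
    The base model here is Waters' anatomy-based pseudo-muscle scheme, in which a linear muscle pulls nearby mesh vertices toward its attachment point with angular and radial fall-off. The sketch below is a minimal Python illustration of such a Waters-style linear muscle; the function name, parameters, and the exact cosine fall-off terms are assumptions, and the paper's own enhancements (sheet muscles, the revised jaw rotation, the new sphincter models) are not reproduced.

    import numpy as np

    def linear_muscle(vertices, v1, v2, theta, rs, rf, k):
        # Sketch of a Waters-style linear muscle (illustrative, not the
        # paper's enhanced model). v1: fixed attachment point; v2: insertion
        # point; vertices: (N, 3) mesh vertices; theta: angular half-width
        # of the zone of influence (radians); rs, rf: inner/outer fall-off
        # radii; k: muscle contraction factor.
        muscle = (v2 - v1) / np.linalg.norm(v2 - v1)
        out = vertices.copy()
        for i, p in enumerate(vertices):
            d = p - v1
            dist = np.linalg.norm(d)
            if dist == 0.0 or dist > rf:
                continue                      # outside the radial field
            alpha = np.arccos(np.clip(d @ muscle / dist, -1.0, 1.0))
            if alpha > theta:
                continue                      # outside the angular zone
            a = np.cos(alpha)                 # angular fall-off
            if dist <= rs:                    # inner zone
                r = np.cos((1.0 - dist / rs) * np.pi / 2.0)
            else:                             # outer fall-off zone
                r = np.cos((dist - rs) / (rf - rs) * np.pi / 2.0)
            out[i] = p + a * k * r * (v1 - p) / dist  # pull toward attachment
        return out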

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, animated facial expressions is developed. The presented pattern functions produce detailed, animated facial expressions. The system produces realistic results with fast performance and allows users to manipulate it directly and see immediate results. Two methods for generating real-time, vivid, animated tears have been developed and implemented. One generates a teardrop that continually changes its shape as it drips down the face. The other generates a shedding tear, which seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden computer graphics techniques and increase the realism of facial expressions. A new method for automatically placing bones on facial/head models to speed up the rigging of a human face is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between its parts, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face at multiple densities, the mean value of the vertices in each group is measured. The time saved with this method is significant. Finally, a novel method for producing realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach transforms the source model into the target model, which then has the same topology as the source model. Displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
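
    The transfer step described at the end (calculating displacement vectors and mapping each source vertex to the target) can be sketched as follows, assuming a precomputed vertex mapping; the function name and arguments are illustrative, and the dissertation's constraints on the spatial relationships of the mapped vertices are omitted.

    import numpy as np

    def transfer_expression(src_neutral, src_expr, tgt_neutral, mapping, scale=1.0):
        # src_neutral, src_expr: (Ns, 3) source model in neutral/expressive pose.
        # tgt_neutral: (Nt, 3) target model in neutral pose.
        # mapping: (Nt,) index of the source vertex mapped to each target
        # vertex (assumed precomputed, e.g. after the topology transformation).
        displacement = src_expr - src_neutral       # per-vertex source motion
        return tgt_neutral + scale * displacement[mapping]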

    Final Report to NSF of the Standards for Facial Animation Workshop

    The human face is an important and complex communication channel. It is a very familiar and sensitive object of human perception. The facial animation field has grown greatly in the past few years as fast computer graphics workstations have made the modeling and real-time animation of hundreds of thousands of polygons affordable and almost commonplace. Many applications have been developed, such as teleconferencing, surgery, information assistance systems, games, and entertainment. To address these different problems, different approaches to both animation control and modeling have been developed.

    CASA 2009:International Conference on Computer Animation and Social Agents


    Example Based Caricature Synthesis

    The likeness of a caricature to the original face image is an essential and often overlooked part of caricature production. In this paper we present an example-based caricature synthesis technique consisting of shape exaggeration, relationship exaggeration, and optimization for likeness. Rather than relying on a large training set of caricature face pairs, our shape exaggeration step is based on only one or a small number of examples of facial features. The relationship exaggeration step introduces two definitions that facilitate global facial feature synthesis. The first is the T-Shape rule, which describes the relative relationship between the facial elements in an intuitive manner. The second is the so-called proportions rule, which characterizes the facial features as a set of proportions. Finally, we introduce a similarity metric as the likeness metric, based on the Modified Hausdorff Distance (MHD), which allows us to optimize the configuration of facial elements, maximizing likeness while satisfying a number of constraints. The effectiveness of our algorithm is demonstrated with experimental results.
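
    The Modified Hausdorff Distance used as the likeness metric is the standard point-set distance of Dubuisson and Jain (1994). A minimal sketch follows, assuming facial elements are represented as sets of sampled 2D contour points; the function name is illustrative. The optimization step can then adjust the configuration of facial elements to minimize this distance subject to the exaggeration constraints.

    import numpy as np

    def modified_hausdorff(a, b):
        # Modified Hausdorff Distance between point sets a (n, 2) and b (m, 2),
        # e.g. sampled facial-feature contours of the caricature and original.
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n, m)
        d_ab = d.min(axis=1).mean()   # mean nearest-neighbour distance a -> b
        d_ba = d.min(axis=0).mean()   # mean nearest-neighbour distance b -> a
        return max(d_ab, d_ba)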

    ANALYSIS OF FACIAL EXPRESSIONS IN PATIENTS WITH SCHIZOPHRENIA, IN COMPARISON WITH A HEALTHY CONTROL - CASE STUDY

    Introduction: Deficits in the area of communication, which is crucial for maintaining proper social bonds, may have a prominent adverse impact on the quality of life of patients with schizophrenia. Social exclusion, lack of employment, and deterioration of family life may be consequences of impaired social competencies caused by an inability to properly exhibit and interpret facial expressions. Although this phenomenon has been known since the first clinical descriptions of schizophrenia, the lack of a proper methodology has limited our knowledge in this area. The aim of our study was to compare the facial expressivity of a patient with schizophrenia and a healthy individual. Methods: A 47-year-old patient suffering from schizophrenia and a 36-year-old healthy individual were invited to participate in our study. They were examined in the Human Facial Modelling Lab of the Polish-Japanese Institute of Information Technology in Bytom (Silesia, Katowice). Both participants were presented with two video materials: the first contained different facial expressions, which they had to imitate; the second was a part of a comedy show, during which spontaneous reactions were recorded. Acquisition of facial expressions was conducted with marker-based modelling technology. The obtained data were analyzed using Microsoft Excel. Results and conclusions: Overall facial expression intensity, expressed as the average distance traveled by the markers during shifts from the neutral position, was higher for the healthy participant during both parts of the study. The difference was especially visible in the upper half of the face. The use of marker-based methods in the analysis of human facial expressions appears to be reliable and remarkably accurate.
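
    The intensity measure described (the average distance traveled by the markers from their neutral positions) is simple to compute. A minimal sketch with illustrative names, assuming the markers are tracked as 3D positions over time:

    import numpy as np

    def expression_intensity(frames, neutral):
        # frames: (T, M, 3) positions of M markers over T frames;
        # neutral: (M, 3) marker positions in the neutral pose.
        disp = np.linalg.norm(frames - neutral[None, :, :], axis=-1)  # (T, M)
        return disp.mean()            # average over frames and markers

    # Regional comparison (e.g. upper vs. lower face), given index lists:
    # upper = expression_intensity(frames[:, upper_idx], neutral[upper_idx])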

    Automatic Video Self Modeling for Voice Disorder

    Video self modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of him- or herself. In the field of speech-language pathology, VSM has been successfully used in language treatment for children with autism and for individuals with the fluency disorder of stuttering. Technical challenges remain in creating VSM content that depicts previously unseen behaviors. In this paper, we propose a novel system that synthesizes new video sequences for VSM treatment of patients with voice disorders. Starting with a video recording of a voice-disorder patient, the proposed system replaces the coarse speech with clean, healthier speech that bears resemblance to the patient’s original voice. The replacement speech is synthesized using a text-to-speech engine or selected from a database of clean speech based on a voice similarity metric. To realign the replacement speech with the original video, a novel audiovisual algorithm that combines audio segmentation with lip-state detection is proposed to identify corresponding time markers in the audio and video tracks. Lip synchronization is then accomplished using an adaptive video re-sampling scheme that minimizes motion jitter and preserves spatial sharpness. Results of both objective measurements and subjective evaluations on a dataset with 31 subjects demonstrate the effectiveness of the proposed techniques.
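
    The realignment step pairs corresponding time markers in the replacement audio and the original video, then re-samples the video between them. A simplified sketch of such a piecewise-linear time warp is shown below; the function and its parameters are assumptions, and the paper's adaptive scheme for minimizing motion jitter is not reproduced.

    import numpy as np

    def resample_frame_indices(audio_marks, video_marks, fps, duration):
        # audio_marks, video_marks: matching marker times (seconds), as found
        # by audio segmentation and lip-state detection; fps: video frame
        # rate; duration: length of the replacement audio (seconds).
        t_out = np.arange(0.0, duration, 1.0 / fps)         # output timeline
        t_src = np.interp(t_out, audio_marks, video_marks)  # warped source times
        return np.round(t_src * fps).astype(int)            # source frame per output frame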

    Microanalysis of nonverbal communication: Development of a nonverbal research method using high-performance 3D character animation

    This work provides a novel research tool for the field of nonverbal communication, with the goal of transforming 3D motion data into metric measurements that allow the application of standard statistical methods such as analysis of variance, factor analysis, or multiple regression analysis. 3D motion data are automatically captured by motion capture systems or manually coded by humans using 3D character animation software. They precisely describe human movements, but without further data processing they cannot be meaningfully interpreted and statistically analyzed. To make this possible, three nonverbal coding systems describing static body postures, dynamic body movements, and motions of individual body parts such as head nods have been developed. A geometrical model describing postures and movements as flexion angles of body parts on three clearly understandable and nonverbally relevant dimensions (the sagittal, the rotational, and the lateral) has been developed; it provides the basis for mathematical formulas that transform motion capture data or 3D animation data into metric measures. Furthermore, formulas were developed to compute around 30 nonverbal cues described in the literature on kinesics that can be understood as geometrical features of body parts, such as openness, symmetry, and expansiveness of body postures, head position and head nods, gaze direction and body orientation, pointing behavior and relational gestures, interactional synchrony, proxemics, and touch, including dynamic features of movements such as rate, velocity, and acceleration. To obtain accurate measurements, the software APEx (Automatic Parameter Extraction) has been developed, with a number of convenient features, which extracts more than 150 nonverbal parameters comprising 380 metric variables from the available motion data.
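
    As an illustration of turning motion data into metric variables of this kind, the sketch below computes a flexion angle at a joint and the dynamic features (angular velocity and acceleration) of an angle time series; the function names are assumptions, and the coding systems' sagittal/rotational/lateral decomposition is not reproduced.

    import numpy as np

    def flexion_angle(joint, parent, child):
        # Flexion angle (radians) at `joint`, measured between the bone
        # vectors joint->parent and joint->child (each a 3D position).
        u = parent - joint
        v = child - joint
        cos_ang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos_ang, -1.0, 1.0))

    def angle_kinematics(angles, dt):
        # Dynamic features of a movement: angular velocity and acceleration
        # from a time series of angles, via finite differences.
        vel = np.gradient(angles, dt)
        acc = np.gradient(vel, dt)
        return vel, acc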