17 research outputs found

    A biomechanical model of the face including muscles for the prediction of deformations during speech production

    A 3D biomechanical finite element model of the face is presented. Muscles are represented by piece-wise uniaxial tension cable elements linking the insertion points. These insertion points are entities distinct from the nodes of the finite element mesh, which makes it possible to change either the mesh or the muscle implementation independently of each other. Lip/teeth and upper lip/lower lip contacts are also modeled. Simulations of smiling and of an Orbicularis Oris activation are presented and interpreted. The importance of a proper account of contacts and of an accurate anatomical description is shown.
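The cable-element idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the linear force law, the function name and the activation term are all assumptions; it only shows the tension-only behaviour of a piece-wise uniaxial cable linking insertion points.

```python
import numpy as np

def cable_muscle_forces(points, activation, stiffness, rest_lengths):
    """Tension-only forces for a piece-wise uniaxial cable muscle.

    points:       (n, 3) insertion-point positions along the cable.
    rest_lengths: per-segment rest lengths (n - 1 values).
    Returns an (n, 3) array of forces on the insertion points.
    Hypothetical force law: tension = stiffness * max(strain, 0) + activation.
    """
    forces = np.zeros_like(points)
    for i in range(len(points) - 1):
        seg = points[i + 1] - points[i]
        length = np.linalg.norm(seg)
        direction = seg / length
        strain = (length - rest_lengths[i]) / rest_lengths[i]
        # Cables only pull: no compressive force when the segment is slack.
        tension = stiffness * max(strain, 0.0) + activation
        forces[i] += tension * direction
        forces[i + 1] -= tension * direction
    return forces
```

Because the insertion points are separate entities, forces like these would be distributed onto nearby mesh nodes by interpolation, which is what decouples the muscle definition from the mesh.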

    Advanced ODE Based Head Modelling for Chinese Marionette Art Preservation

    Puppetry has been a popular art form for many centuries across different cultures and has become a valuable and fascinating heritage asset. Traditional Chinese marionette art, with over 2,000 years of history, is one of the most representative forms, offering a mixture of stage performances of singing, dancing, music, poetry, opera, storytelling and action. Apart from the set of string rules that controls the dynamics, head carving is another important pillar of this art form. This paper addresses the heritage preservation of marionette head carving by digitalizing the head models with a novel modelling technique based on ordinary differential equations (ODEs). The technique has been specially tailored to suit the modelling complexity and the need for an accurate description of shapes. It offers smoothly sewn ODE swept patches to represent the distinct features of a marionette head, with their sharp variations of local geometry. Such features are otherwise difficult to model and capture accurately, and may require great effort and tedious hand-crafting by an experienced modeller when using other representations such as polygons.
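A minimal sketch of the swept-patch idea: in ODE-based sweeping the patch is a solution of a vector-valued fourth-order ODE, and with the simplest choice d⁴X/du⁴ = 0 the general solution is a cubic in u, fixed by positional and tangential boundary conditions. The function name and the simplified ODE are assumptions, not the paper's full formulation.

```python
import numpy as np

def ode_swept_patch(curve0, curve1, tan0, tan1, n_u=8):
    """Patch from the fourth-order ODE d4X/du4 = 0 (general solution: a
    cubic in u), constrained by two boundary curves and their
    cross-boundary tangents.

    curve0, curve1: (m, 3) boundary curves at u = 0 and u = 1.
    tan0, tan1:     (m, 3) cross-boundary tangents at those curves.
    Returns an (n_u, m, 3) grid of patch points.
    """
    u = np.linspace(0.0, 1.0, n_u)[:, None, None]
    # Cubic Hermite basis functions solve the ODE and match the
    # position/tangent boundary data exactly.
    h00 = 2 * u**3 - 3 * u**2 + 1
    h10 = u**3 - 2 * u**2 + u
    h01 = -2 * u**3 + 3 * u**2
    h11 = u**3 - u**2
    return h00 * curve0 + h10 * tan0 + h01 * curve1 + h11 * tan1
```

Sewing several such patches with shared boundary curves and tangents is what yields the smooth joins while still allowing sharp local features at patch boundaries.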

    A framework for automatic and perceptually valid facial expression generation

    Facial expressions are facial movements reflecting the internal emotional states of a character or responses to social communication. Realistic facial animation should consider at least two factors: a believable visual effect and valid facial movements. However, most research treats these two issues separately. In this paper, we present a framework for generating 3D facial expressions that considers both the visual and the dynamic effects. A facial expression mapping approach based on local geometry encoding is proposed, which encodes deformation in the 1-ring neighborhood. This method is capable of mapping subtle facial movements without shape or topological constraints. Facial expression mapping is achieved in three steps: correspondence establishment, deviation transfer and movement mapping. Deviation is transferred to the conformal face space by minimizing an error function formed from the source neutral and deformed face models, related by the transformation matrices in the 1-ring neighborhood. The transformation matrix in the 1-ring neighborhood is independent of the face shape and the mesh topology. After the facial expression mapping, dynamic parameters, generated using psychophysical methods, are integrated with the facial expressions to produce valid expressions. The efficiency and effectiveness of the proposed methods have been tested on various face models with different shapes and topological representations.
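The core of the 1-ring encoding can be sketched as a per-vertex least-squares fit: estimate the 3×3 transformation that maps the neutral source's 1-ring edge vectors to the deformed source's, then apply it to the corresponding target vertex's 1-ring. The function names and the plain least-squares formulation are assumptions standing in for the paper's error-function minimization.

```python
import numpy as np

def one_ring_transform(neutral_edges, deformed_edges):
    """Least-squares 3x3 transform T mapping neutral 1-ring edge vectors
    to their deformed counterparts: deformed ~ neutral @ T.

    neutral_edges, deformed_edges: (k, 3) edge vectors around one vertex.
    """
    T, *_ = np.linalg.lstsq(neutral_edges, deformed_edges, rcond=None)
    return T

def map_expression(target_edges, T):
    """Apply the source 1-ring transform to a target vertex's 1-ring."""
    return target_edges @ T
```

Because T is fitted from edge vectors rather than absolute positions, it does not depend on where the vertex sits or how many neighbours it has, which matches the claimed independence from face shape and mesh topology.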

    Expressive Facial Gestures From Motion Capture Data


    Automating facial expressions related to physical activity

    Facial expressions are complex to model because they arise from diverse factors, chiefly psychological, biomechanical and sensory. Although the various facial capture techniques can achieve a realistic result, they are costly and difficult to use in a real-time interactive context. A number of tools and techniques exist to automate facial animation driven by speech or emotion, but no tool automates the facial expression associated with physical activity. This often leads to unrealistic characters, especially when 3D characters perform intense physical activities. The goal of this research is to highlight the link between physical activity and facial expression and to propose an approach based on real data that improves the realism of facial expression while leaving creative control. First, motion captures were carried out to collect data linking biological aspects, mechanical aspects and facial expression. There were two types of capture sessions, each acquiring specific data and involving several participants. These data were used to train machine-learning models that predict facial expressions from inputs such as the 3D character's motion, the weights lifted, and so on. The proposed approach can be used in real time, with pre-recorded or keyframed animations, which makes it suitable for video games as well as for animated films and visual effects.
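The learned mapping described above can be sketched with a toy model. Everything here is hypothetical: the feature names, the dimensions, the synthetic stand-in data and the choice of a ridge-regularised linear regressor; the thesis does not specify this model. The sketch only shows the shape of the problem: effort features in, blendshape weights out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: per-frame physical-effort features
# (e.g. speed, load, duration, fatigue) against facial blendshape
# weights (e.g. brow, jaw, squint). Synthetic stand-in for capture data.
features = rng.uniform(size=(200, 4))
true_map = rng.uniform(size=(4, 3))
blendshapes = features @ true_map

# Ridge-regularised linear fit: W = (X^T X + lam I)^-1 X^T Y.
lam = 1e-6
W = np.linalg.solve(features.T @ features + lam * np.eye(4),
                    features.T @ blendshapes)

def predict_expression(effort_features):
    """Predict blendshape weights from physical-effort features."""
    return effort_features @ W
```

At runtime such a predictor can be evaluated per frame, which is what makes the approach usable both for live gameplay and for pre-recorded or keyframed animation.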

    Three Dimensional Modeling and Animation of Facial Expressions

    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, animated facial expressions is developed. Its pattern functions produce detailed, animated facial expressions. The system produces realistic results with fast performance and allows users to manipulate it directly and see immediate results. Two methods for generating real-time, vivid, animated tears have been developed and implemented. One generates a teardrop that continually changes shape as it drips down the face; the other generates a shedding tear, which seamlessly connects with the skin as it flows along the surface of the face while remaining an individual object. Both methods broaden CG and increase the realism of facial expressions. A new method to automatically place bones on face/head models, speeding up the rigging process of a human face, is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between its parts, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face at multiple densities, the mean position of the vertices in each group is measured. The time saved with this method is significant. Finally, a novel method produces realistic expressions and animations by transferring an existing expression to a new facial model. The approach transforms the source model into the target model, which then has the same topology as the source model. Displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
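The bone-placement step, at its simplest, is group-then-average: group the vertices describing a facial region and place that region's bone at the group's mean position. The function name and the group layout below are illustrative, not the dissertation's actual grouping.

```python
import numpy as np

def place_bones(vertices, groups):
    """Place one bone per vertex group at the group's mean position.

    vertices: (n, 3) mesh vertex positions.
    groups:   dict mapping a (hypothetical) region name to vertex indices.
    Returns a dict mapping region name to a (3,) bone position.
    """
    return {name: vertices[idx].mean(axis=0) for name, idx in groups.items()}
```

Finer rig densities would simply use smaller groups, so the same averaging step yields more, more localized bones.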

    Facial soft tissue segmentation

    The importance of the face for socio-ecological interaction creates a high demand on any surgical intervention on the facial musculo-skeletal system. Bones and soft tissues are of major importance for any facial surgical treatment to guarantee an optimal, functional and aesthetic result. For this reason, surgeons want to pre-operatively plan, simulate and predict the outcome of surgery, allowing for shorter operation times and improved quality. Accurate simulation requires exact segmentation of the facial tissues, so semi-automatic segmentation techniques are required. This thesis proposes semi-automatic methods for segmenting the facial soft tissues, such as muscles, skin and fat, from CT and MRI datasets, using a Markov Random Fields (MRF) framework. Due to image noise, artifacts, weak edges and multiple objects of similar appearance in close proximity, it is difficult to segment the object of interest from image information alone: segmentations leak at weak edges into neighboring structures with a similar intensity profile. To overcome this problem, additional shape knowledge is incorporated into the energy function, which can then be minimized using Graph-Cuts (GC). Incremental approaches incorporating additional prior shape knowledge are presented. The proposed approaches are not object specific and can be applied to segment any class of objects, whether anatomical or non-anatomical, from medical or non-medical image datasets, whenever a statistical model is available. In the first approach, a 3D mean shape template is used as shape prior and is integrated into the MRF-based energy function. Here, the shape knowledge is encoded into the data and smoothness terms of the energy function, which constrains the segmented parts to a reasonable shape. 
    In the second approach, to better handle the shape variations naturally found in the population, the fixed shape template is replaced by a more robust 3D statistical shape model based on Probabilistic Principal Component Analysis (PPCA). The advantage of Probabilistic PCA is that it allows reconstructing the optimal shape and computing the remaining variance of the statistical model from partial information. Using an iterative method, the statistical shape model is then refined with image-based cues to better fit the statistical model to the patient's muscle anatomy. These image cues are based on the segmented muscle, edge information and the intensity likelihood of the muscle. Here, a linear shape update mechanism fits the statistical model to the image-based cues. In the third approach, the shape refinement step is further improved with a non-linear shape update mechanism, where vertices of the 3D mesh of the statistical model incur a non-linear penalty depending on the remaining variability at each vertex. The non-linear shape update provides a more accurate update and a finer fitting of the statistical model to the image-based cues in areas where the shape variability is high. Finally, a unified approach is presented to segment the relevant facial muscles and the remaining facial soft tissues (skin and fat). One soft-tissue layer is removed at a time: first the head is separated from non-head regions, followed by the skin. In the next step, bones are removed from the dataset, followed by the separation of brain and non-brain regions and the removal of air cavities. Afterwards, facial fat is segmented using the standard Graph-Cuts approach. After separating the important anatomical structures, a 3D fixed shape template mesh of the facial muscles is finally used to segment the relevant facial muscles. The proposed methods are tested on the challenging example of segmenting the masseter muscle. 
    The datasets were noisy, and almost all contained mild to severe imaging artifacts, such as high-density artifacts caused by dental fillings and dental implants. Qualitative and quantitative experimental results show that incorporating prior shape knowledge effectively constrains leaking and yields better segmentation results.
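The energy that Graph-Cuts minimizes in such approaches can be sketched for a binary labelling on a 2D grid: a data term, a Potts smoothness term, and a shape-prior term that penalizes foreground far from a template distance map. The function name, the class means and the specific penalty forms are illustrative assumptions; the thesis's actual terms differ.

```python
import numpy as np

def mrf_energy(labels, intensity, shape_dist, lam=1.0, mu=0.5):
    """Energy of a binary labelling on a 2D grid.

    labels:     (h, w) array of 0/1 labels (background/foreground).
    intensity:  (h, w) normalized image intensities.
    shape_dist: (h, w) distance map to a shape template (0 inside).
    """
    means = np.array([0.2, 0.8])  # assumed background/foreground means
    # Data term: squared distance of intensity to the per-label mean.
    data = ((intensity - means[labels]) ** 2).sum()
    # Smoothness term: Potts penalty on 4-neighbour label changes.
    smooth = (labels[1:, :] != labels[:-1, :]).sum() \
           + (labels[:, 1:] != labels[:, :-1]).sum()
    # Shape prior: foreground far from the template is penalized.
    shape = (shape_dist * (labels == 1)).sum()
    return data + lam * smooth + mu * shape
```

Because all three terms are submodular in this binary form, a max-flow/min-cut solver can find the globally optimal labelling; the sketch only evaluates the energy, which is enough to compare candidate segmentations.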

    Expressing emotions of joy in virtual characters through laughter and smiling

    Facial animation is one of the research topics still unresolved both in the field of human-machine interaction and in computer graphics. Expressions of joy associated with laughter and smiling are, owing to their meaning and importance, a fundamental part of these fields. This thesis approaches the representation of the different types of laughter in facial animation and presents a new method capable of reproducing all of these types. The method is validated by recreating movie sequences and by using databases of generic facial expressions and of specific smile expressions. Additionally, a proprietary database is created that collects the different types of laughter classified and generated in this work. Based on this database, the most representative expressions of each laugh and smile considered in the study are generated.
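One simple way to pick a "most representative" expression from a database of samples of one laughter type is the medoid: the sample with the smallest summed distance to all others. This is an assumed reading of the selection step, not the thesis's stated method, and the function name is hypothetical.

```python
import numpy as np

def representative_expression(samples):
    """Medoid of a set of expression parameter vectors.

    samples: (n, d) array, one row per captured expression.
    Returns the sample minimizing the summed distance to all others.
    """
    diffs = samples[:, None, :] - samples[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return samples[dists.sum(axis=1).argmin()]
```

Unlike a plain mean, the medoid is always an actual captured expression, so the representative of each laughter class remains a physically valid face.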