
    Facial and Bodily Expressions for Control and Adaptation of Games (ECAG 2008)


    Accurate and Interpretable Solution of the Inverse Rig for Realistic Blendshape Models with Quadratic Corrective Terms

    We propose a new model-based algorithm for solving the inverse rig problem in facial animation retargeting that achieves a more accurate fit and a sparser, more interpretable weight vector than state-of-the-art (SOTA) methods. The proposed method targets a specific subdomain of human face animation: highly realistic blendshape models used in the production of movies and video games. We formulate an optimization problem that takes into account all the requirements of the targeted models. Our objective goes beyond a linear blendshape model and employs the quadratic corrective terms necessary for correctly fitting fine details of the mesh. We show that the solution to the proposed problem yields highly accurate mesh reconstruction even when general-purpose solvers, such as SQP, are used. The results obtained with SQP are highly accurate in the mesh space but do not exhibit favorable weight sparsity or smoothness; for this reason, we further propose a novel algorithm relying on a majorization-minimization (MM) technique. The algorithm is specifically suited to the proposed objective, yielding a high-accuracy mesh fit while respecting the constraints and producing a sparse, smooth set of weights that is easy for artists to manipulate and interpret. Our algorithm is benchmarked against SOTA approaches and shows overall superior results, yielding a smooth animation reconstruction with a relative improvement of up to 45 percent in root mean squared mesh error while keeping the weight cardinality comparable with benchmark methods. The paper also gives a comprehensive set of evaluation metrics covering different aspects of the solution, including mesh accuracy, weight sparsity, and smoothness of the animation curves, as well as the appearance of the produced animation, as judged by human experts.
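    For a concrete picture of the kind of objective described above, the following is a minimal sketch, not the paper's algorithm: the symbols b0, B, C, the L1 penalty weight lam, and the use of a general-purpose bound-constrained solver are all assumptions. It fits a blendshape model with pairwise quadratic corrective terms under box constraints and a sparsity-inducing penalty.

        import numpy as np
        from scipy.optimize import minimize

        def mesh(w, b0, B, C):
            # Linear blendshape part plus pairwise quadratic corrective terms.
            # b0: rest mesh (3n,), B: blendshape deltas (3n, k), C: corrective deltas (3n, k, k).
            quad = np.einsum("i,j,nij->n", w, w, C)
            return b0 + B @ w + quad

        def objective(w, b0, B, C, target, lam):
            r = mesh(w, b0, B, C) - target
            return 0.5 * r @ r + lam * np.sum(np.abs(w))   # L1 term encourages sparse weights

        # Toy data: 4 blendshapes controlling a 6-coordinate "mesh".
        rng = np.random.default_rng(0)
        k, n3 = 4, 6
        b0 = rng.normal(size=n3)
        B = rng.normal(size=(n3, k))
        C = 0.1 * rng.normal(size=(n3, k, k))
        w_true = np.array([0.8, 0.0, 0.3, 0.0])
        target = mesh(w_true, b0, B, C)

        res = minimize(objective, x0=np.zeros(k), args=(b0, B, C, target, 1e-3),
                       method="L-BFGS-B", bounds=[(0.0, 1.0)] * k)
        print("recovered weights:", np.round(res.x, 3))

    On the feasible box 0 <= w <= 1 the L1 term reduces to a smooth sum of weights, which is why a generic bound-constrained solver suffices for this toy version; the paper's MM algorithm addresses the sparsity and smoothness qualities that such general-purpose solvers do not provide.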

    A Method for Automatically Creating a 3D Model Animation Rig on a Smart Device in Augmented Reality

    3D modeling has become more popular among novice users in recent years. The ubiquity of mobile devices has led to the need to view and edit 3D content beyond traditional desktop workstations. This thesis develops an approach for editing mesh-based 3D models in mobile augmented reality. The developed approach takes a static 3D model and automatically generates a rig with control handles so that the user can pose the model interactively. The rig is generated by approximating the model with a structure called a sphere mesh. To attach the generated spheres to the model, a technique called bone heat skinning is used. A direct manipulation scheme is presented to allow the user to pose the processed model with intuitive touch controls. Both translation and rotation operations are developed to give the user expressive power over the pose of the model without overly complicating the controls. Several example scenes are built and analyzed, showing that the developed approach can be used to build novel scenes in augmented reality. The implementation is measured to run close to real time, with processing times of around one second for the models used. The rig generation is shown to yield semantically coherent control handles, especially at lower resolutions. While the chosen bone heat skinning algorithm has theoretical shortcomings, they were not apparent in the built examples.
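    As background for the posing step, here is a minimal sketch of linear blend skinning, the standard way per-handle transforms and skinning weights (such as those produced by bone heat skinning) deform a mesh. It is illustrative only, not the thesis implementation, and all variable names are assumptions.

        import numpy as np

        def linear_blend_skinning(vertices, weights, transforms):
            """Pose vertices with per-handle rigid transforms.

            vertices:   (n, 3) rest-pose positions
            weights:    (n, h) skinning weights, rows summing to 1 (e.g. from bone heat)
            transforms: (h, 4, 4) homogeneous transform per control handle
            """
            n = vertices.shape[0]
            homog = np.hstack([vertices, np.ones((n, 1))])          # (n, 4)
            posed = np.einsum("hij,nj->nhi", transforms, homog)     # per-handle transformed verts
            blended = np.einsum("nh,nhi->ni", weights, posed)       # blend by skinning weights
            return blended[:, :3]

        # Tiny example: two handles, three vertices.
        verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
        w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
        T0 = np.eye(4)
        T1 = np.eye(4); T1[:3, 3] = [0.0, 1.0, 0.0]                 # translate the second handle up
        print(linear_blend_skinning(verts, w, np.stack([T0, T1])))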

    Emotional avatars


    Analysis and Construction of Engaging Facial Forms and Expressions: Interdisciplinary Approaches from Art, Anatomy, Engineering, Cultural Studies, and Psychology

    The topic of this dissertation is the anatomical, psychological, and cultural examination of the human face in order to effectively construct an anatomy-driven 3D virtual face customization and action model. To gain a broad perspective on all aspects of the face, theories and methodology from the fields of art, engineering, anatomy, psychology, and cultural studies have been analyzed and implemented. The computer-generated facial customization and action model was designed based on the collected data. Using this customization system, a culturally specific attractive face in Korean popular culture, the “kot-mi-nam” (flower-like beautiful guy), was modeled and analyzed as a case study. The “kot-mi-nam” phenomenon is reviewed in its textual, visual, and contextual aspects, which reveals the gender and sexuality fluidity of its masculinity. The analysis and the actual development of the model organically co-construct each other, requiring an interwoven process. Chapter 1 introduces anatomical studies of the human face, psychological theories of face recognition and facial attractiveness, and state-of-the-art face construction projects in various fields. Chapters 2 and 3 present the Bezier curve-based 3D facial customization (BCFC) and the Multi-layered Facial Action Model (MFAM), based on the analysis of human anatomy, to achieve a cost-effective yet realistic quality of facial animation without using 3D scanned data. In the experiments, results for facial customization by gender, race, fat, and age showed that BCFC achieved a performance enhancement of 25.20% compared to the existing program Facegen and 44.12% compared to Facial Studio. The experimental results also demonstrated the realistic quality and effectiveness of MFAM compared with the blend shape technique, enhancing the facial area by 2.87% and 0.03% per second for happiness and anger expressions, respectively. In Chapter 4, according to the analysis based on BCFC, the 3D face of an average kot-mi-nam is close to gender neutral (male: 50.38%, female: 49.62%) and Caucasian (66.42-66.40%). Culturally specific images can be misinterpreted in different cultures due to their different languages, histories, and contexts. This research demonstrates that facial images can be affected by the cultural tastes of their makers and can also be interpreted differently by viewers in different cultures.
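    As generic background for the curve-based customization mentioned above (this is not the dissertation's BCFC implementation, and the control-point layout is a made-up example), a cubic Bezier contour segment can be evaluated with the de Casteljau algorithm:

        import numpy as np

        def de_casteljau(control_points, t):
            """Evaluate a Bezier curve at parameter t in [0, 1] by repeated linear interpolation."""
            pts = np.asarray(control_points, dtype=float)
            while len(pts) > 1:
                pts = (1.0 - t) * pts[:-1] + t * pts[1:]
            return pts[0]

        # Hypothetical 2D contour segment (e.g. part of a jawline profile).
        contour = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.5), (4.0, 0.5)]
        samples = [de_casteljau(contour, t) for t in np.linspace(0.0, 1.0, 5)]
        print(np.round(samples, 3))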

    Expressive Body Capture: 3D Hands, Face, and Body from a Single Image

    To facilitate the analysis of human actions, interactions, and emotions, we compute a 3D model of human body pose, hand pose, and facial expression from a single monocular image. To achieve this, we use thousands of 3D scans to train a new, unified 3D model of the human body, SMPL-X, that extends SMPL with fully articulated hands and an expressive face. Learning to regress the parameters of SMPL-X directly from images is challenging without paired images and 3D ground truth. Consequently, we follow the approach of SMPLify, which estimates 2D features and then optimizes model parameters to fit the features. We improve on SMPLify in several significant ways: (1) we detect 2D features corresponding to the face, hands, and feet and fit the full SMPL-X model to these; (2) we train a new neural network pose prior using a large MoCap dataset; (3) we define a new interpenetration penalty that is both fast and accurate; (4) we automatically detect gender and the appropriate body model (male, female, or neutral); (5) our PyTorch implementation achieves a speedup of more than 8x over Chumpy. We use the new method, SMPLify-X, to fit SMPL-X to both controlled images and images in the wild. We evaluate 3D accuracy on a new curated dataset comprising 100 images with pseudo ground truth. This is a step towards automatic expressive human capture from monocular RGB data. The models, code, and data are available for research purposes at https://smpl-x.is.tue.mpg.de. To appear in CVPR 2019.
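    To illustrate the general fit-model-parameters-to-2D-features idea at toy scale (this is not SMPLify-X or the SMPL-X API; the linear joint model, weak-perspective projection, and quadratic prior below are stand-in assumptions), one can minimize a 2D reprojection loss with gradient descent in PyTorch:

        import torch

        # Toy stand-in for a parametric body model: 3D joints are a linear function
        # of a small parameter vector (the real SMPL-X model is far richer).
        torch.manual_seed(0)
        n_joints, n_params = 8, 10
        base_joints = torch.randn(n_joints, 3)
        basis = 0.1 * torch.randn(n_joints, 3, n_params)

        def joints_3d(params):
            return base_joints + torch.einsum("jdp,p->jd", basis, params)

        def project(joints, scale, trans):
            # Weak-perspective projection of 3D joints to 2D keypoints.
            return scale * joints[:, :2] + trans

        # "Detected" 2D keypoints generated from unknown parameters.
        true_params = torch.randn(n_params)
        target_2d = project(joints_3d(true_params), 1.0, torch.zeros(2))

        params = torch.zeros(n_params, requires_grad=True)
        opt = torch.optim.Adam([params], lr=0.05)
        for step in range(300):
            opt.zero_grad()
            pred = project(joints_3d(params), 1.0, torch.zeros(2))
            reproj = ((pred - target_2d) ** 2).sum()       # 2D reprojection term
            prior = 1e-3 * (params ** 2).sum()             # crude stand-in for a learned pose prior
            (reproj + prior).backward()
            opt.step()

        final = ((project(joints_3d(params), 1.0, torch.zeros(2)) - target_2d) ** 2).sum()
        print("final reprojection error:", float(final))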

    Exploration of Mouth Shading and Lighting in CG Production

    The lighting and shading of human teeth in current computer animation features and live-action movies with effects are often intentionally avoided or handled by simple methods, since teeth interact with light in complex ways through their intricate layered structure. The semi-translucent appearance of natural human teeth, which results from subsurface scattering, is difficult to replicate in synthetic scenes, though two techniques are often implemented. The first technique is to create an anatomically correct layered model and render the teeth with physical subsurface materials, using both theoretically and empirically derived optical parameters of human teeth. The second technique largely takes advantage of visual cheating, achieved by irradiance blending of finely painted textures. The results visually confirm that, for most situations, non-physically based shading can yield believable rendered teeth by finely controlling the contribution layers. In particular situations, however, such as an extremely close shot of a mouth, a physically correct shading model is necessary to produce highly translucent and realistic teeth.
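    As a rough illustration of the texture-blending idea behind the second technique (a hypothetical sketch only; the layer names and weights are assumptions, not the production setup described in the thesis), painted layers can be combined by weighted blending into a single map:

        import numpy as np

        def blend_layers(layers, weights):
            """Blend painted texture layers into a single contribution map.

            layers:  list of (h, w, 3) float arrays in [0, 1]
            weights: per-layer contribution weights (renormalized to sum to 1)
            """
            weights = np.asarray(weights, dtype=float)
            weights = weights / weights.sum()
            out = np.zeros_like(layers[0])
            for layer, wgt in zip(layers, weights):
                out += wgt * layer
            return np.clip(out, 0.0, 1.0)

        # Hypothetical layers: base dentin tone, enamel highlight, painted occlusion.
        h, w = 4, 4
        dentin = np.full((h, w, 3), [0.85, 0.80, 0.70])
        enamel = np.full((h, w, 3), [0.95, 0.95, 0.92])
        occlusion = np.full((h, w, 3), [0.30, 0.28, 0.25])
        print(blend_layers([dentin, enamel, occlusion], [0.6, 0.3, 0.1])[0, 0])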