11 research outputs found

    Exploring How Faces Reveal Our Ethnicity

    The human face varies with ethnicity as well as individually within any ethnotype. The ethnic variation of the human face is seldom explicitly addressed in education. It would be of great value to foster an appreciation of the face as telling the story of both the commonality of all humankind and the diversity of our global distribution. Faces tell of origins and cultures. The language with which a face tells this story should be taught. It is a language not of words but of shapes, specifically three-dimensional shapes. Modern technology enables immersive visualization of three-dimensional shape in compelling ways that help us learn a language with which to describe faces. An interactive animation framework is introduced that allows exploration of the space of ethnic variation via a set of intuitive, humanly understandable facial shape properties. Parametric variation in these properties makes explicit how our faces reveal our ethnicity.
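As a rough illustration of the parametric exploration described above, the sketch below linearly interpolates a pair of intuitive shape properties between two averaged ethnotype settings; the property names and values are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

# Illustrative parameter sweep of the kind such a framework animates:
# blend intuitive facial shape properties between the averaged values
# of two ethnotypes. Property names and values are made up here.

ethnotype_a = {"nasal_bridge_height": 0.8, "brow_ridge_prominence": 0.6}
ethnotype_b = {"nasal_bridge_height": 0.3, "brow_ridge_prominence": 0.4}

def interpolate(a, b, t):
    """Linearly blend two property dictionaries at time t in [0, 1]."""
    return {k: (1.0 - t) * a[k] + t * b[k] for k in a}

# Five animation frames sweeping from ethnotype A to ethnotype B;
# each frame's property values would drive the 3D face model.
for t in np.linspace(0.0, 1.0, 5):
    print(round(float(t), 2), interpolate(ethnotype_a, ethnotype_b, t))
```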

    Auto Lip-Sync Pada Karakter Virtual 3 Dimensi Menggunakan Blendshape

    Auto Lip-Sync on 3D Virtual Character Using Blendshape. Creating a 3D virtual character that can speak like a human is a challenge for animators. The difficulties are the long production time and the complexity of the many phonemes that make up a sentence. An auto lip-sync technique is used to build a 3D virtual character that can speak like a typical human. The Preston Blair phoneme series serves as the reference for forming the character's visemes. Splitting the dialogue into phonemes and synchronizing the audio within the 3D software are the final stages in producing auto lip-sync for a 3D virtual character.
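A minimal sketch of the phoneme-to-viseme stage of such an auto lip-sync pipeline, assuming a rig that exposes one blendshape weight per Preston Blair mouth shape; the phoneme symbols, timings, and mapping shown are illustrative rather than the paper's exact tables.

```python
# Sketch of phoneme-to-viseme blendshape keyframing. Assumes a rig
# exposing one blendshape weight per Preston Blair viseme; phoneme
# timings would normally come from an audio forced aligner.

# Preston Blair groups many phonemes onto a small set of mouth shapes.
PHONEME_TO_VISEME = {
    "AA": "AI", "AE": "AI", "AY": "AI",
    "IY": "E",  "EH": "E",
    "OW": "O",  "AO": "O",
    "UW": "U",
    "M": "MBP", "B": "MBP", "P": "MBP",
    "F": "FV",  "V": "FV",
    "L": "L",
    "W": "WQ",
    # remaining phonemes map onto the same small viseme set
}

def viseme_weights(phoneme, strength=1.0):
    """Return blendshape weights for a single phoneme frame."""
    weights = {v: 0.0 for v in set(PHONEME_TO_VISEME.values())}
    viseme = PHONEME_TO_VISEME.get(phoneme)
    if viseme is not None:
        weights[viseme] = strength
    return weights

def keyframes(aligned_phonemes):
    """aligned_phonemes: list of (time_seconds, phoneme) pairs."""
    return [(t, viseme_weights(p)) for t, p in aligned_phonemes]

# Example: the word "map" spoken over roughly half a second.
print(keyframes([(0.00, "M"), (0.12, "AE"), (0.30, "P")]))
```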

    Enhanced facial expression using oxygenation absorption of facial skin

    Facial skin appearance is affected by the physical and physiological state of the skin. Facial expression, and skin appearance especially, is in constant flux, changing dynamically as humans behave, talk, and experience stress. Skin color is considered one of the key indicators of these states. Skin color is largely determined by the scattering and absorption of light within the skin layers, in which the concentration of the chromophore melanin and the oxygenation of hemoglobin in the blood play a pivotal role. An improvement on a prior model is proposed to create a realistic textured three-dimensional (3D) facial model for animation. This thesis considers both surface and subsurface scattering, capable of simulating the interaction of light with human skin. Six parameters are used in this research: the amounts of oxygenation, de-oxygenation, hemoglobin, melanin, and oil, plus a blend factor for the different types of melanin in the skin, to generate a close match to specific skin types. The proposed model is combined with Blend Shape Interpolation and the Facial Action Coding System to create five basic facial emotional expressions, namely anger, happiness, neutrality, sadness, and fear. Meanwhile, the correlation between blood oxygenation and changing facial skin color for basic natural emotional expressions is measured using pulse oximetry and a 3D skin analyzer. Data from male and female subjects performing a number of partially extreme facial expressions are fed into the model for simulation. The multi-pole method for layered materials is used to calculate the spectral diffusion profiles of two-layered skin, which are then used to simulate the subsurface scattering of light within the skin. The subsurface scattering is further combined with the Torrance-Sparrow Bidirectional Reflectance Distribution Function (BRDF) model to simulate the interaction of light with an oily layer at the skin surface. The result is validated by an evaluation procedure that measures how faithfully the proposed model reproduces the expressions and skin color of a real human. The facial expression evaluation computes the Euclidean distance between the facial markers of the real human and the avatar. A second assessment validates the skin color of the avatar's facial expressions by extracting histogram color features and a color coherence vector from each image, compared against the real human and the previous work. The experimental results show around 5.12 percent improvement compared to the previous work. In achieving realistic facial expression for a virtual human based on facial skin color, texture, and hemoglobin oxygenation, the results demonstrate that the proposed model is beneficial to the development of virtual reality and game environments in computer-aided graphics animation systems.
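For reference, a hedged sketch of the Torrance-Sparrow specular term that the thesis combines with subsurface scattering for the oily surface layer; the Beckmann distribution, Cook-Torrance geometric attenuation, and Schlick Fresnel approximation used below are common choices, not necessarily the thesis's exact sub-terms.

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def torrance_sparrow(n, l, v, roughness=0.35, f0=0.028):
    """Specular reflectance for normal n, light dir l, view dir v.
    f0 ~ 0.028 is a typical normal-incidence Fresnel value for skin."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)                  # half vector
    nl = max(np.dot(n, l), 1e-6)
    nv = max(np.dot(n, v), 1e-6)
    nh = max(np.dot(n, h), 1e-6)
    vh = max(np.dot(v, h), 1e-6)
    # Beckmann microfacet distribution
    m2 = roughness ** 2
    d = np.exp((nh ** 2 - 1.0) / (m2 * nh ** 2)) / (np.pi * m2 * nh ** 4)
    # Geometric attenuation (shadowing/masking)
    g = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
    # Schlick approximation to Fresnel
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5
    return d * g * f / (4.0 * nl * nv)

# Example: light and camera roughly 30 degrees off the normal.
print(torrance_sparrow(n=[0, 0, 1], l=[0.5, 0, 0.87], v=[-0.5, 0, 0.87]))
```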

    Sintesis Ekspresi Wajah Realistik Berbasis Feature-Point Cluster Menggunakan Radial Basis Function

    The increasing demand for animated productions from production houses and television stations calls for significant change in the animation production process. Research on computer facial animation, particularly on rigging and expression transfer, is growing. The traditional approach to facial expression animation depends heavily on the animator to create the key poses and sequences of facial expression movement. As a consequence, facial animation produced for one face cannot be reused directly on another face. Automating the formation of weighted motion areas on a 3D face model with a clustering approach, together with a motion-transfer process that adapts to face shape, is therefore essential to shortening the animation production process. The principles of animation serve as a solution and guideline for creating expressive, lifelike facial motion. Realistic facial expressions can be synthesized from feature-point clusters using radial basis functions. The novelty of this research lies in automatically forming the motion areas of the face through clustering based on feature-point locations, and in retargeting with radial basis functions to synthesize realistic facial expressions. Based on all experimental stages, it can be concluded that realistic facial expression synthesis based on feature-point clusters using radial basis functions can be applied to a variety of 3D face models and adapts to the facial shape of each 3D model having the same number of marker features. Visual perception evaluation of the synthesized expressions shows that the surprise expression was the most easily recognized, at 89.32%; happy: 84.63%; sad: 77.32%; angry: 76.64%; disgust: 76.45%; and fear: 76.44%. On average, 80.13% of faces were easily recognized.
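A minimal sketch of RBF-based retargeting in the spirit described above, assuming the same number and ordering of landmarks on source and target faces; the Gaussian kernel, its width, and the stand-in data are illustrative choices, not the paper's exact setup.

```python
import numpy as np

# Fit a radial basis function mapping from source-face landmarks to
# target-face landmarks, then push deformed (expressive) source
# landmarks through the same mapping to retarget the expression.

def rbf_fit(src, dst, sigma=1.0):
    """Solve phi(||src_i - src_j||) @ W = dst for the RBF weights W."""
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)       # Gaussian kernel
    return np.linalg.solve(phi, dst)

def rbf_apply(weights, src, query, sigma=1.0):
    d = np.linalg.norm(query[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-(d / sigma) ** 2) @ weights

# Neutral landmark sets for source and target faces (same count/order).
rng = np.random.default_rng(0)
src_neutral = rng.random((68, 3))
tgt_neutral = src_neutral * 1.2 + 0.05    # stand-in for a real target
W = rbf_fit(src_neutral, tgt_neutral)

# A source expression (neutral + displacement) retargeted to the target.
src_expression = src_neutral + 0.01 * rng.standard_normal((68, 3))
tgt_expression = rbf_apply(W, src_neutral, src_expression)
print(tgt_expression.shape)
```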

    Reducing Blendshape Interference by Selected Motion Attenuation

    [Fig. 1 caption fragment: (a) mimicking the "Jack Nicholson" expression of partially closed eyes with an arched eyebrow; first the eyelids are partially closed.]
    Blendshapes (linear shape interpolation models) are perhaps the most commonly employed technique in facial animation practice. A major problem in creating blendshape animation is blendshape interference: adjusting a single blendshape "slider" may degrade the effects obtained with previous slider movements, because the blendshapes have overlapping, non-orthogonal effects. Because models used in commercial practice may have 100 or more individual blendshapes, the interference problem is the subject of considerable manual effort: modelers iteratively resculpt models to reduce interference where possible, and animators must compensate for the interference effects that remain. In this short paper we consider the blendshape interference problem from a linear algebra point of view. We find that while full orthogonality is not desirable, the goal of preserving previous adjustments to the model can be effectively approached by allowing the user to temporarily designate a set of points as representative of the previous (desired) adjustments. We then simply solve for blendshape slider values that mimic the desired new movement while moving these "tagged" points as little as possible. The resulting algorithm is easy to implement and demonstrably reduces cases of blendshape interference found in existing models.
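The "move tagged points as little as possible" idea can be read as a regularized least-squares problem; the sketch below is one plausible formulation, with matrix names and the attenuation weight alpha chosen for illustration rather than taken from the paper.

```python
import numpy as np

# Find slider values w reproducing a desired new movement while
# moving the user-"tagged" points (which encode previously adjusted
# parts of the face) as little as possible.

def attenuated_sliders(B_goal, delta_goal, B_tagged, alpha=10.0):
    """B_goal: (3m x n) blendshape deltas at the points being posed.
    delta_goal: (3m,) desired movement at those points.
    B_tagged: (3k x n) blendshape deltas at the tagged points.
    Minimizes ||B_goal w - delta_goal||^2 + alpha ||B_tagged w||^2."""
    A = np.vstack([B_goal, np.sqrt(alpha) * B_tagged])
    b = np.concatenate([delta_goal, np.zeros(B_tagged.shape[0])])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Toy model: 5 sliders, 4 posed coordinates, 6 tagged coordinates.
rng = np.random.default_rng(0)
w = attenuated_sliders(rng.standard_normal((4, 5)),
                       rng.standard_normal(4),
                       rng.standard_normal((6, 5)))
print(w)
```

Larger alpha preserves the previous adjustments more strictly at the cost of matching the new movement less exactly; alpha = 0 recovers an ordinary unconstrained slider solve.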

    Mesh modification using deformation gradients

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 117-131).
    Computer-generated character animation, where human or anthropomorphic characters are animated to tell a story, holds tremendous potential to enrich education, human communication, perception, and entertainment. However, current animation procedures rely on a time-consuming and difficult process that requires both artistic talent and technical expertise. Despite the tremendous amount of artistry, skill, and time dedicated to the animation process, there are few techniques to help with reuse. Although individual aspects of animation are well explored, there is little work that extends beyond the boundaries of any one area. As a consequence, the same procedure must be followed for each new character, with no opportunity to generalize or reuse technical components. This dissertation describes techniques that ease the animation process by offering opportunities for reuse and a more intuitive animation formulation. A differential specification of arbitrary deformation provides a general representation for adapting deformation to different shapes, computing semantic correspondence between two shapes, and extrapolating natural deformation from a finite set of examples. Deformation transfer adds a general-purpose reuse mechanism to the animation pipeline by transferring any deformation of a source triangle mesh onto a different target mesh. The transfer system uses a correspondence algorithm to build a discrete many-to-many mapping between the source and target triangles that permits transfer between meshes of different topology. Results demonstrate retargeting of both kinematic poses and non-rigid deformations, as well as transfer between characters of different topological and anatomical structure. Mesh-based inverse kinematics extends the idea of traditional skeleton-based inverse kinematics to meshes by allowing the user to pose a mesh via direct manipulation. The user indicates the class of meaningful deformations by supplying examples that can be created automatically with deformation transfer, sculpted, scanned, or produced by any other means. This technique is distinguished from traditional animation methods since it avoids the expensive character setup stage, and from existing mesh editing algorithms since the user retains the freedom to specify the class of meaningful deformations. Results demonstrate an intuitive interface for posing meshes that requires only a small amount of user effort.
    By Robert Walker Sumner, Ph.D.
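A hedged sketch of the per-triangle deformation gradient underlying deformation transfer: adding a fourth vertex along each triangle's normal makes the rest-to-deformed affine map fully determined; variable names and the toy triangles below are illustrative, not the thesis's code.

```python
import numpy as np

# Per-triangle deformation gradient in the style of deformation
# transfer: each triangle is given a fourth vertex offset along its
# normal so the 3x3 affine map between poses is well defined.

def fourth_vertex(v1, v2, v3):
    n = np.cross(v2 - v1, v3 - v1)
    return v1 + n / np.sqrt(np.linalg.norm(n))

def deformation_gradient(tri_rest, tri_deformed):
    """3x3 matrix mapping edge vectors of the rest triangle to the
    deformed one. Each argument is a (3, 3) array of vertex rows."""
    def frame(tri):
        v1, v2, v3 = tri
        v4 = fourth_vertex(v1, v2, v3)
        return np.column_stack([v2 - v1, v3 - v1, v4 - v1])
    return frame(tri_deformed) @ np.linalg.inv(frame(tri_rest))

# Toy example: a unit triangle bent slightly out of plane.
rest = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
bent = np.array([[0.0, 0, 0], [1, 0, 0.2], [0, 1, 0]])
print(deformation_gradient(rest, bent))
```

Transfer then amounts to solving for target vertex positions whose per-triangle gradients match those of the source deformation, subject to the mesh staying connected.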

    Facial Modelling and animation trends in the new millennium: a survey

    M.Sc. (Computer Science)
    Facial modelling and animation is considered one of the most challenging areas in the animation world. Since Parke and Waters's (1996) comprehensive book, no major work encompassing the entire field of facial animation has been published. This thesis covers Parke and Waters's work while also providing a survey of developments in the field since 1996. The thesis describes, analyses, and compares (where applicable) the existing techniques and practices used to produce facial animation. Where applicable, related techniques are grouped in the same chapter and described chronologically, outlining their differences as well as their advantages and disadvantages. The thesis concludes with exploratory work towards a talking head for Northern Sotho: facial animation and lip synchronisation of a fragment of Northern Sotho are performed using software tools primarily designed for English.

    Emotional avatars


    Description-based visualisation of ethnic facial types

    This study reports on the design and evaluation of a tool to assist in the description and visualisation of the human face and the variations in facial shape and proportions characteristic of different ethnicities. A comprehensive set of local shape features (sulci, folds, prominences, slopes, fossae, etc.) constitutes a visually-discernible 'vocabulary' for facial description. Each such feature has one or more continuous-valued attributes, some of which are dimensional and correspond directly to conventional anthropometric distance measurements between facial landmarks, while other attributes capture the shape or topography of the given feature. These attributes, distributed over six facial regions (eyes, nose, etc.), control a morphable model of facial shape that can approximate individual faces as well as the averaged faces of various ethnotypes. Clues to ethnic origin are often more effectively conveyed by shape attributes than by differences in anthropometric measurements, owing to the large individual variation in facial dimensions within each ethnicity. Individual faces of representative ethnicities (European, East Asian, etc.) can then be modelled to establish the range of variation of the attributes (each represented by a corresponding three-dimensional 'basis shape'). The attributes are designed to be quasi-orthogonal, in that the model can assume attribute values in arbitrary combination with minimal undesired interaction; they can thus serve as a set of dimensions or degrees of freedom. The resulting space of variation in facial shape defines an ethnicity face space (EFS), suited to the human appreciation of facial variation across ethnicities, in contrast to a conventional identity face space (IFS) intended for automated detection of individual faces within a sample drawn from a single, homogeneous population. The dimensions comprising an IFS are based on holistic measurements and are usually not interpretable in terms of local facial dimensions or shape (i.e., they are not 'semantic'). In contrast, for an EFS to facilitate our understanding of ethnic variation across faces (as opposed to ethnicity recognition), the underlying dimensions should correspond to visibly-discernible attributes. A shift from quantitative landmark-based anthropometric comparisons to local shape comparisons is demonstrated. Ethnic variation can be visually appreciated by observing the changes in a model through animation, tracked at different levels of complexity: across the whole face, by selected facial region, by isolated feature, and by isolated attribute of a given feature. This study demonstrates that an intuitive feature set, derived by artistically-informed visual observation, can provide a workable descriptive basis. While neither mathematically complete nor strictly orthogonal, the feature space permits close surface fits between the morphable model and face scan data. This study is intended for the human visual appreciation of facial shape, the characteristics of differing ethnicities, and the quantification of those differences. It presumes a basic understanding of standard practices in digital facial animation.
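A minimal sketch of the attribute-driven morphable model described above: a face is composed as the mean shape plus a weighted sum of quasi-orthogonal per-attribute basis shapes. The attribute names, random basis data, and vertex count are hypothetical stand-ins for the study's actual features.

```python
import numpy as np

# Compose a face from a mean shape and per-attribute basis shapes.
# Because the attributes are designed to be quasi-orthogonal, values
# can be combined freely with minimal undesired interaction.

N_VERTS = 5000
mean_shape = np.zeros((N_VERTS, 3))    # stand-in for the mean face

rng = np.random.default_rng(0)
# One 3D basis shape per visually-discernible attribute (stand-ins).
basis = {
    "nasal_bridge_height": rng.standard_normal((N_VERTS, 3)) * 1e-3,
    "epicanthic_fold":     rng.standard_normal((N_VERTS, 3)) * 1e-3,
    "malar_prominence":    rng.standard_normal((N_VERTS, 3)) * 1e-3,
}

def compose_face(attributes):
    """attributes: {name: value}, values spanning the range measured
    from representative ethnotype scans."""
    face = mean_shape.copy()
    for name, value in attributes.items():
        face += value * basis[name]
    return face

# Animate one isolated attribute to visualize its effect on the model.
for t in np.linspace(0.0, 1.0, 5):
    face = compose_face({"nasal_bridge_height": float(t)})
    print(round(float(t), 2), face.shape)
```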