4 research outputs found

    Auto Lip-Sync on 3D Virtual Character Using Blendshape

    Creating a 3D virtual character that can speak like a human is a challenge for animators. The problems are the long production time and the complexity of the many phonemes that make up a sentence. The auto lip-sync technique is used to build a 3D virtual character that can speak like humans in general. The Preston Blair phoneme series is used as the reference for forming the character's visemes. Splitting the speech into phonemes and synchronizing the audio in the 3D software are the final stages of building auto lip-sync for the 3D virtual character.
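
    The workflow described above reduces to mapping each phoneme on the audio timeline to one of the Preston Blair visemes and keying the matching blendshape on the character. The Python sketch below illustrates that idea only; the viseme labels, blendshape weights, and phoneme timings are hypothetical placeholders rather than the assets or tools used in the paper.

from dataclasses import dataclass

# A subset of the Preston Blair phoneme series mapped to viseme labels.
# Both the phoneme grouping and the viseme names are illustrative.
PHONEME_TO_VISEME = {
    "AI": "open",          # a, i
    "E": "wide",
    "O": "round",
    "U": "pucker",
    "MBP": "closed",       # m, b, p
    "FV": "teeth_on_lip",  # f, v
    "L": "tongue_up",
    "WQ": "small_round",
    "rest": "rest",
}

@dataclass
class Keyframe:
    time: float    # position on the audio timeline, in seconds
    weights: dict  # blendshape (viseme) name -> weight in [0, 1]

def lipsync_keyframes(timed_phonemes):
    """Turn (start_time, phoneme) pairs, e.g. from audio segmentation,
    into blendshape keyframes that a 3D package could import."""
    visemes = set(PHONEME_TO_VISEME.values())
    keys = []
    for start, phoneme in timed_phonemes:
        active = PHONEME_TO_VISEME.get(phoneme, "rest")
        # Drive one viseme blendshape fully on per phoneme; all others off.
        keys.append(Keyframe(start, {v: float(v == active) for v in visemes}))
    return keys

if __name__ == "__main__":
    # Hypothetical phoneme timings for a short utterance.
    demo = [(0.00, "MBP"), (0.12, "AI"), (0.30, "O"), (0.45, "rest")]
    for k in lipsync_keyframes(demo):
        print(f"t={k.time:.2f}s -> {max(k.weights, key=k.weights.get)}")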

    A Facial Expression Parameterization by Elastic Surface Model

    We introduce a novel parameterization of facial expressions using an elastic surface model. The elastic surface model has been used as a deformation tool, especially for nonrigid organic objects. The expression parameters are either retrieved from existing articulated face models or obtained indirectly by manipulating facial muscles. The obtained parameters can be applied to target face models dissimilar to the source model to create novel expressions. Because of the limited number of control points, animation data created with this parameterization requires less storage without reducing the range of deformation it provides. The proposed method can be used in several ways: (1) creating a novel facial expression from scratch, (2) parameterizing existing articulation data, (3) parameterizing indirectly through muscle construction, and (4) providing a new animation data format that requires less storage.
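
    The control-point idea in this abstract can be sketched as follows: an expression is stored as displacements at a few control vertices, and the rest of the mesh follows through a smoothness solve. The sketch below uses a soft-constrained least-squares system with a uniform graph Laplacian as a simplified stand-in for the elastic surface model; the toy mesh and function names are illustrative assumptions, not the paper's formulation.

import numpy as np

def uniform_laplacian(n_verts, edges):
    """Graph Laplacian L with degree on the diagonal and -1 for each edge."""
    L = np.zeros((n_verts, n_verts))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def apply_expression(rest_verts, edges, control_idx, control_disp, w=1e3):
    """Propagate sparse control-point displacements to every vertex by a
    least-squares solve balancing Laplacian smoothness against heavily
    weighted constraints at the control vertices."""
    n = len(rest_verts)
    L = uniform_laplacian(n, edges)
    C = np.zeros((len(control_idx), n))
    for row, idx in enumerate(control_idx):
        C[row, idx] = w
    A = np.vstack([L, C])                        # smoothness + constraints
    rhs = np.vstack([np.zeros((n, 3)), w * control_disp])
    d, *_ = np.linalg.lstsq(A, rhs, rcond=None)  # per-vertex displacement field
    return rest_verts + d

if __name__ == "__main__":
    # A tiny 5-vertex strip standing in for a face mesh.
    verts = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]])
    edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
    # "Expression parameter": lift vertex 2 by 0.5 while pinning the ends.
    disp = np.array([[0., 0, 0], [0, 0, .5], [0, 0, 0]])
    print(np.round(apply_expression(verts, edges, [0, 2, 4], disp), 3))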

    Synthesis of Realistic Facial Expressions Based on Feature-Point Clusters Using a Radial Basis Function

    The increasing demand for animated productions from production houses and television stations calls for a significant change in the animation production process. Research on computer facial animation, particularly on rigging and expression transfer, is growing. The traditional approach to facial animation depends heavily on the animator to create the key poses and the sequences of facial-expression movement, so facial animation produced for one face cannot be reused directly on another face. Automating the formation of weighted regions on a 3D face model with a clustering approach, together with a motion-transfer process that adapts to the face shape, is therefore essential to shorten the animation production process. The principles of animation serve as a guideline for creating expressive, lively facial animation. Realistic facial expressions can be synthesized on the basis of feature-point clusters using a radial basis function. The novelty of this research is the automatic formation of motion regions on the face by clustering around the feature-point locations, combined with retargeting using a radial basis function to synthesize realistic facial expressions. Based on all experimental stages, it can be concluded that this synthesis can be applied to various 3D face models and adapts to the face shape of each 3D model that has the same number of feature markers. The visual-perception evaluation shows that the surprise expression is the most easily recognized, at 89.32%, followed by happiness at 84.63%, sadness at 77.32%, anger at 76.64%, disgust at 76.45%, and fear at 76.44%. The average percentage of easily recognized expressions is 80.13%.
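
    As a rough illustration of the retargeting step, the sketch below spreads displacements measured at a few feature points over target-mesh vertices with a Gaussian radial basis function interpolator. The kernel choice, its width, and the toy coordinates are illustrative assumptions rather than the configuration used in this research.

import numpy as np

def rbf_weights(feature_pts, feature_disp, sigma=1.0):
    """Solve Phi w = d so the interpolant reproduces each feature-point displacement."""
    d2 = np.sum((feature_pts[:, None, :] - feature_pts[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel matrix between feature points
    return np.linalg.solve(phi, feature_disp)

def retarget(target_verts, feature_pts, feature_disp, sigma=1.0):
    """Displace every target vertex by the RBF blend of feature-point motions."""
    w = rbf_weights(feature_pts, feature_disp, sigma)
    d2 = np.sum((target_verts[:, None, :] - feature_pts[None, :, :]) ** 2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))   # vertex-to-feature-point kernels
    return target_verts + phi @ w

if __name__ == "__main__":
    # Three feature points (brow and mouth corners as stand-ins) ...
    fps = np.array([[0.0, 1.0, 0.0], [-0.5, -1.0, 0.0], [0.5, -1.0, 0.0]])
    # ... whose captured displacements define a smile-like motion.
    disp = np.array([[0.0, 0.05, 0.0], [-0.1, 0.1, 0.0], [0.1, 0.1, 0.0]])
    # A few target-mesh vertices near those features.
    tgt = np.array([[0.0, 0.9, 0.1], [-0.4, -0.9, 0.1], [0.4, -0.9, 0.1]])
    print(np.round(retarget(tgt, fps, disp, sigma=0.8), 3))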