4,740 research outputs found

    3D Space Transformation in Avatar Facial Expression Animation Based on Radial Basis Functions

    Get PDF
    One technique for creating facial animation on avatars is to reuse existing animation, either from other avatar animations or from motion data obtained through facial motion capture. This research focuses on 3D space transformation for creating facial animation on avatars in games or animated films. The transformation maps motion capture data onto 3D avatar face models, using three face models: a human face model, a swan face model, and an Anoman face model. The motion capture data is transferred according to the feature points of each face model, so that the feature points of the target model follow an animation matching the motion capture data. Of the three target face models, registration onto the human face model yields an average standard deviation of 0.0510, the swan face model 0.0034, and the Anoman face model 0.0024. With this technique, facial expression animation for an avatar can be produced more quickly because facial motion capture data is reused.
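
    As a minimal illustration of the retargeting idea (not the paper's actual pipeline), the sketch below fits a thin-plate-spline RBF warp between neutral-pose feature points of a source face and a target avatar face, then applies the warp to every motion-capture frame. The point counts, names, and random data are illustrative assumptions.

```python
# A minimal sketch of RBF-based retargeting of facial motion-capture
# markers onto a 3D avatar face model; all names, shapes, and data
# here are illustrative assumptions, not the paper's pipeline.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Neutral-pose correspondences: one 3D mocap marker for each feature
# point on the avatar face model.
src_neutral = np.random.rand(30, 3)   # mocap marker positions (neutral)
dst_neutral = np.random.rand(30, 3)   # avatar feature points (neutral)

# Fit a smooth R^3 -> R^3 warp mapping the source face space onto the
# avatar's face space.
warp = RBFInterpolator(src_neutral, dst_neutral, kernel='thin_plate_spline')

# Retarget every captured frame: warp the animated marker positions
# into the avatar's space to drive its feature points.
mocap_frames = src_neutral + 0.01 * np.random.randn(100, 30, 3)  # (frames, markers, xyz)
avatar_frames = np.stack([warp(frame) for frame in mocap_frames])
print(avatar_frames.shape)  # (100, 30, 3)
```

    The per-frame deviation between warped and reference feature points could then be summarized by a standard deviation, which is how the abstract reports its results.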

    Real-time rendering facial skin colours to enhance realism of virtual human

    Get PDF
    Research on facial animation has grown very fast and become more realistic in terms of 3D facial data, since laser scanning and advanced 3D tools support the creation of complex facial models. However, those approaches are still lacking in terms of facial expression driven by emotional state. Facial skin colour is one parameter that increases the realism of facial expression, since it is closely related to the emotions occurring within a human. This research provides a new technique for facial animation that changes the avatar's facial skin colour based on linear interpolation, building on previous works (Jung et al., 2009; Kyu-Ho and Tae-Yong, 2008; Nijdam, 2006). It also describes facial animation and the emotions related to facial skin changes, such as blushing, anger, or even sadness. The generated colours are comparable to real human expressions; furthermore, the technique enhances the appearance of the virtual human's facial expression.
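
    The core operation is a simple linear interpolation (lerp) between a neutral skin colour and an emotion tint; a minimal sketch follows, with the colour values and intensity parameter as illustrative assumptions rather than values from the paper.

```python
# A minimal sketch of emotion-driven skin-colour change via linear
# interpolation; colours and intensities are illustrative assumptions.
import numpy as np

def lerp_skin_colour(base_rgb, emotion_rgb, intensity):
    """Blend a neutral skin colour toward an emotion colour.

    intensity in [0, 1]: 0 = neutral skin, 1 = full emotion tint.
    """
    base = np.asarray(base_rgb, dtype=float)
    target = np.asarray(emotion_rgb, dtype=float)
    return (1.0 - intensity) * base + intensity * target

neutral = (224, 172, 145)   # a generic skin tone (hypothetical)
blush = (230, 100, 100)     # reddish tint for blushing or anger (hypothetical)
for t in (0.0, 0.5, 1.0):
    print(t, lerp_skin_colour(neutral, blush, t).round(1))
```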

    HeadOn: Real-time Reenactment of Human Portrait Videos

    Get PDF
    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos.
    Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at SIGGRAPH 2018
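
    HeadOn's renderer is not reproduced here, but as a generic sketch of the view- and pose-dependent texturing idea, one can composite a novel frame by blending the reference video frames whose recorded poses lie closest to the requested pose. Everything below (shapes, inverse-distance weights) is an assumption for illustration, not the paper's method.

```python
# A generic sketch of pose-dependent texturing (not HeadOn's actual
# renderer): blend the k reference frames whose recorded poses are
# closest to the requested novel pose.
import numpy as np

def pose_dependent_blend(ref_frames, ref_poses, query_pose, k=3, eps=1e-6):
    """ref_frames: (N, H, W, 3) images; ref_poses: (N, D) pose vectors."""
    d = np.linalg.norm(ref_poses - query_pose, axis=1)  # pose distances
    idx = np.argsort(d)[:k]                             # k nearest poses
    w = 1.0 / (d[idx] + eps)                            # inverse-distance weights
    w /= w.sum()
    return np.tensordot(w, ref_frames[idx].astype(float), axes=1)  # weighted average

frames = np.random.rand(50, 4, 4, 3)   # tiny stand-in "video"
poses = np.random.rand(50, 6)          # e.g. head yaw/pitch/roll + torso parameters
novel = pose_dependent_blend(frames, poses, np.random.rand(6))
print(novel.shape)  # (4, 4, 3)
```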

    CGAMES'2009

    Get PDF

    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    No full text
    Face performance capture and reenactment techniques use multiple cameras and sensors positioned at a distance from the face, or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Through careful adversarial training on parameter-space synthetic renderings, a videorealistic animation is produced. Our problem is challenging, as the human visual system is sensitive to the smallest face irregularities that could occur in the final results, and this sensitivity is even stronger for video results. Our solution is trained in a pre-processing stage in a supervised manner, without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds, and movements, handles people of different ethnicities, and can operate in real time.
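
    As a hedged sketch of the first stage described above, projecting the input image into a low-dimensional space of expression parameters, the snippet below regresses a parameter vector from an RGB image with a small convolutional encoder; the layer sizes and parameter count are illustrative assumptions, not EgoFace's published architecture.

```python
# A minimal sketch of regressing facial-expression parameters from a
# single egocentric RGB image; architecture details are assumptions.
import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    def __init__(self, n_params=64):   # 64 parameters is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling
        )
        self.head = nn.Linear(64, n_params)  # expression parameter vector

    def forward(self, x):              # x: (B, 3, H, W)
        return self.head(self.features(x).flatten(1))

enc = ExpressionEncoder()
params = enc(torch.randn(1, 3, 128, 128))
print(params.shape)  # torch.Size([1, 64])
```

    A renderer conditioned on such parameters, trained adversarially as the abstract describes, would then map them to videorealistic output frames.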

    Performance of grassed swale as stormwater quantity control in lowland area

    Get PDF
    A grassed swale is a vegetated open channel designed to attenuate stormwater through infiltration and to convey runoff into nearby water bodies, thus reducing peak flows and minimizing the causes of flooding. UTHM is flood-prone because it is located in a lowland area with a high groundwater level and low infiltration rates. The aim of this study is to assess the performance of grassed swales as a stormwater quantity control at UTHM. Flow depths and velocities of the swales were measured according to the Six-Tenths Depth Method shortly after a rainfall event. Flow discharges of the swales (Qswale) were evaluated by the Mean-Section Method to determine the variation of Manning's roughness coefficient (ncalculate), which ranges between 0.075 and 0.122 due to tall grass and channel irregularity. Based on the values of Qswale between swale sections, flow attenuation reaches up to 54%. As for flow conveyance, Qswale was also determined by Manning's equation in two forms: Qcalculate, evaluated using ncalculate, and Qdesign, evaluated using the roughness coefficient recommended by MSMA (ndesign); both were compared with the discharges of the drainage areas (Qpeak), evaluated by the Rational Method with a 10-year ARI. At every study site, Qdesign exceeds Qpeak by up to 59%, whereas Qcalculate exceeds Qpeak only at certain sites, by up to 14%. Qdesign also exceeds Qcalculate by up to 52%, which shows that the roughness coefficients considered in MSMA yield a better estimate of swale performance. The study also found that the characteristics of the studied swales are comparable to the design considerations in MSMA. Based on these findings, grassed swales have potential for collecting, attenuating, and conveying stormwater, and are suitable as one of the best management practices for preventing flash floods on the UTHM campus.
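
    A minimal sketch of the two discharge estimates the study compares: swale capacity from Manning's equation, Q = (1/n) A R^(2/3) S^(1/2), and catchment peak flow from the Rational Method in its metric form, Q = C i A / 360 (i in mm/h, A in ha). All input values, including the assumed design roughness, are illustrative placeholders rather than the study's measurements.

```python
# A minimal sketch comparing swale capacity (Manning's equation) with
# catchment peak flow (Rational Method); all inputs are placeholders.

def manning_q(n, area_m2, wetted_perimeter_m, slope):
    """Open-channel discharge (m^3/s): Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    r = area_m2 / wetted_perimeter_m   # hydraulic radius R = A / P
    return (1.0 / n) * area_m2 * r ** (2.0 / 3.0) * slope ** 0.5

def rational_q(c, i_mm_per_hr, area_ha):
    """Peak runoff (m^3/s): Q = C * i * A / 360 (i in mm/h, A in ha)."""
    return c * i_mm_per_hr * area_ha / 360.0

# Capacity with a measured roughness vs an assumed design roughness.
q_calc = manning_q(n=0.100, area_m2=0.6, wetted_perimeter_m=2.4, slope=0.002)
q_design = manning_q(n=0.030, area_m2=0.6, wetted_perimeter_m=2.4, slope=0.002)
q_peak = rational_q(c=0.6, i_mm_per_hr=120.0, area_ha=1.0)
print(f"Qcalculate={q_calc:.3f}  Qdesign={q_design:.3f}  Qpeak={q_peak:.3f} m3/s")
```

    With these placeholder inputs, the higher measured roughness gives Qcalculate below Qdesign, mirroring the qualitative relationship the abstract reports.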