THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS
Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed, subtle facial expressions; the process of rigging a face; and the transfer of an expression from one person to another. This dissertation focuses on these three aspects.
A system for freely designing and creating detailed, dynamic, animated facial expressions is developed. The presented pattern functions produce detailed, animated facial expressions. The system produces realistic results with fast performance, and allows users to manipulate expressions directly and see immediate results.
Two unique methods for generating real-time, vivid, animated tears have been developed and implemented. One generates a teardrop that continually changes shape as it drips down the face. The other generates a shedding tear, a tear that connects seamlessly with the skin as it flows along the surface of the face yet remains an individual object. Both methods broaden CG capabilities and increase the realism of facial expressions.
A new method is also developed to automatically place the bones on facial/head models, speeding up the rigging of a human face. To accomplish this, the vertices that describe the face/head are grouped according to the relationships between the parts of the face/head. The average distance between pairs of vertices is used to place the head bones, and to set the facial bones at multiple densities, the mean position of the vertices in each group is computed. The time saved with this method is significant.
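As a hedged illustration of the grouping step, the sketch below places one bone at the mean position of each named vertex group. The mesh layout, group names, and function are assumptions for illustration, not taken from the dissertation.

```python
# Illustrative sketch: one bone is placed at the mean position of each
# facial vertex group. Data layout and names are assumptions, not the
# dissertation's implementation.
import numpy as np

def place_bones(vertices: np.ndarray, groups: dict) -> dict:
    """Place one bone per vertex group at the group's mean position.

    vertices : (N, 3) array of mesh vertex positions.
    groups   : mapping from region name (e.g. "jaw", "brow_l") to the
               indices of the vertices belonging to that region.
    """
    return {name: vertices[idx].mean(axis=0) for name, idx in groups.items()}

# Toy usage on a four-vertex "mesh" with two regions.
verts = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]], dtype=float)
bones = place_bones(verts, {"jaw": [0, 1], "brow_l": [2, 3]})
print(bones["jaw"])  # -> [1. 0. 0.], the centroid of the "jaw" vertices
```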
A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach transforms the source model into the target model, so that the target has the same topology as the source. Displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
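The displacement-vector step lends itself to a short sketch. Assuming the target mesh has already been brought into vertex correspondence with the source (same topology and vertex ordering), the per-vertex motion of the source can be added to the target; the constraint on the spatial relationships of the mapped vertices is omitted here.

```python
# Minimal sketch of displacement-based expression transfer. All variable
# names are assumptions; the final constraining step from the dissertation
# is not shown.
import numpy as np

def transfer_expression(src_neutral, src_expressive, tgt_neutral):
    """All arguments are (N, 3) arrays of corresponding vertex positions."""
    displacement = src_expressive - src_neutral  # per-vertex motion of the source
    return tgt_neutral + displacement            # apply the same motion to the target
```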
ICface: Interpretable and Controllable Face Reenactment Using GANs
This paper presents a generic face animator that is able to control the pose and expressions of a given face image. The animation is driven by human-interpretable control signals consisting of head pose angles and Action Unit (AU) values. The control information can be obtained from multiple sources, including external driving videos and manual controls. Due to the interpretable nature of the driving signal, one can easily mix information between multiple sources (e.g. pose from one image and expression from another) and apply selective post-production editing. The proposed face animator is implemented as a two-stage neural network model that is learned in a self-supervised manner using a large video collection. The proposed Interpretable and Controllable face reenactment network (ICface) is compared to state-of-the-art neural network-based face animation techniques in multiple tasks. The results indicate that ICface produces better visual quality while being more versatile than most of the comparison methods. The introduced model could provide a lightweight and easy-to-use tool for a multitude of advanced image and video editing tasks.
Comment: Accepted in WACV-2020
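Because the driving signal is just head pose angles concatenated with AU intensities, mixing sources reduces to splicing vectors. The sketch below illustrates that idea; the exact signal layout and dimensionality are assumptions, not the paper's specification.

```python
# Hedged sketch of mixing an interpretable driving signal from two sources,
# as the abstract describes: pose from image A, expression from image B.
# Vector layout and sizes are illustrative assumptions.
import numpy as np

pose_from_a = np.array([10.0, -5.0, 0.0])     # yaw, pitch, roll (degrees)
aus_from_b = np.array([0.8, 0.0, 0.3, 0.6])   # AU intensities from another image

driving_signal = np.concatenate([pose_from_a, aus_from_b])
# The mixed signal would then be fed to the animator network to render the
# target face with A's head pose and B's expression.
```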
Facial Expression Simulator Control System using OpenGL Graphical Simulator
Verbal communication uses language or voice, whereas nonverbal communication interacts through gestures, one of which is showing facial expressions. We propose an implementation of controls based on the facial expressions of the human face. The obtained expression information is translated to a device simulator built with the OpenGL graphics software, serving as an indication tool that makes it easy to analyze a person's emotional character through a computer. In the implementation it was found that the mechanism of the humanoid robot head used in nonverbal interaction has 8 DOF (degrees of freedom), formed from combinations of the servo motors driving the eyebrows, eyes, eyelids, and mouth, in order to show the anger, disgust, happiness, surprise, sadness, and fear facial expressions.
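As a hedged sketch of how such a controller might map a recognized expression onto the eight servo channels, the snippet below pairs each expression with illustrative servo angles; the channel layout and angle values are assumptions, not taken from the paper.

```python
# Illustrative mapping from a recognized expression to targets for an
# 8-DOF head (eyebrows, eyes, eyelids, mouth). Angles are assumed values.
EXPRESSION_POSES = {
    # channels: [brow_l, brow_r, eye_pan, eye_tilt, lid_l, lid_r, jaw, lip]
    "happiness": [20, 20, 0, 5, 70, 70, 30, 40],
    "anger":     [-25, -25, 0, -5, 40, 40, 10, -20],
    "surprise":  [35, 35, 0, 10, 90, 90, 50, 10],
}

def pose_for(expression: str) -> list:
    """Return the eight servo angles (degrees) for a supported expression."""
    return EXPRESSION_POSES[expression]
```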
Empathic agents to reduce user frustration: The effects of varying agent characteristics
There is now growing interest in the development of computer systems which respond to users' emotion and affect. We report three small-scale studies (with a total of 42 participants) which investigate the extent to which affective agents, using strategies derived from human-human interaction, can reduce user frustration within human-computer interaction. The results confirm the previous findings of Klein et al. (2002) that such interventions can be effective. We also obtained results suggesting that embodied agents can be more effective at reducing frustration than non-embodied agents, and that female embodied agents may be more effective than male embodied agents. These results are discussed in light of the existing research literature.
HeadOn: Real-time Reenactment of Human Portrait Videos
We propose HeadOn, the first real-time source-to-target reenactment approach
for complete human portrait videos that enables transfer of torso and head
motion, face expression, and eye gaze. Given a short RGB-D video of the target
actor, we automatically construct a personalized geometry proxy that embeds a
parametric head, eye, and kinematic torso model. A novel real-time reenactment
algorithm employs this proxy to photo-realistically map the captured motion
from the source actor to the target actor. On top of the coarse geometric
proxy, we propose a video-based rendering technique that composites the
modified target portrait video via view- and pose-dependent texturing, and
creates photo-realistic imagery of the target actor under novel torso and head
poses, facial expressions, and gaze directions. To this end, we propose a
robust tracking of the face and torso of the source actor. We extensively
evaluate our approach and show significant improvements in enabling much
greater flexibility in creating realistic reenacted output videos.Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at
Siggraph'1
Linear Facial Expression Transfer With Active Appearance Models
The issue of transferring facial expressions from one person's face to another's has been of interest to the movie industry and the computer graphics community for quite some time. In recent years, with the proliferation of online image and video collections and web applications, such as Google Street View, the question of preserving privacy through face de-identification has gained interest in the computer vision community. In this paper, we focus on the problem of real-time dynamic facial expression transfer using an Active Appearance Model framework. We provide a theoretical foundation for a generalisation of two well-known expression transfer methods and demonstrate the improved visual quality of the proposed linear extrapolation transfer method on examples of face swapping and expression transfer using the AVOZES data corpus. Realistic talking faces can be generated in real time at low computational cost.
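The linear extrapolation idea can be sketched compactly: in the AAM's parameter space, the offset between a source face's expressive and neutral parameters is added to the target's neutral parameters. The general form below is an assumed illustration of such a transfer; the paper's exact formulation may differ.

```python
# Sketch of linear expression transfer in AAM parameter space. This shows
# the general form of an extrapolation-style transfer, not necessarily the
# paper's precise method.
import numpy as np

def linear_transfer(p_src_neutral, p_src_expr, p_tgt_neutral):
    """All arguments are 1-D AAM shape/appearance parameter vectors."""
    return p_tgt_neutral + (p_src_expr - p_src_neutral)

# The resulting parameter vector is synthesized back to an image by the
# AAM, which is what keeps the method cheap enough for real time.
```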
Agents, Believability and Embodiment in Advanced Learning Environments
On the World Wide Web we see a growing number of general HCI interfaces, interfaces to educational or entertainment systems, interfaces to professional environments, etc., where an animated face, a cartoon character, or a human-like virtual agent has the task of assisting the user, engaging the user in conversation, or educating the user. What can be said about the effects a human-like agent has on a student's performance? We discuss agents, their intelligence, embodiment, and interaction modalities. In particular, we introduce viewpoints and questions about the roles embodied agents can play in educational environments.