HeadOn: Real-time Reenactment of Human Portrait Videos
We propose HeadOn, the first real-time source-to-target reenactment approach
for complete human portrait videos that enables transfer of torso and head
motion, face expression, and eye gaze. Given a short RGB-D video of the target
actor, we automatically construct a personalized geometry proxy that embeds a
parametric head, eye, and kinematic torso model. A novel real-time reenactment
algorithm employs this proxy to photo-realistically map the captured motion
from the source actor to the target actor. On top of the coarse geometric
proxy, we propose a video-based rendering technique that composites the
modified target portrait video via view- and pose-dependent texturing, and
creates photo-realistic imagery of the target actor under novel torso and head
poses, facial expressions, and gaze directions. To this end, we propose a robust method for tracking the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos.
Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'18
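The paper does not include code, but the core idea of view- and pose-dependent texturing, compositing the output from the captured textures whose poses are most similar to the target pose, can be illustrated with a minimal NumPy sketch. All function names and the Gaussian weighting scheme here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pose_distance(p, q):
    """Euclidean distance between two pose vectors
    (e.g. stacked head/torso rotation and translation parameters)."""
    return np.linalg.norm(p - q)

def blend_textures(target_pose, captured_poses, captured_textures, k=4, sigma=0.1):
    """Composite a texture for `target_pose` from the k nearest captured frames.

    captured_poses:    (N, D) pose vectors of the recorded target-actor frames
    captured_textures: (N, H, W, 3) textures unprojected from those frames
    Returns an (H, W, 3) blended texture.
    """
    dists = np.array([pose_distance(target_pose, p) for p in captured_poses])
    nearest = np.argsort(dists)[:k]                       # k most similar poses
    w = np.exp(-dists[nearest] ** 2 / (2 * sigma ** 2))   # Gaussian falloff with pose distance
    w /= w.sum()                                          # normalize blend weights
    return np.tensordot(w, captured_textures[nearest], axes=1)
```

In the actual system the blending would additionally account for per-texel visibility and view direction; the sketch only captures the pose-similarity weighting.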
Capture, Learning, and Synthesis of 3D Speaking Styles
Audio-driven 3D facial animation has been widely explored, but achieving
realistic, human-like performance is still unsolved. This is due to the lack of
available 3D datasets, models, and standard evaluation metrics. To address
this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans
captured at 60 fps and synchronized audio from 12 speakers. We then train a
neural network on our dataset that factors identity from facial motion. The
learned model, VOCA (Voice Operated Character Animation), takes any speech
signal as input - even speech in languages other than English - and
realistically animates a wide range of adult faces. Conditioning on subject
labels during training allows the model to learn a variety of realistic
speaking styles. VOCA also provides animator controls to alter speaking style,
identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball
rotations) during animation. To our knowledge, VOCA is the only realistic 3D
facial animation model that is readily applicable to unseen subjects without
retargeting. This makes VOCA suitable for tasks like in-game video, virtual
reality avatars, or any scenario in which the speaker, speech, or language is
not known in advance. We make the dataset and model available for research
purposes at http://voca.is.tue.mpg.de.
Comment: To appear in CVPR 2019
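As a rough illustration of the architecture described above, a network that maps a window of speech features plus a one-hot subject label to per-vertex offsets over a template mesh, consider the following PyTorch sketch. Layer sizes and names are assumptions made for illustration; the model released at http://voca.is.tue.mpg.de is the authoritative reference:

```python
import torch
import torch.nn as nn

class VocaStyleModel(nn.Module):
    """Speech features + subject label -> per-vertex offsets on a template mesh.
    Dimensions are illustrative, not the published architecture."""

    def __init__(self, audio_dim=29, window=16, n_subjects=12, n_vertices=5023):
        super().__init__()
        self.n_vertices = n_vertices
        # Encoder: a temporal window of speech features, conditioned on identity.
        self.encoder = nn.Sequential(
            nn.Linear(audio_dim * window + n_subjects, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
        )
        # Decoder: latent code -> 3D displacement for every vertex.
        self.decoder = nn.Linear(64, n_vertices * 3)

    def forward(self, audio_window, subject_onehot, template):
        # audio_window: (B, window, audio_dim), subject_onehot: (B, n_subjects)
        # template:     (B, n_vertices, 3) neutral face in the subject's shape
        x = torch.cat([audio_window.flatten(1), subject_onehot], dim=1)
        offsets = self.decoder(self.encoder(x)).view(-1, self.n_vertices, 3)
        return template + offsets  # animated mesh for this frame
```

Conditioning on the subject one-hot during training is what lets a single network capture multiple speaking styles, which can then be mixed or swapped at inference time.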
Dynamic Neural Portraits
We present Dynamic Neural Portraits, a novel approach to the problem of
full-head reenactment. Our method generates photo-realistic video portraits by
explicitly controlling head pose, facial expressions and eye gaze. Our proposed
architecture is different from existing methods that rely on GAN-based
image-to-image translation networks for transforming renderings of 3D faces
into photo-realistic images. Instead, we build our system upon a 2D
coordinate-based MLP with controllable dynamics. Our intuition to adopt a
2D-based representation, as opposed to recent 3D NeRF-like systems, stems from
the fact that video portraits are captured by monocular, stationary cameras, so only a single viewpoint of the scene is available. We primarily condition our generative model on expression blendshapes; nonetheless, we show that our system can also be driven successfully by audio features. Our
experiments demonstrate that the proposed method is 270 times faster than
recent NeRF-based reenactment methods, with our networks achieving speeds of 24
fps for resolutions up to 1024 x 1024, while outperforming prior works in terms
of visual quality.
Comment: In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023
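A minimal sketch of the kind of 2D coordinate-based MLP the abstract describes, pixel coordinates passed through a positional encoding and concatenated with a per-frame conditioning vector (expression blendshapes, head pose, gaze), might look as follows in PyTorch. All sizes and names are hypothetical:

```python
import torch
import torch.nn as nn

def positional_encoding(xy, n_freqs=10):
    """Map 2D pixel coordinates to Fourier features, as in NeRF-style MLPs."""
    freqs = 2.0 ** torch.arange(n_freqs, dtype=torch.float32) * torch.pi
    angles = xy[..., None] * freqs                       # (..., 2, n_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

class CoordPortraitMLP(nn.Module):
    """2D coordinate-based MLP: (pixel position, driving parameters) -> RGB.
    Sizes are illustrative; the conditioning vector would hold expression
    blendshape coefficients, head pose, and gaze."""

    def __init__(self, n_freqs=10, cond_dim=64, hidden=256):
        super().__init__()
        in_dim = 2 * 2 * n_freqs + cond_dim              # sin+cos per coordinate
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),          # RGB in [0, 1]
        )
        self.n_freqs = n_freqs

    def forward(self, xy, cond):
        # xy: (N, 2) pixel coords in [-1, 1]; cond: (N, cond_dim) per-frame params
        feat = positional_encoding(xy, self.n_freqs)
        return self.net(torch.cat([feat, cond], dim=-1))
```

Because the network queries 2D pixel positions directly rather than marching rays through a 3D volume, a single forward pass per pixel suffices, which is consistent with the large speedup over NeRF-based methods reported above.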
3D Human Face Reconstruction and 2D Appearance Synthesis
3D human face reconstruction has been an active research area for decades due to its wide range of applications, such as animation, recognition, and 3D-driven appearance synthesis. Although commodity depth sensors have become widely available in recent years, image-based face reconstruction remains especially valuable, as images are much easier to access and store.
In this dissertation, we first propose three image-based face reconstruction approaches, each targeting a different assumption about the input.
In the first approach, face geometry is extracted from multiple key frames of a video sequence showing different head poses. This approach requires a calibrated camera.
As the first approach is limited to videos, our second approach focuses on a single image. It also refines the geometry with fine-grained detail recovered from shading cues, for which we propose a novel albedo estimation and linear optimization algorithm (a sketch of the underlying shading model follows below).
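The dissertation's exact albedo and lighting formulation is not reproduced here, but shading-based refinement of this kind commonly builds on a linear Lambertian model with low-order spherical-harmonics lighting, which can be solved in closed form. A minimal NumPy sketch under that standard assumption:

```python
import numpy as np

def sh_basis(normals):
    """First 9 spherical-harmonics basis functions evaluated at unit normals.
    normals: (N, 3) -> (N, 9). Constant factors are omitted for brevity; they
    get absorbed into the estimated lighting coefficients."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([np.ones_like(x), x, y, z,
                     x * y, x * z, y * z,
                     x ** 2 - y ** 2, 3 * z ** 2 - 1], axis=1)

def estimate_lighting_and_albedo(intensities, normals):
    """Linear shading model: I = albedo * (B(n) @ l).
    Assuming roughly uniform albedo, solve for the 9 lighting coefficients l
    in the least-squares sense, then recover per-pixel albedo as a residual."""
    B = sh_basis(normals)                                # (N, 9) basis matrix
    l, *_ = np.linalg.lstsq(B, intensities, rcond=None)  # closed-form lighting fit
    shading = B @ l
    albedo = intensities / np.clip(shading, 1e-6, None)  # avoid divide-by-zero
    return l, albedo
```

Once lighting and albedo are fixed, the shading residual per pixel constrains surface normals, which is what allows fine geometric detail to be added on top of a coarse reconstruction.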
In the third approach, we further relax the constraints on the input to arbitrary in-the-wild images. This approach robustly reconstructs high-quality models even under extreme expressions and large poses.
We then explore the applicability of our face reconstructions in four interesting applications: video face beautification, generating personalized facial blendshapes from image sequences, face video stylization, and video face replacement. We demonstrate the great potential of our reconstruction approaches in these real-world applications. In particular, with the recent surge of interest in VR/AR, it is increasingly common to see people wearing head-mounted displays (HMDs). However, the large occlusion of the face is a major obstacle to face-to-face communication. In a further application, we therefore explore hardware/software solutions for synthesizing the face image in the presence of an HMD. We design two setups (experimental and mobile) that integrate two near-IR cameras and one color camera to solve this problem. With our algorithm and prototype, we achieve photo-realistic results.
We further propose a deep neural network that treats HMD removal as a face inpainting problem. This approach does not require special hardware and runs in real time with satisfying results.
CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer
Reconstructing personalized animatable head avatars has significant
implications in the fields of AR/VR. Existing methods for achieving explicit
face control of 3D Morphable Models (3DMM) typically rely on multi-view images
or videos of a single subject, making the reconstruction process complex.
Additionally, the traditional rendering pipeline is time-consuming, limiting
real-time animation possibilities. In this paper, we introduce CVTHead, a novel
approach that generates controllable neural head avatars from a single
reference image using point-based neural rendering. CVTHead considers the
sparse vertices of the mesh as the point set and employs the proposed
Vertex-feature Transformer to learn local feature descriptors for each vertex.
This enables the modeling of long-range dependencies among all the vertices.
Experimental results on the VoxCeleb dataset demonstrate that CVTHead achieves
comparable performance to state-of-the-art graphics-based methods. Moreover, it
enables efficient rendering of novel human heads with various expressions, head
poses, and camera views. These attributes can be explicitly controlled using
the coefficients of 3DMMs, facilitating versatile and realistic animation in
real-time scenarios.
Comment: WACV 2024
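As a rough sketch of the vertex-feature idea, treating mesh vertices as a token set so that self-attention provides the long-range dependencies mentioned in the abstract, the following PyTorch module is illustrative only. Layer sizes and the position-only tokenization are assumptions; the actual method also draws appearance features from the single reference image:

```python
import torch
import torch.nn as nn

class VertexFeatureTransformer(nn.Module):
    """Mesh vertices as tokens -> one learned feature descriptor per vertex.
    Self-attention lets every vertex attend to every other one, modeling
    long-range dependencies across the whole head."""

    def __init__(self, d_model=128, n_heads=4, n_layers=4, feat_dim=32):
        super().__init__()
        self.embed = nn.Linear(3, d_model)        # vertex position -> token
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, feat_dim)  # per-vertex descriptor

    def forward(self, vertices):
        # vertices: (B, V, 3) mesh posed by 3DMM coefficients
        tokens = self.embed(vertices)
        tokens = self.encoder(tokens)             # global attention over vertices
        return self.head(tokens)                  # (B, V, feat_dim) point features
```

The resulting per-vertex features would then be splatted into image space by a point-based neural renderer; that rendering stage is omitted from the sketch. Driving the input vertices with 3DMM expression and pose coefficients is what gives the explicit control described above.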