Towards a complete 3D morphable model of the human head
Three-dimensional Morphable Models (3DMMs) are powerful statistical tools for
representing the 3D shapes and textures of an object class. Here we present the
most complete 3DMM of the human head to date that includes face, cranium, ears,
eyes, teeth and tongue. To achieve this, we propose two methods for combining
existing 3DMMs of different, overlapping head parts: (i) using a regressor to
complete missing parts of one model from the other; (ii) using the Gaussian
process framework to blend covariance matrices from multiple models. Thus we
build a new combined face-and-head shape model that blends the variability and
facial detail of an existing face model (the LSFM) with the full head modelling
capability of an existing head model (the LYHM). Then we construct and fuse a
highly-detailed ear model to extend the variation of the ear shape. Eye and eye
region models are incorporated into the head model, along with basic models of
the teeth, tongue and inner mouth cavity. The new model achieves
state-of-the-art performance. We use our model to reconstruct full head
representations from single, unconstrained images, allowing us to parameterize
craniofacial shape and texture, along with ear shape, eye gaze and eye color.
Comment: 18 pages, 18 figures, submitted to Transactions on Pattern Analysis
and Machine Intelligence (TPAMI) on the 9th of October as an extension of the
original oral CVPR paper: arXiv:1903.0378
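To make method (ii) concrete, below is a minimal numpy sketch of blending the
covariance matrices of two PCA-based shape models defined over the same
vertices and re-deriving a combined basis. The convex weighting and toy
dimensions are illustrative assumptions, not the paper's exact
Gaussian-process formulation.

```python
import numpy as np

def blend_covariances(cov_a, cov_b, w=0.5):
    """Convex blend of two covariance matrices defined on the same vertices."""
    return w * cov_a + (1.0 - w) * cov_b

def pca_from_covariance(cov, n_components):
    """Recover an orthonormal shape basis and per-component variances."""
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]  # keep the largest modes
    return eigvecs[:, order], eigvals[order]

# Toy example: two models over the same flattened 3D points (n = 3 * vertices).
rng = np.random.default_rng(0)
n = 30
A = rng.standard_normal((n, n)); cov_face = A @ A.T / n
B = rng.standard_normal((n, n)); cov_head = B @ B.T / n

basis, variances = pca_from_covariance(blend_covariances(cov_face, cov_head), 10)
print(basis.shape, variances[:3])
```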
A 3D Face Modelling Approach for Pose-Invariant Face Recognition in a Human-Robot Environment
Face analysis techniques have become a crucial component of human-machine
interaction in the fields of assistive and humanoid robotics. However, the
variations in head pose that arise naturally in these environments remain a
major challenge. In this paper, we present a real-time-capable 3D face
modelling framework for 2D in-the-wild images that is applicable for robotics.
The fitting of the 3D Morphable Model is based exclusively on automatically
detected landmarks. After fitting, the face can be corrected in pose and
transformed back to a frontal 2D representation that is more suitable for face
recognition. We conduct face recognition experiments with non-frontal images
from the MUCT database and uncontrolled, in-the-wild images from the PaSC
database, the most challenging face recognition database to date, and show
improved performance. Finally, we present our SCITOS G5 robot system, which
incorporates our framework as a means of image pre-processing for face
analysis.
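As an illustration of fitting driven purely by detected landmarks, here is a
minimal sketch of the head-pose estimation step via PnP. The mean-shape
landmark coordinates and the pinhole intrinsics are assumed placeholders, not
values from the paper; with the recovered rotation, the fitted 3D face can be
rendered frontally for recognition.

```python
import numpy as np
import cv2

# 3D positions of a few landmarks on a mean face shape (assumed values, in mm).
MODEL_POINTS = np.array([
    [0.0,    0.0,   0.0],   # nose tip
    [0.0,  -63.6, -12.5],   # chin
    [-43.3, 32.7, -26.0],   # left eye outer corner
    [43.3,  32.7, -26.0],   # right eye outer corner
    [-28.9,-28.9, -24.1],   # left mouth corner
    [28.9, -28.9, -24.1],   # right mouth corner
], dtype=np.float64)

def estimate_head_pose(image_points, image_size):
    """Recover head rotation/translation from 2D landmarks via PnP."""
    h, w = image_size
    K = np.array([[w, 0, w / 2],     # crude pinhole camera assumption
                  [0, w, h / 2],
                  [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)       # rotation matrix for pose correction
    return R, tvec
```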
HeadOn: Real-time Reenactment of Human Portrait Videos
We propose HeadOn, the first real-time source-to-target reenactment approach
for complete human portrait videos that enables transfer of torso and head
motion, face expression, and eye gaze. Given a short RGB-D video of the target
actor, we automatically construct a personalized geometry proxy that embeds a
parametric head, eye, and kinematic torso model. A novel real-time reenactment
algorithm employs this proxy to photo-realistically map the captured motion
from the source actor to the target actor. On top of the coarse geometric
proxy, we propose a video-based rendering technique that composites the
modified target portrait video via view- and pose-dependent texturing, and
creates photo-realistic imagery of the target actor under novel torso and head
poses, facial expressions, and gaze directions. To this end, we propose
robust tracking of the face and torso of the source actor. We extensively
evaluate our approach and show that it enables significantly greater
flexibility in creating realistic reenacted output videos.
Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at
Siggraph'1
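As a loose illustration of view- and pose-dependent texturing, the numpy
sketch below averages candidate frames of the target video, weighted by how
closely each frame's viewing direction matches the requested novel pose. The
exponential cosine weighting is an assumption for illustration, not HeadOn's
actual compositing scheme.

```python
import numpy as np

def blend_frames(frames, frame_dirs, query_dir, sharpness=8.0):
    """frames: (N, H, W, 3); frame_dirs, query_dir: unit viewing directions."""
    cos_sim = frame_dirs @ query_dir               # (N,) view similarity
    weights = np.exp(sharpness * (cos_sim - 1.0))  # favor nearby viewpoints
    weights /= weights.sum()
    return np.tensordot(weights, frames, axes=1)   # weighted image average
```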
GANHead: Towards Generative Animatable Neural Head Avatars
To bring digital avatars into people's lives, we need to efficiently generate
complete, realistic, and animatable head avatars. This task is challenging,
and existing methods struggle to satisfy all of these requirements at once. To
achieve these goals, we propose GANHead (Generative Animatable Neural Head
Avatar), a novel generative head model that combines the fine-grained control
of explicit expression parameters with the realistic rendering results of
implicit representations.
Specifically, GANHead represents coarse geometry, fine-grained details and
texture via three networks in canonical space, enabling the generation of
complete and realistic head avatars. To achieve flexible animation, we define
the deformation field via standard linear blend skinning (LBS), with learned
continuous pose and expression bases and LBS weights. This allows the avatars
to be directly animated by FLAME parameters and generalize well to unseen poses
and expressions. Compared to state-of-the-art (SOTA) methods, GANHead achieves
superior performance on head avatar generation and raw scan fitting.
Comment: Camera-ready for CVPR 2023. Project page:
https://wsj-sjtu.github.io/GANHead
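For reference, below is a minimal numpy sketch of standard linear blend
skinning, the deformation machinery the abstract builds on. The per-joint
transforms and skinning weights are toy inputs here; in GANHead the continuous
pose/expression bases and the LBS weights are learned, which this sketch does
not reproduce.

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """verts: (V, 3); weights: (V, J); transforms: (J, 4, 4) rigid per joint."""
    V = verts.shape[0]
    verts_h = np.concatenate([verts, np.ones((V, 1))], axis=1)  # homogeneous
    # Each vertex gets a weighted combination of the joint transforms...
    blended = np.einsum('vj,jab->vab', weights, transforms)     # (V, 4, 4)
    # ...which is then applied to the vertex itself.
    skinned = np.einsum('vab,vb->va', blended, verts_h)         # (V, 4)
    return skinned[:, :3]
```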
Learning Neural Parametric Head Models
We propose a novel 3D morphable model for complete human heads based on
hybrid neural fields. At the core of our model lies a neural parametric
representation that disentangles identity and expressions in disjoint latent
spaces. To this end, we capture a person's identity in a canonical space as a
signed distance field (SDF), and model facial expressions with a neural
deformation field. In addition, our representation achieves high-fidelity
local detail by introducing an ensemble of local fields centered around
facial anchor points. To facilitate generalization, we train our model on a
newly-captured dataset of over 3700 head scans from 203 different identities
using a custom high-end 3D scanning setup. Our dataset significantly exceeds
comparable existing datasets, both with respect to quality and completeness
of geometry, averaging around 3.5M mesh faces per scan. We will publicly
release our dataset along with a public benchmark for both neural head avatar
construction and evaluation on a hidden test set for inference-time fitting.
Finally, we demonstrate that our approach outperforms state-of-the-art
methods in terms of fitting error and reconstruction quality.
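A minimal PyTorch sketch of the split described above: identity as a signed
distance field in canonical space, expression as a deformation field that
warps query points back into that space. The MLP sizes, latent dimensions, and
the backward (rather than forward) warp are simplifying assumptions, not the
paper's architecture (which also uses ensembles of local fields).

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class NeuralHead(nn.Module):
    def __init__(self, id_dim=32, expr_dim=16):
        super().__init__()
        self.sdf = mlp(3 + id_dim, 1)        # identity SDF in canonical space
        self.deform = mlp(3 + expr_dim, 3)   # expression deformation field

    def forward(self, x, z_id, z_expr):
        """x: (N, 3) query points; z_id, z_expr: latent codes."""
        n = x.shape[0]
        offset = self.deform(torch.cat([x, z_expr.expand(n, -1)], dim=-1))
        x_canonical = x + offset             # warp back to canonical space
        return self.sdf(torch.cat([x_canonical, z_id.expand(n, -1)], dim=-1))

model = NeuralHead()
sdf_vals = model(torch.randn(1024, 3), torch.randn(1, 32), torch.randn(1, 16))
print(sdf_vals.shape)  # torch.Size([1024, 1])
```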
Expressive Body Capture: 3D Hands, Face, and Body from a Single Image
To facilitate the analysis of human actions, interactions and emotions, we
compute a 3D model of human body pose, hand pose, and facial expression from a
single monocular image. To achieve this, we use thousands of 3D scans to train
a new, unified, 3D model of the human body, SMPL-X, that extends SMPL with
fully articulated hands and an expressive face. Learning to regress the
parameters of SMPL-X directly from images is challenging without paired images
and 3D ground truth. Consequently, we follow the approach of SMPLify, which
estimates 2D features and then optimizes model parameters to fit the features.
We improve on SMPLify in several significant ways: (1) we detect 2D features
corresponding to the face, hands, and feet and fit the full SMPL-X model to
these; (2) we train a new neural network pose prior using a large MoCap
dataset; (3) we define a new interpenetration penalty that is both fast and
accurate; (4) we automatically detect gender and the appropriate body models
(male, female, or neutral); (5) our PyTorch implementation achieves a speedup
of more than 8x over Chumpy. We use the new method, SMPLify-X, to fit SMPL-X to
both controlled images and images in the wild. We evaluate 3D accuracy on a new
curated dataset comprising 100 images with pseudo ground-truth. This is a step
towards automatic expressive human capture from monocular RGB data. The models,
code, and data are available for research purposes at
https://smpl-x.is.tue.mpg.de
Comment: To appear in CVPR 201
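A minimal PyTorch sketch of a SMPLify-style fitting loop: optimize model
parameters so that projected model joints match detected 2D keypoints,
regularized by a prior. The projection intrinsics, the linear joint stand-in,
and the quadratic prior are hypothetical placeholders for SMPL-X's actual body
model and learned priors.

```python
import torch

def project(joints_3d, focal=5000.0, center=112.0):
    """Pinhole projection onto the image plane (assumed intrinsics)."""
    return focal * joints_3d[:, :2] / joints_3d[:, 2:3] + center

def fit(keypoints_2d, joints_from_params, init_params, steps=200, prior_w=1e-3):
    params = init_params.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        joints_3d = joints_from_params(params)   # body model forward pass
        reproj = ((project(joints_3d) - keypoints_2d) ** 2).sum()
        prior = prior_w * (params ** 2).sum()    # stand-in for learned priors
        (reproj + prior).backward()
        opt.step()
    return params.detach()

# Toy usage with a random linear stand-in for the joint regressor.
W = torch.randn(10, 25 * 3)
joints_fn = lambda p: (p @ W).reshape(25, 3) + torch.tensor([0.0, 0.0, 2.0])
fitted = fit(torch.rand(25, 2) * 224.0, joints_fn, torch.zeros(10))
```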