End-to-end 3D face reconstruction with deep neural networks
Monocular 3D facial shape reconstruction from a single 2D facial image has
been an active research area due to its wide applications. Inspired by the
success of deep neural networks (DNN), we propose a DNN-based approach for
End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different
from recent works that reconstruct and refine the 3D face in an iterative
manner using both an RGB image and an initial 3D facial shape rendering, our
DNN model is end-to-end, and thus the complicated 3D rendering process can be
avoided. Moreover, we integrate in the DNN architecture two components, namely
a multi-task loss function and a fusion convolutional neural network (CNN) to
improve facial expression reconstruction. With the multi-task loss function, 3D
face reconstruction is divided into neutral 3D facial shape reconstruction and
expressive 3D facial shape reconstruction. Since the neutral 3D facial shape is
class-specific, higher-layer features are useful for it; in contrast, the
expressive 3D facial shape favors lower- or intermediate-layer features. With
the fusion-CNN, features from different intermediate layers are fused and
transformed for predicting the 3D expressive facial shape. Through extensive
experiments, we demonstrate the superiority of our end-to-end framework in
improving the accuracy of 3D face reconstruction. Comment: Accepted to CVPR1
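As an illustrative sketch only (not the authors' released code; the function
name, weights, and squared-error form are assumptions), the multi-task
objective described above can be written as a weighted sum of a neutral-shape
term and an expressive-shape term:

```python
import numpy as np

def multi_task_loss(pred_neutral, pred_expressive, gt_neutral, gt_expressive,
                    w_neutral=1.0, w_expressive=1.0):
    """Weighted sum of per-vertex squared errors for the neutral and
    expressive 3D shape predictions (each array has shape (n_vertices, 3))."""
    loss_n = np.mean(np.sum((pred_neutral - gt_neutral) ** 2, axis=1))
    loss_e = np.mean(np.sum((pred_expressive - gt_expressive) ** 2, axis=1))
    return w_neutral * loss_n + w_expressive * loss_e

# Toy example: perfect neutral prediction, off-by-one expressive prediction.
gt = np.zeros((4, 3))
pred_e = np.ones((4, 3))
print(multi_task_loss(gt, pred_e, gt, gt))  # → 3.0
```

Dividing the loss this way lets gradients from the two reconstruction targets
reach different depths of the network, matching the layer preferences noted
above.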
3D Face Reconstruction from Light Field Images: A Model-free Approach
Reconstructing 3D facial geometry from a single RGB image has recently
attracted wide research interest. However, it remains an ill-posed problem,
and most methods rely on prior models, which undermines the accuracy of the
recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI)
recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI)
obtained from light field cameras and learn CNN models that recover horizontal
and vertical 3D facial curves from the respective horizontal and vertical EPIs.
Our 3D face reconstruction network (FaceLFnet) comprises a densely connected
architecture to learn accurate 3D facial curves from low resolution EPIs. To
train the proposed FaceLFnets from scratch, we synthesize photo-realistic light
field images from 3D facial scans. The curve-by-curve 3D face estimation
approach allows the networks to learn from only 14K images of 80 identities,
which still comprise over 11 million EPIs/curves. The estimated facial curves
are merged into a single point cloud, to which a surface is fitted to obtain the
final 3D face. Our method is model-free, requires only a few training samples
to learn FaceLFnet and can reconstruct 3D faces with high accuracy from single
light field images under varying poses, expressions and lighting conditions.
Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces
reconstruction errors by over 20% compared to the recent state of the art.
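A minimal sketch of the curve-merging step described above, under stated
assumptions (the function name and curve representation are hypothetical; the
abstract only says the per-EPI curves are merged into one point cloud before
surface fitting):

```python
import numpy as np

def merge_curves(horizontal_curves, vertical_curves):
    """Merge per-EPI 3D facial curves into a single point cloud.

    Each curve is an (n_points, 3) array of (x, y, z) samples recovered
    from one horizontal or vertical EPI. The merged cloud simply stacks
    all samples; a surface would then be fitted to this cloud to produce
    the final 3D face.
    """
    all_curves = list(horizontal_curves) + list(vertical_curves)
    return np.vstack(all_curves)

# Toy example: two horizontal and one vertical curve of 3 points each.
h = [np.zeros((3, 3)), np.ones((3, 3))]
v = [np.full((3, 3), 2.0)]
cloud = merge_curves(h, v)
print(cloud.shape)  # → (9, 3)
```

Stacking the curves keeps the approach model-free: no prior 3D face model is
needed, only the aggregated samples and a generic surface-fitting step.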