MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
In this work we propose a novel model-based deep convolutional autoencoder
that addresses the highly challenging problem of reconstructing a 3D human face
from a single in-the-wild color image. To this end, we combine a convolutional
encoder network with an expert-designed generative model that serves as
decoder. The core innovation is our new differentiable parametric decoder that
encapsulates image formation analytically based on a generative model. Our
decoder takes as input a code vector with exactly defined semantic meaning that
encodes detailed face pose, shape, expression, skin reflectance and scene
illumination. Due to this new way of combining CNN-based with model-based face
reconstruction, the CNN-based encoder learns to extract semantically meaningful
parameters from a single monocular input image. For the first time, a CNN
encoder and an expert-designed generative model can be trained end-to-end in an
unsupervised manner, which renders training on very large (unlabeled) real
world data feasible. The obtained reconstructions compare favorably to current
state-of-the-art approaches in terms of quality and richness of representation.
Comment: International Conference on Computer Vision (ICCV) 2017 (Oral), 13 pages
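A minimal sketch of what such a differentiable parametric decoder could look like (the code split sizes, basis names and the omitted renderer are illustrative assumptions, not the released MoFA implementation):

```python
import torch

def parametric_decoder(code, mean_shape, shape_basis, expr_basis,
                       mean_albedo, albedo_basis):
    """Hypothetical sketch of a differentiable parametric decoder: a
    semantic code vector is split into pose, shape, expression, reflectance
    and illumination parts and mapped to per-vertex geometry and albedo.
    Split sizes and basis shapes are assumptions, e.g. shape_basis: (80, 3V)."""
    pose, alpha, delta, beta, gamma = torch.split(
        code, [6, 80, 64, 80, 27], dim=-1)

    # Linear morphable model: mean plus shape and expression offsets.
    verts = mean_shape + alpha @ shape_basis + delta @ expr_basis   # (B, 3V)
    albedo = mean_albedo + beta @ albedo_basis                      # (B, 3V)

    # gamma would parameterize spherical-harmonics illumination; a
    # differentiable renderer (omitted) would pose, shade and rasterize
    # the mesh so that the whole pipeline stays end-to-end trainable.
    return verts, albedo, pose, gamma
```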
FlowCam: Training Generalizable 3D Radiance Fields without Camera Poses via Pixel-Aligned Scene Flow
Reconstruction of 3D neural fields from posed images has emerged as a
promising method for self-supervised representation learning. The key challenge
preventing the deployment of these 3D scene learners on large-scale video data
is their dependence on precise camera poses from structure-from-motion, which
is prohibitively expensive to run at scale. We propose a method that jointly
reconstructs camera poses and 3D neural scene representations online and in a
single forward pass. We estimate poses by first lifting frame-to-frame optical
flow to 3D scene flow via differentiable rendering, preserving locality and
shift-equivariance of the image processing backbone. SE(3) camera pose
estimation is then performed via a weighted least-squares fit to the scene flow
field. This formulation enables us to jointly supervise pose estimation and a
generalizable neural scene representation via re-rendering the input video, and
thus, train end-to-end and fully self-supervised on real-world video datasets.
We demonstrate that our method performs robustly on diverse, real-world video,
notably on sequences traditionally challenging to optimization-based pose
estimation techniques.
Comment: Project website: http://cameronosmith.github.io/flowca
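The weighted least-squares SE(3) fit can be made concrete with a standard weighted Procrustes/Kabsch solve; the sketch below assumes the scene flow has already been lifted to 3D point correspondences with per-point confidences, and is not the authors' exact implementation:

```python
import torch

def weighted_se3_fit(p, q, w):
    """Weighted Procrustes/Kabsch solve for the rigid motion (R, t) that
    minimizes sum_i w_i * ||R p_i + t - q_i||^2.
    p, q: (N, 3) corresponding 3D points (e.g. a point and where the lifted
    scene flow carries it); w: (N,) non-negative confidences."""
    w = w / w.sum().clamp(min=1e-8)
    p_mean = (w[:, None] * p).sum(dim=0)
    q_mean = (w[:, None] * q).sum(dim=0)
    p_c, q_c = p - p_mean, q - q_mean

    # Weighted cross-covariance; its SVD yields the optimal rotation.
    cov = (w[:, None] * p_c).T @ q_c
    U, _, Vh = torch.linalg.svd(cov)
    d = torch.sign(torch.linalg.det(Vh.T @ U.T))
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = Vh.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```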
Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz
The reconstruction of dense 3D models of face geometry and appearance from a
single image is highly challenging and ill-posed. To constrain the problem,
many approaches rely on strong priors, such as parametric face models learned
from limited 3D scan data. However, such prior models restrict generalization to
the true diversity in facial geometry, skin reflectance and illumination. To
alleviate this problem, we present the first approach that jointly learns 1) a
regressor for face shape, expression, reflectance and illumination on the basis
of 2) a concurrently learned parametric face model. Our multi-level face model
combines the advantage of 3D Morphable Models for regularization with the
out-of-space generalization of a learned corrective space. We train end-to-end
on in-the-wild images without dense annotations by fusing a convolutional
encoder with a differentiable expert-designed renderer and a self-supervised
training loss, both defined at multiple detail levels. Our approach compares
favorably to the state-of-the-art in terms of reconstruction quality, better
generalizes to real world faces, and runs at over 250 Hz.
Comment: CVPR 2018 (Oral). Project webpage: https://gvv.mpi-inf.mpg.de/projects/FML
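One possible reading of the multi-level idea, sketched below under the assumption of a fixed linear 3DMM level plus a learned corrective offset network (names and dimensions are hypothetical):

```python
import torch
import torch.nn as nn

class MultiLevelFaceModel(nn.Module):
    """Hypothetical two-level decoder: a fixed linear 3DMM level for coarse,
    well-regularized geometry plus a learned corrective network that adds
    out-of-space detail on top of it."""

    def __init__(self, mean_shape, shape_basis, corr_dim=128):
        super().__init__()
        self.register_buffer("mean_shape", mean_shape)    # (3V,)
        self.register_buffer("shape_basis", shape_basis)  # (K, 3V)
        n_out = mean_shape.numel()
        self.corrective = nn.Sequential(                  # learned level
            nn.Linear(corr_dim, 256), nn.ReLU(),
            nn.Linear(256, n_out))

    def forward(self, alpha, corr_code):
        coarse = self.mean_shape + alpha @ self.shape_basis   # 3DMM level
        detail = self.corrective(corr_code)                   # corrective level
        return coarse + detail
```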
FML: Face Model Learning from Videos
Monocular image-based 3D reconstruction of faces is a long-standing problem
in computer vision. Since image data is a 2D projection of a 3D face, the
resulting depth ambiguity makes the problem ill-posed. Most existing methods
rely on data-driven priors that are built from limited 3D face scans. In
contrast, we propose multi-frame video-based self-supervised training of a deep
network that (i) learns a face identity model both in shape and appearance
while (ii) jointly learning to reconstruct 3D faces. Our face model is learned
using only corpora of in-the-wild video clips collected from the Internet. This
virtually endless source of training data enables learning of a highly general
3D face model. In order to achieve this, we propose a novel multi-frame
consistency loss that ensures consistent shape and appearance across multiple
frames of a subject's face, thus minimizing depth ambiguity. At test time we
can use an arbitrary number of frames, so that we can perform both monocular as
well as multi-frame reconstruction.
Comment: CVPR 2019 (Oral). Video: https://www.youtube.com/watch?v=SG2BwxCw0lQ, Project Page: https://gvv.mpi-inf.mpg.de/projects/FML19
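A hedged sketch of the multi-frame consistency idea: per-frame identity estimates of the same subject are pulled towards a shared identity while each frame is re-rendered and compared photometrically (the loss form and weighting are assumptions):

```python
import torch

def multi_frame_loss(per_frame_identity, rendered, frames, weight=0.1):
    """Hypothetical sketch of a multi-frame consistency loss: a photometric
    re-rendering term per frame plus a penalty that forces the per-frame
    identity estimates of one subject to agree (their mean acting as the
    shared identity code).
    per_frame_identity: (F, D); rendered, frames: (F, C, H, W)."""
    photometric = (rendered - frames).abs().mean()
    shared = per_frame_identity.mean(dim=0, keepdim=True)
    consistency = ((per_frame_identity - shared) ** 2).mean()
    return photometric + weight * consistency  # weighting is an assumption
```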
PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations
Implicit surface representations, such as signed-distance functions, combined
with deep learning have led to impressive models which can represent detailed
shapes of objects with arbitrary topology. Since a continuous function is
learned, the reconstructions can also be extracted at any arbitrary resolution.
However, large datasets such as ShapeNet are required to train such models. In
this paper, we present a new mid-level patch-based surface representation. At
the level of patches, objects across different categories share similarities,
which leads to more generalizable models. We then introduce a novel method to
learn this patch-based representation in a canonical space, such that it is as
object-agnostic as possible. We show that our representation, trained on one
category of objects from ShapeNet, can also represent detailed shapes from any
other category well. In addition, it can be trained using far fewer shapes than
existing approaches. We show several applications of our new
representation, including shape interpolation and partial point cloud
completion. Due to explicit control over positions, orientations and scales of
patches, our representation is also more controllable compared to object-level
representations, which enables us to deform encoded shapes non-rigidly.
Comment: 25 pages, including supplementary material. Code: https://github.com/edgar-tr/patchnets Project page: https://gvv.mpi-inf.mpg.de/projects/PatchNets
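A rough sketch of how a patch-based implicit representation can be evaluated: a query point is mapped into each patch's canonical frame, decoded, and the per-patch predictions are blended. Patch rotations and the exact blending weights used in the paper are omitted or assumed here:

```python
import torch

def patch_sdf(x, centers, scales, codes, decoder):
    """Hypothetical sketch of a patch-based implicit surface.
    x: (3,) query point; centers: (P, 3); scales: (P,); codes: (P, D);
    decoder(code, local_point) -> scalar SDF in the patch's canonical frame.
    Patch rotations are omitted for brevity."""
    local = (x - centers) / scales[:, None]               # canonical coords
    sdf_p = torch.stack([decoder(c, l) for c, l in zip(codes, local)])
    # Blend patches with Gaussian weights that decay outside a patch's extent.
    w = torch.exp(-(local ** 2).sum(dim=-1))
    w = w / w.sum().clamp(min=1e-8)
    return (w * sdf_p * scales).sum()                      # back to world units
```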
Monocular Real-time Full Body Capture with Inter-part Correlations
We present the first method for real-time full body capture that estimates
shape and motion of body and hands together with a dynamic 3D face model from a
single color image. Our approach uses a new neural network architecture that
exploits correlations between body and hands at high computational efficiency.
Unlike previous works, our approach is jointly trained on multiple datasets
that focus on hand, body or face separately, without requiring data in which all
parts are annotated at the same time, which is much harder to create with
sufficient variety. This multi-dataset training enables
superior generalization ability. In contrast to earlier monocular full body
methods, our approach captures more expressive 3D face geometry and color by
estimating the shape, expression, albedo and illumination parameters of a
statistical face model. Our method achieves competitive accuracy on public
benchmarks, while being significantly faster and providing more complete face
reconstructions.
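The multi-dataset training can be illustrated by a loss that only supervises the parts annotated in each sample; this is a hypothetical sketch, not the authors' training code:

```python
import torch

def multi_dataset_loss(pred, batch):
    """Hypothetical sketch: supervise only the parts that are annotated in
    a given sample, so hand-only, body-only and face-only datasets can be
    mixed without joint annotations.
    pred / batch: dicts of tensors; batch[part] is None if unannotated."""
    loss = pred["body"].new_zeros(())
    for part in ("body", "hands", "face"):
        if batch.get(part) is not None:
            loss = loss + (pred[part] - batch[part]).abs().mean()
    return loss
```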
InverseFaceNet: Deep Monocular Inverse Face Rendering
We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image. By estimating all parameters from just a single image, advanced editing possibilities on a single face image, such as appearance editing and relighting, become feasible in real time. Most previous learning-based face reconstruction approaches do not jointly recover all dimensions, or are severely limited in terms of visual quality. In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created training corpus. Our approach builds on a novel loss function that measures model-space similarity directly in parameter space and significantly improves reconstruction accuracy. We further propose a self-supervised bootstrapping process in the network training loop, which iteratively updates the synthetic training corpus to better reflect the distribution of real-world imagery. We demonstrate that this strategy outperforms completely synthetically trained networks. Finally, we show high-quality reconstructions and compare our approach to several state-of-the-art approaches.
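A minimal sketch of a loss measured directly in the face-model parameter space, with one weight per semantic parameter group (the slices and weights are hypothetical, not the paper's exact formulation):

```python
import torch

def model_space_loss(pred, target, group_slices, group_weights):
    """Hypothetical sketch of a loss computed directly in the parameter
    space of a face model, with one weight per semantic group
    (pose, shape, expression, reflectance, illumination).
    pred, target: (B, P) parameter vectors; group_slices: list of (start, end)."""
    loss = 0.0
    for (start, end), w in zip(group_slices, group_weights):
        diff = pred[:, start:end] - target[:, start:end]
        loss = loss + w * (diff ** 2).mean()
    return loss
```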