1,165 research outputs found
Learning single-image 3D reconstruction by generative modelling of shape, pose and shading
We present a unified framework tackling two problems: class-specific 3D
reconstruction from a single image, and generation of new 3D shape samples.
These tasks have received considerable attention recently; however, most
existing approaches rely on 3D supervision, annotation of 2D images with
keypoints or poses, and/or training with multiple views of each object
instance. Our framework is very general: it can be trained in similar settings
to existing approaches, while also supporting weaker supervision. Importantly,
it can be trained purely from 2D images, without pose annotations, and with
only a single view per instance. We employ meshes as an output representation,
instead of voxels used in most prior work. This allows us to reason over
lighting parameters and exploit shading information during training, which
previous 2D-supervised methods cannot. Thus, our method can learn to generate
and reconstruct concave object classes. We evaluate our approach in various
settings, showing that: (i) it learns to disentangle shape from pose and
lighting; (ii) using shading in the loss improves performance compared to just
silhouettes; (iii) when using a standard single white light, our model
outperforms state-of-the-art 2D-supervised methods, both with and without pose
supervision, thanks to exploiting shading cues; (iv) performance improves
further when using multiple coloured lights, even approaching that of
state-of-the-art 3D-supervised methods; (v) shapes produced by our model
capture smooth surfaces and fine details better than voxel-based approaches;
and (vi) our approach supports concave classes such as bathtubs and sofas,
which methods based on silhouettes cannot learn.
Comment: Extension of arXiv:1807.09259, accepted to IJCV. Differentiable renderer available at https://github.com/pmh47/dir
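The shading cue this abstract exploits can be illustrated with a minimal Lambertian rendering loss. The sketch below is not the paper's differentiable renderer (that is the linked repository); the function names, the grayscale target image, and the simple diffuse model are assumptions for illustration only.

```python
import numpy as np

def lambertian_shading(normals, light_dir, albedo=0.8):
    """Per-pixel diffuse shading: albedo * max(0, n . l).

    normals:   (H, W, 3) unit surface normals rendered from the mesh
    light_dir: (3,) unit vector pointing towards the light
    """
    return albedo * np.clip(normals @ light_dir, 0.0, None)

def reconstruction_loss(pred_normals, pred_mask, image, gt_mask, light_dir,
                        w_shading=1.0, w_sil=1.0):
    """Combine a shading term (evaluated inside the ground-truth
    silhouette) with a silhouette overlap term; both are plain L2
    penalties in this toy version, and `image` is assumed grayscale."""
    shading = lambertian_shading(pred_normals, light_dir)
    shading_term = np.mean(((shading - image) * gt_mask) ** 2)
    sil_term = np.mean((pred_mask - gt_mask) ** 2)
    return w_shading * shading_term + w_sil * sil_term
```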
Detail-preserving and Content-aware Variational Multi-view Stereo Reconstruction
Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view
images is a fundamental yet active research area in computer vision. Despite
the steady progress in multi-view stereo reconstruction, most existing methods
are still limited in recovering fine-scale details and sharp features while
suppressing noises, and may fail in reconstructing regions with few textures.
To address these limitations, this paper presents a Detail-preserving and
Content-aware Variational (DCV) multi-view stereo method, which reconstructs
the 3D surface by alternating between reprojection error minimization and mesh
denoising. In reprojection error minimization, we propose a novel inter-image
similarity measure, which is effective to preserve fine-scale details of the
reconstructed surface and builds a connection between guided image filtering
and image registration. In mesh denoising, we propose a content-aware
ℓp-minimization algorithm by adaptively estimating the p value and
regularization parameters based on the current input. It is far more effective
at suppressing noise while preserving sharp features than conventional
isotropic mesh smoothing. Experimental results on benchmark datasets
demonstrate that our DCV method is capable of recovering more surface details,
and obtains cleaner and more accurate reconstructions than state-of-the-art
methods. In particular, our method achieves the best results among all
published methods on the Middlebury dino ring and dino sparse ring datasets in
terms of both completeness and accuracy.
Comment: 14 pages, 16 figures. Submitted to IEEE Transactions on Image Processing
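As a rough illustration of ℓp-regularized denoising of the kind this abstract describes, the following sketch runs iteratively reweighted least squares (IRLS) on a 1-D signal. The mesh setting, the adaptive choice of p, and the content-aware weighting of the actual method are not reproduced here, and all names are hypothetical.

```python
import numpy as np

def irls_lp_denoise(signal, p=0.5, lam=0.1, iters=20, eps=1e-6):
    """Denoise a 1-D signal with an l_p penalty on first differences,
    solved by IRLS: each iteration solves a weighted quadratic problem
    whose weights re-approximate |d|^p around the current estimate."""
    n = len(signal)
    x = signal.copy()
    D = np.diff(np.eye(n), axis=0)            # first-difference operator
    for _ in range(iters):
        d = D @ x
        w = (np.abs(d) + eps) ** (p - 2)       # IRLS weights for |d|^p
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, signal)
    return x
```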
A Generative Model of People in Clothing
We present the first image-based generative model of people in clothing for
the full body. We sidestep the commonly used complex graphics rendering
pipeline and the need for high-quality 3D scans of dressed people. Instead, we
learn generative models from a large image database. The main challenge is to
cope with the high variance in human pose, shape and appearance. For this
reason, pure image-based approaches have not been considered so far. We show
that this challenge can be overcome by splitting the generating process in two
parts. First, we learn to generate a semantic segmentation of the body and
clothing. Second, we learn a conditional model on the resulting segments that
creates realistic images. The full model is differentiable and can be
conditioned on pose, shape or color. The results are samples of people in
different clothing items and styles. The proposed model can generate entirely
new people with realistic clothing. In several experiments we present
encouraging results that suggest an entirely data-driven approach to people
generation is possible.
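The two-stage split described above can be sketched as two chained modules. The class below is a hypothetical PyTorch wiring, not the authors' architecture; shape_model and image_model stand in for whatever networks fill those roles.

```python
import torch
import torch.nn as nn

class TwoStagePersonGenerator(nn.Module):
    """Hypothetical two-stage sampler: stage one maps a latent code
    (optionally concatenated with a pose encoding) to a semantic
    segmentation of body and clothing; stage two translates that
    segmentation into an RGB image."""
    def __init__(self, shape_model: nn.Module, image_model: nn.Module):
        super().__init__()
        self.shape_model = shape_model   # latent -> semantic segmentation
        self.image_model = image_model   # segmentation -> RGB image

    def forward(self, z, pose=None):
        cond = z if pose is None else torch.cat([z, pose], dim=1)
        seg = self.shape_model(cond)
        return self.image_model(seg), seg
```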
Learning to Dress 3D People in Generative Clothing
Three-dimensional human body models are widely used in the analysis of human
pose and motion. Existing models, however, are learned from minimally-clothed
3D scans and thus do not generalize to the complexity of dressed people in
common images and videos. Additionally, current models lack the expressive
power needed to represent the complex non-linear geometry of pose-dependent
clothing shapes. To address this, we learn a generative 3D mesh model of
clothed people from 3D scans with varying pose and clothing. Specifically, we
train a conditional Mesh-VAE-GAN to learn the clothing deformation from the
SMPL body model, making clothing an additional term in SMPL. Our model is
conditioned on both pose and clothing type, giving the ability to draw samples
of clothing to dress different body shapes in a variety of styles and poses. To
preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to
3D meshes. Our model, named CAPE, represents global shape and fine local
structure, effectively extending the SMPL body model to clothing. To our
knowledge, this is the first generative model that directly dresses 3D human
body meshes and generalizes to different poses. The model, code and data are
available for research purposes at https://cape.is.tue.mpg.de.
Comment: CVPR-2020 camera ready. Code and data are available at https://cape.is.tue.mpg.de
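The idea of "making clothing an additional term in SMPL" amounts to adding decoded per-vertex displacements onto the body surface. The snippet below is a hedged sketch of that composition only; clothing_decoder and the conditioning layout are assumptions, and the actual CAPE decoder is a Mesh-VAE-GAN rather than a generic callable.

```python
import torch

def dressed_vertices(smpl_verts, clothing_decoder, z, pose, clothing_type):
    """Sketch of clothing as an additive term on the body surface.

    smpl_verts:       (N, 3) vertices of the minimally-clothed body mesh
    clothing_decoder: callable mapping a conditioning vector to (N, 3)
                      per-vertex displacements
    z, pose, clothing_type: 1-D conditioning tensors (assumed layout)
    """
    cond = torch.cat([z, pose, clothing_type], dim=-1)
    displacements = clothing_decoder(cond)     # (N, 3) clothing offsets
    return smpl_verts + displacements
```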
MonoPerfCap: Human Performance Capture from Monocular Video
We present the first marker-less approach for temporally coherent 3D
performance capture of a human with general clothing from monocular video. Our
approach reconstructs articulated human skeleton motion as well as medium-scale
non-rigid surface deformations in general scenes. Human performance capture is
a challenging problem due to the large range of articulation, potentially fast
motion, and considerable non-rigid deformations, even from multi-view data.
Reconstruction from monocular video alone is drastically more challenging,
since strong occlusions and the inherent depth ambiguity lead to a highly
ill-posed reconstruction problem. We tackle these challenges by a novel
approach that employs sparse 2D and 3D human pose detections from a
convolutional neural network using a batch-based pose estimation strategy.
Joint recovery of per-batch motion makes it possible to resolve the ambiguities of the
monocular reconstruction problem based on a low dimensional trajectory
subspace. In addition, we propose refinement of the surface geometry based on
fully automatically extracted silhouettes to enable medium-scale non-rigid
alignment. We demonstrate state-of-the-art performance capture results that
enable exciting applications such as video editing and free viewpoint video,
previously infeasible from monocular video. Our qualitative and quantitative
evaluation demonstrates that our approach significantly outperforms previous
monocular methods in terms of accuracy, robustness and scene complexity that
can be handled.
Comment: Accepted to ACM TOG 2018, to be presented at SIGGRAPH 2018
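A common way to realise a low-dimensional trajectory subspace over a batch of frames is to project each joint trajectory onto a truncated DCT basis. The sketch below shows that projection in isolation; it is an assumed stand-in, not the paper's batch-based pose optimisation, and the function name is hypothetical.

```python
import numpy as np

def fit_trajectory_subspace(joints_3d, k=8):
    """Project per-frame 3D joint trajectories onto a k-dimensional
    DCT basis, which regularises motion over the batch.

    joints_3d: (T, J, 3) noisy per-frame joint positions
    k:         number of DCT coefficients kept per trajectory
    """
    T = joints_3d.shape[0]
    t = np.arange(T)
    # Orthonormal DCT-II basis, shape (T, k)
    basis = np.cos(np.pi * (t[:, None] + 0.5) * np.arange(k)[None, :] / T)
    basis[:, 0] *= 1.0 / np.sqrt(2.0)
    basis *= np.sqrt(2.0 / T)
    flat = joints_3d.reshape(T, -1)                     # (T, J*3)
    coeffs, *_ = np.linalg.lstsq(basis, flat, rcond=None)
    return (basis @ coeffs).reshape(joints_3d.shape)
```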
LiveCap: Real-time Human Performance Capture from Monocular Video
We present the first real-time human performance capture approach that
reconstructs dense, space-time coherent deforming geometry of entire humans in
general everyday clothing from just a single RGB video. We propose a novel
two-stage analysis-by-synthesis optimization whose formulation and
implementation are designed for high performance. In the first stage, a skinned
template model is jointly fitted to background subtracted input video, 2D and
3D skeleton joint positions found using a deep neural network, and a set of
sparse facial landmark detections. In the second stage, dense non-rigid 3D
deformations of skin and even loose apparel are captured based on a novel
real-time capable algorithm for non-rigid tracking using dense photometric and
silhouette constraints. Our novel energy formulation leverages automatically
identified material regions on the template to model the differing non-rigid
deformation behavior of skin and apparel. The two resulting non-linear
optimization problems per-frame are solved with specially-tailored
data-parallel Gauss-Newton solvers. In order to achieve real-time performance
of over 25Hz, we design a pipelined parallel architecture using the CPU and two
commodity GPUs. Our method is the first real-time monocular approach for
full-body performance capture. Our method yields accuracy comparable to
off-line performance capture techniques, while being orders of magnitude
faster.
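For readers unfamiliar with the solver family mentioned above, a single damped Gauss-Newton update for a non-linear least-squares energy looks roughly as follows. This toy NumPy version ignores the data-parallel GPU implementation and the specific photometric and silhouette residuals of the paper; its function names are placeholders.

```python
import numpy as np

def gauss_newton_step(residual_fn, jacobian_fn, params, damping=1e-4):
    """One damped Gauss-Newton update for a non-linear least-squares
    energy of the form E(params) = ||r(params)||^2.

    residual_fn(params) -> (M,) stacked residuals (e.g. photometric and
                           silhouette terms in a hypothetical energy)
    jacobian_fn(params) -> (M, N) Jacobian of the residuals
    """
    r = residual_fn(params)
    J = jacobian_fn(params)
    H = J.T @ J + damping * np.eye(J.shape[1])   # damped normal equations
    g = J.T @ r
    return params - np.linalg.solve(H, g)
```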
Adversarial Semantic Scene Completion from a Single Depth Image
We propose a method to reconstruct, complete and semantically label a 3D
scene from a single input depth image. We improve the accuracy of the regressed
semantic 3D maps by a novel architecture based on adversarial learning. In
particular, we suggest using multiple adversarial loss terms that not only
enforce realistic outputs with respect to the ground truth, but also an
effective embedding of the internal features. This is done by correlating the
latent features of the encoder working on partial 2.5D data with the latent
features extracted from a variational 3D auto-encoder trained to reconstruct
the complete semantic scene. In addition, differently from other approaches
that operate entirely through 3D convolutions, at test time we retain the
original 2.5D structure of the input during downsampling to improve the
effectiveness of the internal representation of our model. We test our approach
on the main benchmark datasets for semantic scene completion to qualitatively
and quantitatively assess the effectiveness of our proposal.
Comment: 2018 International Conference on 3D Vision (3DV)
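The combination of losses this abstract describes (voxel-wise labels, adversarial terms, and correlation of latent features between the 2.5D encoder and a 3D auto-encoder of the complete scene) could be written along the following lines. This is a speculative PyTorch sketch with assumed tensor shapes, weights, and names, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def completion_losses(pred_logits, gt_labels, d_out_fake, z_partial, z_full,
                      w_adv=0.1, w_latent=1.0):
    """Hypothetical training losses for semantic scene completion:
    a per-voxel classification term, a non-saturating adversarial term
    on the discriminator's output for the generated scene, and a latent
    term pulling the 2.5D encoder's features towards those of a 3D
    auto-encoder trained on complete scenes.

    pred_logits: (B, C, D, H, W) voxel class logits
    gt_labels:   (B, D, H, W) ground-truth voxel labels
    """
    ce = F.cross_entropy(pred_logits, gt_labels)          # voxel labels
    adv = F.binary_cross_entropy_with_logits(
        d_out_fake, torch.ones_like(d_out_fake))          # fool the critic
    latent = F.mse_loss(z_partial, z_full.detach())       # feature embedding
    return ce + w_adv * adv + w_latent * latent
```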