Learning to Dress 3D People in Generative Clothing
Three-dimensional human body models are widely used in the analysis of human
pose and motion. Existing models, however, are learned from minimally-clothed
3D scans and thus do not generalize to the complexity of dressed people in
common images and videos. Additionally, current models lack the expressive
power needed to represent the complex non-linear geometry of pose-dependent
clothing shapes. To address this, we learn a generative 3D mesh model of
clothed people from 3D scans with varying pose and clothing. Specifically, we
train a conditional Mesh-VAE-GAN to learn the clothing deformation from the
SMPL body model, making clothing an additional term in SMPL. Our model is
conditioned on both pose and clothing type, giving the ability to draw samples
of clothing to dress different body shapes in a variety of styles and poses. To
preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to
3D meshes. Our model, named CAPE, represents global shape and fine local
structure, effectively extending the SMPL body model to clothing. To our
knowledge, this is the first generative model that directly dresses 3D human
body meshes and generalizes to different poses. The model, code and data are
available for research purposes at https://cape.is.tue.mpg.de.
Comment: CVPR 2020 camera-ready. Code and data are available at https://cape.is.tue.mpg.de
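To make the additive formulation concrete: the clothed mesh can be read as the minimally-clothed SMPL surface plus a sampled per-vertex clothing offset. Below is a minimal Python/PyTorch sketch of that idea; the decoder architecture, dimensions, and names are illustrative assumptions, not CAPE's actual Mesh-VAE-GAN (which uses mesh convolutions and patchwise mesh discriminators).

import torch
import torch.nn as nn

NUM_VERTS = 6890    # SMPL vertex count
POSE_DIM = 72       # SMPL axis-angle pose parameters
NUM_TYPES = 4       # number of clothing types (illustrative)
LATENT_DIM = 64     # clothing latent code size (illustrative)

class ClothingDecoder(nn.Module):
    # Maps (latent code, pose, clothing type) to per-vertex displacements.
    # A plain MLP stands in here for CAPE's mesh-convolutional decoder.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + POSE_DIM + NUM_TYPES, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, NUM_VERTS * 3),
        )

    def forward(self, z, pose, clothing_onehot):
        x = torch.cat([z, pose, clothing_onehot], dim=-1)
        return self.net(x).view(-1, NUM_VERTS, 3)

def dress(smpl_verts, decoder, pose, clothing_onehot):
    # Clothed mesh = unclothed SMPL mesh + sampled clothing displacement term.
    z = torch.randn(smpl_verts.shape[0], LATENT_DIM)
    return smpl_verts + decoder(z, pose, clothing_onehot)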
Deep Detail Enhancement for Any Garment
Creating fine garment details requires significant effort and huge
computational resources. In contrast, a coarse shape may be easy to acquire in
many scenarios (e.g., via low-resolution physically-based simulation, linear
blend skinning driven by skeletal motion, portable scanners). In this paper, we
show how to enhance, in a data-driven manner, rich yet plausible details
starting from a coarse garment geometry. Once the parameterization of the
garment is given, we formulate the task as a style transfer problem over the
space of associated normal maps. In order to facilitate generalization across
garment types and character motions, we introduce a patch-based formulation
that produces high-resolution details by minimizing a Gram-matrix-based style
loss, hallucinating geometric details (i.e., wrinkle density and shape). We
extensively evaluate our method on a variety of production scenarios and show
that our method is simple, light-weight, efficient, and generalizes across
underlying garment types, sewing patterns, and body motion.
Comment: 12 pages
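The Gram-matrix style loss referenced above is the standard second-order feature statistic from neural style transfer; a minimal PyTorch sketch follows, assuming patch features come from some fixed feature extractor (the paper's exact network and loss weighting are not reproduced here).

import torch
import torch.nn.functional as F

def gram_matrix(feats):
    # feats: (B, C, H, W) features of a normal-map patch from a fixed extractor.
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)   # (B, C, C) feature correlations

def style_loss(coarse_feats, reference_feats):
    # Matching Gram matrices transfers wrinkle "style" (density and shape)
    # from a detailed reference patch onto the coarse geometry's normal map.
    return F.mse_loss(gram_matrix(coarse_feats), gram_matrix(reference_feats))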
Motion Guided Deep Dynamic 3D Garments
Realistic dynamic garments on animated characters have many AR/VR
applications. While authoring such dynamic garment geometry is still a
challenging task, data-driven simulation provides an attractive alternative,
especially if it can be controlled simply using the motion of the underlying
character. In this work, we focus on motion guided dynamic 3D garments,
especially for loose garments. In a data-driven setup, we first learn a
generative space of plausible garment geometries. Then, we learn a mapping to
this space to capture the motion dependent dynamic deformations, conditioned on
the previous state of the garment as well as its relative position with respect
to the underlying body. Technically, we model garment dynamics, driven by
the input character motion, by predicting per-frame local displacements in a
canonical state of the garment that is enriched with frame-dependent skinning
weights to bring the garment to the global space. We resolve any remaining
per-frame collisions by predicting residual local displacements. The resultant
garment geometry is used as history to enable iterative rollout prediction. We
demonstrate plausible generalization to unseen body shapes and motion inputs,
and show improvements over multiple state-of-the-art alternatives.
Comment: 11 pages
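A hypothetical sketch of the described per-frame pipeline, written as a plain Python rollout loop; deform_net, collision_net, and skinning are placeholder callables standing in for the paper's learned modules:

def rollout(canonical_garment, body_motion, deform_net, collision_net, skinning):
    # canonical_garment: garment vertices in the canonical state
    # body_motion: sequence of per-frame poses of the underlying character
    history = canonical_garment
    frames = []
    for body_pose in body_motion:
        # Predict motion-dependent displacements in canonical space plus
        # frame-dependent skinning weights, conditioned on the previous
        # garment state and its position relative to the body.
        local_disp, skin_weights = deform_net(history, body_pose)
        posed = skinning(canonical_garment + local_disp, skin_weights, body_pose)
        # Resolve remaining body collisions via residual local displacements.
        posed = posed + collision_net(posed, body_pose)
        frames.append(posed)
        history = posed   # feed the result back for iterative rollout
    return frames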
PERGAMO: Personalized 3D Garments from Monocular Video
Clothing plays a fundamental role in digital humans. Current approaches to
animate 3D garments are mostly based on realistic physics simulation; however,
they typically suffer from two main issues: high computational run-time cost,
which hinders their development; and simulation-to-real gap, which impedes the
synthesis of specific real-world cloth samples. To circumvent both issues we
propose PERGAMO, a data-driven approach to learn a deformable model for 3D
garments from monocular images. To this end, we first introduce a novel method
to reconstruct the 3D geometry of garments from a single image, and use it to
build a dataset of clothing from monocular videos. We use these 3D
reconstructions to train a regression model that accurately predicts how the
garment deforms as a function of the underlying body pose. We show that our
method is capable of producing garment animations that match the real-world
behaviour, and generalizes to unseen body motions extracted from motion capture
datasets.
Comment: Published at Computer Graphics Forum (Proc. of ACM/SIGGRAPH SCA), 2022. Project website: http://mslab.es/projects/PERGAMO
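A minimal sketch of the second stage, assuming the monocular reconstructions are already available as per-frame (pose, garment mesh) pairs; regressor is any network mapping pose parameters to per-vertex garment positions, and the plain MSE objective is an assumption:

import torch

def train_epoch(regressor, optimizer, poses, recon_meshes):
    # poses: (T, POSE_DIM) body poses; recon_meshes: (T, V, 3) garment meshes
    # reconstructed from monocular video, used as regression targets.
    for pose, target in zip(poses, recon_meshes):
        pred = regressor(pose.unsqueeze(0)).squeeze(0)   # (V, 3) prediction
        loss = torch.mean((pred - target) ** 2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()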
Tex2Shape: Detailed Full Human Body Geometry From a Single Image
We present a simple yet effective method to infer detailed full human body
shape from only a single photograph. Our model can infer full-body shape
including face, hair, and clothing including wrinkles at interactive
frame-rates. Results feature details even on parts that are occluded in the
input image. Our main idea is to turn shape regression into an aligned
image-to-image translation problem. The input to our method is a partial
texture map of the visible region obtained from off-the-shelf methods. From a
partial texture, we estimate detailed normal and vector displacement maps,
which can be applied to a low-resolution smooth body model to add detail and
clothing. Despite being trained purely with synthetic data, our model
generalizes well to real-world photographs. Numerous results demonstrate the
versatility and robustness of our method.
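The last step, applying a predicted vector displacement map to the low-resolution smooth body, amounts to a per-vertex UV lookup. A small PyTorch sketch follows, with the translation network itself omitted (any pix2pix-style generator could fill that role; the exact architecture here is an assumption):

import torch
import torch.nn.functional as F

def apply_displacement_map(smooth_verts, uv_coords, disp_map):
    # smooth_verts: (V, 3) low-resolution smooth body vertices
    # uv_coords:    (V, 2) per-vertex UV coordinates in [0, 1]
    # disp_map:     (3, H, W) vector displacement map predicted from the texture
    grid = uv_coords[None, None] * 2 - 1   # (1, 1, V, 2), remapped to [-1, 1]
    disp = F.grid_sample(disp_map[None], grid, align_corners=True)  # (1, 3, 1, V)
    return smooth_verts + disp[0, :, 0].T  # (V, 3) detailed, clothed vertices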