Learning to Transfer Texture from Clothing Images to 3D Humans
In this paper, we present a simple yet effective method to automatically
transfer textures of clothing images (front and back) to 3D garments worn on
top of SMPL, in real time. We first automatically compute training pairs of images
with aligned 3D garments using a custom non-rigid 3D to 2D registration method,
which is accurate but slow. Using these pairs, we learn a mapping from pixels
to the 3D garment surface. Our idea is to learn dense correspondences from
garment image silhouettes to a 2D-UV map of a 3D garment surface using shape
information alone, completely ignoring texture, which allows us to generalize
to the wide range of web images. Several experiments demonstrate that our model
is more accurate than widely used baselines such as thin-plate-spline warping
and image-to-image translation networks while being orders of magnitude faster.
Our model opens the door for applications such as virtual try-on, and allows
for generation of 3D humans with varied textures which is necessary for
learning.
Comment: IEEE Conference on Computer Vision and Pattern Recognition
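As a rough illustration of the core idea, the sketch below regresses a per-pixel UV coordinate from a binary garment silhouette and supervises only the pixels inside the mask, mirroring the "shape information alone" setup. The architecture, names, and hyperparameters are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch: map a garment silhouette to dense UV correspondences.
# Everything here (layer sizes, loss) is an assumption for illustration.
import torch
import torch.nn as nn

class SilhouetteToUV(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
            nn.Sigmoid(),  # predicted (u, v) coordinates in [0, 1]^2
        )

    def forward(self, silhouette):  # silhouette: (B, 1, H, W), values in {0, 1}
        return self.decoder(self.encoder(silhouette))

def correspondence_loss(pred_uv, gt_uv, silhouette):
    # Supervise only pixels inside the garment silhouette; ground-truth UVs
    # would come from the paper's non-rigid 3D-to-2D registration step.
    mask = silhouette.expand_as(pred_uv)
    return (mask * (pred_uv - gt_uv).abs()).sum() / mask.sum().clamp(min=1)
```

Because the input is a binary mask with no texture, a model like this can in principle generalize across the wide appearance variation of web images, which is the point the abstract makes.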
ECON: Explicit Clothed humans Optimized via Normal integration
The combination of artist-curated scans and deep implicit functions (IF) is enabling the creation of detailed, clothed, 3D humans from images. However, existing methods are far from perfect. IF-based methods recover free-form geometry but produce disembodied limbs or degenerate shapes for unseen poses or clothes. To increase robustness in these cases, existing work uses an explicit parametric body model to constrain surface reconstruction, but this limits the recovery of free-form surfaces such as loose clothing that deviates from the body. What we want is a method that combines the best properties of implicit and explicit methods. To this end, we make two key observations: (1) current networks are better at inferring detailed 2D maps than full 3D surfaces, and (2) a parametric model can be seen as a “canvas” for stitching together detailed surface patches. Building on these observations, ECON infers high-fidelity 3D humans even in loose clothes and challenging poses, while having realistic faces and fingers. This goes beyond previous methods. Quantitative evaluation on the CAPE and Renderpeople datasets shows that ECON is more accurate than the state of the art. Perceptual studies also show that ECON’s perceived realism is better by a large margin.
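To make "normal integration" concrete, the sketch below is a generic least-squares integrator, not ECON's actual d-BiNI solver: it recovers a depth map whose finite differences match the surface slopes implied by a unit normal map.

```python
# Minimal sketch of normal integration: fit a depth map z(x, y) so that its
# gradients match the slopes implied by per-pixel unit normals (n_x, n_y, n_z),
# i.e. dz/dx = -n_x / n_z and dz/dy = -n_y / n_z. Illustrative, not ECON's solver.
import torch

def integrate_normals(normals, iters=500, lr=0.1):
    """normals: (3, H, W) unit normals in camera space, z pointing at the camera."""
    _, H, W = normals.shape
    nz = normals[2].clamp(min=1e-4)          # avoid division by zero
    p = -normals[0] / nz                     # target dz/dx
    q = -normals[1] / nz                     # target dz/dy
    depth = torch.zeros(H, W, requires_grad=True)
    opt = torch.optim.Adam([depth], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        dzdx = depth[:, 1:] - depth[:, :-1]  # forward differences in x
        dzdy = depth[1:, :] - depth[:-1, :]  # forward differences in y
        loss = ((dzdx - p[:, :-1]) ** 2).mean() + ((dzdy - q[:-1, :]) ** 2).mean()
        loss.backward()
        opt.step()
    return depth.detach()
```

This also motivates observation (1) in the abstract: the network only has to predict a detailed 2D normal map, and the conversion to 2.5D geometry is handled by an optimization like the one above.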
D-IF: Uncertainty-aware Human Digitization via Implicit Distribution Field
Realistic virtual humans play a crucial role in numerous industries, such as the metaverse, intelligent healthcare, and self-driving simulation, but creating them at scale with high realism remains a challenge. The use of deep implicit functions has sparked a new era of image-based 3D clothed human reconstruction, enabling pixel-aligned shape recovery with fine details. Subsequently, the vast majority of works locate the surface by regressing a deterministic implicit value for each point. However, should all points be treated equally regardless of their proximity to the surface? In this paper, we propose replacing the implicit value with an adaptive uncertainty distribution, to differentiate between points based on their distance to the surface. This simple “value to distribution” transition yields significant improvements on nearly all baselines. Furthermore, qualitative results demonstrate that models trained with our uncertainty distribution loss can capture more intricate wrinkles and realistic limbs. Code and models are available for research purposes at https://github.com/psyai-net/D-IF_release
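As a hedged illustration of the "value to distribution" idea (an assumed stand-in, not D-IF's exact formulation), the sketch below has the implicit network predict a per-point mean and variance and trains it with a Gaussian negative log-likelihood, so the predicted uncertainty can adapt to each point's distance from the surface.

```python
# Minimal sketch: an implicit field that outputs a distribution (mean, variance)
# per query point instead of a single deterministic occupancy value.
# Architecture and feature inputs are illustrative assumptions.
import torch
import torch.nn as nn

class UncertainImplicitField(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.mean_head = nn.Linear(128, 1)  # occupancy mean
        self.var_head = nn.Linear(128, 1)   # per-point uncertainty

    def forward(self, points, pixel_feats):
        # points: (B, N, 3); pixel_feats: (B, N, feat_dim), e.g. pixel-aligned features
        h = self.mlp(torch.cat([points, pixel_feats], dim=-1))
        mean = torch.sigmoid(self.mean_head(h))
        var = nn.functional.softplus(self.var_head(h)) + 1e-6  # keep variance positive
        return mean, var

nll = nn.GaussianNLLLoss()
# Training step (gt_occupancy: (B, N, 1) in {0, 1}):
#   mean, var = model(points, pixel_feats)
#   loss = nll(mean, gt_occupancy, var)
```

Under a loss like this, the network is free to report low variance for points far from the surface (where occupancy is unambiguous) and higher variance near it, which is one way to realize the adaptive treatment of points the abstract describes.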
- …