Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing
Despite recent progress in developing animatable full-body avatars, realistic
modeling of clothing - one of the core aspects of human self-expression -
remains an open challenge. State-of-the-art physical simulation methods can
generate realistically behaving clothing geometry at interactive rates.
Modeling photorealistic appearance, however, usually requires physically-based
rendering which is too expensive for interactive applications. On the other
hand, data-driven deep appearance models are capable of efficiently producing
realistic appearance, but struggle at synthesizing geometry of highly dynamic
clothing and handling challenging body-clothing configurations. To this end, we
introduce pose-driven avatars with explicit modeling of clothing that exhibit
both photorealistic appearance learned from real-world data and realistic
clothing dynamics. The key idea is to introduce a neural clothing appearance
model that operates on top of explicit geometry: at training time we use
high-fidelity tracking, whereas at animation time we rely on physically
simulated geometry. Our core contribution is a physically-inspired appearance
network, capable of generating photorealistic appearance with view-dependent
and dynamic shadowing effects even for unseen body-clothing configurations. We
conduct a thorough evaluation of our model and demonstrate diverse animation
results on several subjects and different types of clothing. Unlike previous
work on photorealistic full-body avatars, our approach can produce much richer
dynamics and more realistic deformations even for many examples of loose
clothing. We also demonstrate that our formulation naturally allows clothing to
be used with avatars of different people while staying fully animatable, thus
enabling, for the first time, photorealistic avatars with novel clothing.

Comment: SIGGRAPH Asia 2022 (ACM ToG) camera ready. The supplementary video can be found on
https://research.facebook.com/publications/dressing-avatars-deep-photorealistic-appearance-for-physically-simulated-clothing
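To make the core idea concrete, below is a minimal PyTorch sketch of an appearance network that is conditioned on explicitly supplied clothing geometry, so the same model could consume tracked meshes at training time and physically simulated meshes at animation time. This is an illustrative assumption, not the authors' architecture; all module names, feature channels, and resolutions are hypothetical.

# Minimal sketch (not the authors' code): a neural appearance model conditioned
# on explicit clothing geometry rasterized into UV space, plus per-texel view
# direction, producing a view-dependent RGB texture.
import torch
import torch.nn as nn

class ClothingAppearanceNet(nn.Module):
    """Maps geometry features + view direction to a view-dependent RGB texture."""
    def __init__(self, geom_channels=6, view_channels=3, hidden=128, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(geom_channels + view_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, out_channels, 3, padding=1),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, geom_uv, view_uv):
        # geom_uv: (B, geom_channels, H, W) positions/normals in UV space
        # view_uv: (B, view_channels, H, W) per-texel view direction
        return self.net(torch.cat([geom_uv, view_uv], dim=1))

# At training time geom_uv would come from high-fidelity tracking; at animation
# time the same network is fed geometry produced by a cloth simulator.
model = ClothingAppearanceNet()
geom = torch.randn(1, 6, 256, 256)   # placeholder geometry features
view = torch.randn(1, 3, 256, 256)   # placeholder view directions
texture = model(geom, view)          # (1, 3, 256, 256)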
Learning garment manipulation policies toward robot-assisted dressing.
Assistive robots have the potential to support people with disabilities in a variety of activities of daily living, such as dressing. People who have completely lost their upper limb movement functionality may benefit from robot-assisted dressing, which involves complex deformable garment manipulation. Here, we report a dressing pipeline intended for these people and experimentally validate it on a medical training manikin. The pipeline is composed of the robot grasping a hospital gown hung on a rail, fully unfolding the gown, navigating around a bed, and lifting up the user's arms in sequence to finally dress the user. To automate this pipeline, we address two fundamental challenges: first, learning manipulation policies to bring the garment from an uncertain state into a configuration that facilitates robust dressing; second, transferring the deformable object manipulation policies learned in simulation to the real world to leverage cost-effective data generation. We tackle the first challenge by proposing an active pre-grasp manipulation approach that learns to isolate the garment grasping area before grasping. The approach combines prehensile and nonprehensile actions and thus alleviates the uncertainties of grasping-only behaviors. For the second challenge, we bridge the sim-to-real gap of deformable object policy transfer by tuning the simulator to approximate real-world garment physics. A contrastive neural network is introduced to compare pairs of real and simulated garment observations, measure their physical similarity, and account for simulator parameter inaccuracies. The proposed method enables a dual-arm robot to put back-opening hospital gowns onto a medical manikin with a success rate of more than 90%.
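As an illustration of the sim-to-real component, the following PyTorch sketch shows one way a contrastive (Siamese) encoder could be trained to score the physical similarity of real and simulated garment observations, which could then guide simulator parameter tuning. The encoder layout, input modality, and loss are assumptions, not the paper's implementation.

# Minimal sketch (hypothetical): a Siamese encoder with a contrastive loss that
# scores how physically similar a simulated garment observation is to a real one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityEncoder(nn.Module):
    def __init__(self, in_channels=1, embed_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=-1)

def contrastive_loss(z_real, z_sim, label, margin=1.0):
    # label = 1 when the real/simulated pair exhibits similar garment physics, 0 otherwise
    dist = (z_real - z_sim).norm(dim=-1)
    return (label * dist.pow(2) +
            (1 - label) * F.relu(margin - dist).pow(2)).mean()

encoder = SimilarityEncoder()
real = torch.randn(8, 1, 128, 128)   # e.g. depth images of the real garment (placeholder)
sim = torch.randn(8, 1, 128, 128)    # renders of the simulated garment (placeholder)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(real), encoder(sim), labels)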
PhysXNet: a customizable approach for learning cloth dynamics on dressed people
We introduce PhysXNet, a learning-based approach to predict the dynamics of deformable clothes given 3D skeleton motion sequences of humans wearing these clothes. The proposed model is adaptable to a large variety of garments and changing topologies, without needing to be retrained. Such simulations are typically carried out by physics engines that require manual human expertise and involve computationally intensive calculations. PhysXNet, by contrast, is a fully differentiable deep network that, at inference time, can estimate the geometry of dense cloth meshes in a matter of milliseconds, and thus can be readily deployed as a layer of a larger deep learning architecture. This efficiency is achieved thanks to the specific parameterization of the clothes we consider, based on 3D UV maps encoding spatial garment displacements. The problem is then formulated as a mapping from the human kinematics space (also represented by 3D UV maps of the undressed body mesh) to the clothing displacement UV maps, which we learn using a conditional GAN with a discriminator that enforces feasible deformations. We train our model simultaneously on three garment templates (tops, bottoms, and dresses), for which we simulate deformations under 50 different human actions. Nevertheless, the UV map representation we consider can encapsulate many different cloth topologies, and at test time we can simulate garments we did not specifically train for. A thorough evaluation demonstrates that PhysXNet delivers cloth deformations very close to those computed with the physical engine, opening the door to its effective integration within deep learning pipelines.
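The UV-map formulation can be illustrated with a small conditional-GAN sketch: a generator maps body-kinematics UV maps to garment-displacement UV maps, and a discriminator judges whether a (body, displacement) pair looks like a feasible deformation. The layer choices and channel counts below are placeholder assumptions rather than the released PhysXNet architecture.

# Minimal sketch (assumed architecture): conditional generator and pair
# discriminator operating on UV-space maps.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class DisplacementGenerator(nn.Module):
    def __init__(self, body_channels=3, disp_channels=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(body_channels, hidden),
            conv_block(hidden, hidden),
            nn.Conv2d(hidden, disp_channels, 3, padding=1),  # per-texel 3D displacement
        )

    def forward(self, body_uv):
        return self.net(body_uv)

class PairDiscriminator(nn.Module):
    def __init__(self, body_channels=3, disp_channels=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(body_channels + disp_channels, hidden),
            nn.Conv2d(hidden, 1, 3, padding=1),  # patch-level real/fake scores
        )

    def forward(self, body_uv, disp_uv):
        return self.net(torch.cat([body_uv, disp_uv], dim=1))

G, D = DisplacementGenerator(), PairDiscriminator()
body_uv = torch.randn(2, 3, 128, 128)   # undressed-body kinematics in UV space (placeholder)
fake_disp = G(body_uv)                  # predicted garment displacement UV map
score = D(body_uv, fake_disp)           # feasibility score used in the GAN loss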
PERGAMO: Personalized 3D Garments from Monocular Video
Clothing plays a fundamental role in digital humans. Current approaches to
animate 3D garments are mostly based on realistic physics simulation, however,
they typically suffer from two main issues: high computational run-time cost,
which hinders their development; and simulation-to-real gap, which impedes the
synthesis of specific real-world cloth samples. To circumvent both issues we
propose PERGAMO, a data-driven approach to learn a deformable model for 3D
garments from monocular images. To this end, we first introduce a novel method
to reconstruct the 3D geometry of garments from a single image, and use it to
build a dataset of clothing from monocular videos. We use these 3D
reconstructions to train a regression model that accurately predicts how the
garment deforms as a function of the underlying body pose. We show that our
method is capable of producing garment animations that match the real-world
behaviour, and generalizes to unseen body motions extracted from a motion capture dataset.

Comment: Published at Computer Graphics Forum (Proc. of ACM/SIGGRAPH SCA), 2022. Project website: http://mslab.es/projects/PERGAMO
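A minimal sketch of the pose-to-deformation regression described above might look as follows: an MLP maps a body-pose vector to per-vertex offsets of a template garment mesh, trained against 3D reconstructions recovered from video. The pose and vertex dimensions are placeholders, and this is not the published PERGAMO model.

# Minimal sketch (hypothetical regressor): body pose -> per-vertex garment offsets.
import torch
import torch.nn as nn

class GarmentDeformationRegressor(nn.Module):
    def __init__(self, pose_dim=72, num_vertices=5000, hidden=256):
        super().__init__()
        self.num_vertices = num_vertices
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_vertices * 3),
        )

    def forward(self, pose):
        # pose: (B, pose_dim) -> per-vertex 3D offsets added to the garment template
        return self.mlp(pose).view(-1, self.num_vertices, 3)

regressor = GarmentDeformationRegressor()
template = torch.randn(5000, 3)         # rest-pose garment vertices (placeholder)
pose = torch.randn(4, 72)               # e.g. SMPL-style pose parameters (assumption)
deformed = template + regressor(pose)   # (4, 5000, 3) posed garment geometry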
High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos
Much progress has been made in reconstructing garments from an image or a
video. However, none of the existing works meets the expectations of digitizing
high-quality animatable dynamic garments that can be adjusted to various unseen
poses. In this paper, we propose the first method to recover high-quality
animatable dynamic garments from monocular videos without depending on scanned
data. To generate reasonable deformations for various unseen poses, we propose
a learnable garment deformation network that formulates the garment
reconstruction task as a pose-driven deformation problem. To alleviate the
ambiguity of estimating 3D garments from monocular videos, we design a
multi-hypothesis deformation module that learns spatial representations of
multiple plausible deformations. Experimental results on several public
datasets demonstrate that our method can reconstruct high-quality dynamic
garments with coherent surface details, which can be easily animated under
unseen poses. The code will be provided for research purposes.
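One plausible reading of the multi-hypothesis deformation module is sketched below: the network predicts several candidate deformations per pose together with blending weights, so the ambiguity of monocular supervision can be spread across multiple plausible hypotheses. This is an assumption about the design, not the authors' code; all dimensions are placeholders.

# Minimal sketch (assumed design): K candidate deformations per pose plus
# softmax blending weights.
import torch
import torch.nn as nn

class MultiHypothesisDeformation(nn.Module):
    def __init__(self, pose_dim=72, num_vertices=5000, num_hypotheses=4, hidden=256):
        super().__init__()
        self.K, self.V = num_hypotheses, num_vertices
        self.backbone = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
        )
        self.offsets = nn.Linear(hidden, num_hypotheses * num_vertices * 3)
        self.weights = nn.Linear(hidden, num_hypotheses)

    def forward(self, pose):
        feat = self.backbone(pose)                              # (B, hidden)
        hyps = self.offsets(feat).view(-1, self.K, self.V, 3)   # K candidate deformations
        w = torch.softmax(self.weights(feat), dim=-1)           # (B, K) blending weights
        blended = (w[..., None, None] * hyps).sum(dim=1)        # weighted combination
        return hyps, blended

module = MultiHypothesisDeformation()
pose = torch.randn(2, 72)
hypotheses, deformation = module(pose)   # (2, 4, 5000, 3), (2, 5000, 3)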