7 research outputs found

    ClothCombo: Modeling Inter-Cloth Interaction for Draping Multi-Layered Clothes

    We present ClothCombo, a pipeline to drape arbitrary combinations of clothes on 3D human models with varying body shapes and poses. While existing learning-based approaches for draping clothes have shown promising results, multi-layered clothing remains challenging because inter-cloth interaction is non-trivial to model. To this end, our method uses a GNN-based network to efficiently model the interaction between clothes in different layers, thus enabling multi-layered clothing. Specifically, we first create a feature embedding for each cloth using a topology-agnostic network. Then, the draping network deforms all clothes to fit the target body shape and pose without considering inter-cloth interaction. Lastly, the untangling network predicts per-vertex displacements that resolve interpenetration between clothes. In experiments, the proposed model demonstrates strong performance in complex multi-layered scenarios. Being agnostic to cloth topology, our method can readily be used for layered virtual try-on of real clothes in diverse poses and combinations of clothes.
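
    The abstract outlines a three-stage pipeline: embed each cloth, drape all clothes independently, then untangle the layers. Below is a minimal PyTorch sketch of that structure; all module names, feature sizes, and the way inter-cloth context is pooled are illustrative assumptions, not the authors' implementation (the paper uses a GNN over the layer graph, approximated here by a single conditioning feature).

        import torch
        import torch.nn as nn

        class ClothEncoder(nn.Module):
            """Stage 1: topology-agnostic per-cloth embedding (PointNet-style pooling assumed)."""
            def __init__(self, feat_dim=128):
                super().__init__()
                self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))

            def forward(self, verts):                      # verts: (V, 3)
                return self.mlp(verts).max(dim=0).values   # global cloth feature: (feat_dim,)

        class DrapingNet(nn.Module):
            """Stage 2: fit one cloth to the target body, ignoring the other layers."""
            def __init__(self, feat_dim=128, body_dim=16):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(3 + feat_dim + body_dim, 128), nn.ReLU(), nn.Linear(128, 3))

            def forward(self, verts, cloth_feat, body_code):
                cond = torch.cat([cloth_feat, body_code]).expand(verts.shape[0], -1)
                return verts + self.mlp(torch.cat([verts, cond], dim=-1))

        class UntanglingNet(nn.Module):
            """Stage 3: per-vertex displacements that resolve interpenetration,
            conditioned on features of the other layers."""
            def __init__(self, feat_dim=128):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(3 + feat_dim, 128), nn.ReLU(), nn.Linear(128, 3))

            def forward(self, verts, other_layers_feat):
                cond = other_layers_feat.expand(verts.shape[0], -1)
                return verts + self.mlp(torch.cat([verts, cond], dim=-1))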

    Deep Detail Enhancement for Any Garment

    Creating fine garment details requires significant effort and computational resources. In contrast, a coarse shape may be easy to acquire in many scenarios (e.g., via low-resolution physically-based simulation, linear blend skinning driven by skeletal motion, or portable scanners). In this paper, we show how to enrich a coarse garment geometry with rich yet plausible details in a data-driven manner. Given the parameterization of the garment, we formulate the task as a style transfer problem over the space of associated normal maps. To facilitate generalization across garment types and character motions, we introduce a patch-based formulation that hallucinates high-resolution geometric details (i.e., wrinkle density and shape) by matching a Gram-matrix-based style loss. We extensively evaluate our method on a variety of production scenarios and show that it is simple, lightweight, efficient, and generalizes across underlying garment types, sewing patterns, and body motions.
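
    The central quantity here, the Gram matrix of patch features, is standard and easy to make concrete. The sketch below computes a Gram-matrix style loss between two normal-map patches; using raw patch channels as the "features" is a simplifying assumption (the paper matches feature statistics of the patches), so treat this as the shape of the loss rather than the exact one.

        import torch

        def gram_matrix(feat):
            # feat: (C, H, W) feature map of a normal-map patch
            c, h, w = feat.shape
            f = feat.reshape(c, h * w)
            return (f @ f.t()) / (c * h * w)   # (C, C) normalized channel correlations

        def style_loss(pred_feat, target_feat):
            # Matching Gram matrices matches second-order statistics of the
            # wrinkles (density, orientation) rather than their exact placement.
            g_pred, g_tgt = gram_matrix(pred_feat), gram_matrix(target_feat)
            return ((g_pred - g_tgt) ** 2).sum()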

    Motion Guided Deep Dynamic 3D Garments

    Realistic dynamic garments on animated characters have many AR/VR applications. While authoring such dynamic garment geometry remains a challenging task, data-driven simulation provides an attractive alternative, especially if it can be controlled simply by the motion of the underlying character. In this work, we focus on motion-guided dynamic 3D garments, especially loose ones. In a data-driven setup, we first learn a generative space of plausible garment geometries. Then, we learn a mapping into this space that captures motion-dependent dynamic deformations, conditioned on the previous state of the garment as well as its relative position with respect to the underlying body. Technically, we model garment dynamics, driven by the input character motion, by predicting per-frame local displacements in a canonical state of the garment that is enriched with frame-dependent skinning weights to bring the garment into global space. We resolve any remaining per-frame collisions by predicting residual local displacements. The resulting garment geometry is used as history to enable iterative rollout prediction. We demonstrate plausible generalization to unseen body shapes and motion inputs, and show improvements over multiple state-of-the-art alternatives.
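
    The prediction loop described above (canonical displacements, frame-dependent skinning, residual collision correction, feedback as history) can be summarized schematically. In this sketch every learned component (deform_net, skin_weight_net, collision_net) and the skinning routine are assumed stand-ins supplied by the caller; none of the names come from the paper.

        import torch

        def rollout(body_motion, garment_rest, deform_net, skin_weight_net,
                    collision_net, skinning, history):
            frames = []
            for pose in body_motion:                   # one body pose per frame
                # 1. Motion-dependent local displacements in the canonical state,
                #    conditioned on the previous garment state (history).
                canon = garment_rest + deform_net(garment_rest, pose, history)
                # 2. Frame-dependent skinning weights bring the garment to world space.
                weights = skin_weight_net(canon, pose)
                world = skinning(canon, weights, pose)
                # 3. Residual per-vertex displacements resolve remaining collisions.
                world = world + collision_net(world, pose)
                history = world                        # feed back for iterative rollout
                frames.append(world)
            return torch.stack(frames)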

    Unilaterally Incompressible Skinning

    Skinning was initially devised for computing the skin of a character deformed through a skeleton, but it is now also commonly used for deforming tight garments at very low cost. However, unlike skin, which easily compresses and stretches, tight cloth strongly resists compression: inside bending regions such as the interior of an elbow, cloth does not shrink but instead buckles, causing interesting folds and wrinkles that are completely missed by skinning methods. Our goal is to extend traditional skinning to capture such folding patterns automatically, without sacrificing efficiency. The key to our model is to replace the usual skinning formula (derived from, e.g., Linear Blend Skinning or Dual Quaternions) with a complementarity constraint, switching automatically between classical skinning in zones prone to stretching and a quasi-isometric scheme in zones prone to compression. Moreover, our method provides the user with handles for directing the type of folds created, such as fold density or the overall shape of a given fold. Our results show that our method can generate folds of similar complexity to full cloth simulation, while retaining the interactivity of skinning approaches and offering intuitive user control.
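
    The unilateral behavior is easiest to see on a single edge. In the toy NumPy example below, an edge follows the skinned positions while it is stretched, but once the skinned length drops below rest length the midpoint is lifted along the surface normal so total length is preserved, i.e., the edge buckles instead of compressing. The buckling direction and the midpoint parameterization are simplifications for illustration; they are not the paper's constraint formulation.

        import numpy as np

        def buckled_midpoint(p0, p1, rest_len, normal):
            """Midpoint of an edge under the unilateral rule: untouched while the
            skinned edge is stretched, lifted along `normal` once it compresses,
            so the polyline p0-mid-p1 keeps its rest length (quasi-isometry)."""
            cur_len = np.linalg.norm(p1 - p0)
            mid = 0.5 * (p0 + p1)
            if cur_len >= rest_len:        # stretching zone: classical skinning wins
                return mid
            # Compression zone: lift so each half-edge has length rest_len / 2.
            lift = np.sqrt((rest_len / 2.0) ** 2 - (cur_len / 2.0) ** 2)
            return mid + lift * np.asarray(normal)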