
    Simulations with Particle Method


    Learning an Intrinsic Garment Space for Interactive Authoring of Garment Animation

    Authoring dynamic garment shapes for character animation driven by body motion is one of the fundamental steps in the CG industry. Established workflows are either time- and labor-consuming (i.e., manual editing on dense frames with controllers) or lack keyframe-level control (i.e., physically-based simulation). Not surprisingly, garment authoring remains a bottleneck in many production pipelines. Instead, we present a deep-learning-based approach for semi-automatic authoring of garment animation, wherein the user provides the desired garment shape in a selection of keyframes, while our system infers a latent representation of its motion-independent intrinsic parameters (e.g., gravity, cloth materials, etc.). Given new character motions, the latent representation allows plausible garment animations to be generated automatically at interactive rates. Having factored out character motion, the learned intrinsic garment space enables smooth transitions between keyframes on a new motion sequence. Technically, we learn an intrinsic garment space with a motion-driven autoencoder network, where the encoder maps garment shapes to the intrinsic space under the condition of body motions, while the decoder acts as a differentiable simulator that generates garment shapes according to changes in character body motion and intrinsic parameters. We evaluate our approach qualitatively and quantitatively on common garment types. Experiments demonstrate that our system can significantly improve current garment authoring workflows via an interactive user interface. Compared with the standard CG pipeline, our system reduces the ratio of required keyframes from 20% to 1--2%.
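    The abstract describes an encoder conditioned on body motion and a decoder that acts as a differentiable simulator. Below is a minimal PyTorch sketch of that idea, assuming flattened per-frame garment vertex positions and a body-motion descriptor as inputs; the layer sizes, MLP structure, and dimensions are illustrative assumptions, not the paper's actual network.

        # Minimal sketch of a motion-conditioned garment autoencoder (assumed architecture).
        import torch
        import torch.nn as nn

        class MotionConditionedAE(nn.Module):
            def __init__(self, garment_dim, motion_dim, latent_dim=32, hidden=256):
                super().__init__()
                # Encoder: garment shape + body motion -> motion-independent intrinsic code.
                self.encoder = nn.Sequential(
                    nn.Linear(garment_dim + motion_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, latent_dim),
                )
                # Decoder: intrinsic code + body motion -> garment shape
                # (plays the role of a differentiable simulator once trained).
                self.decoder = nn.Sequential(
                    nn.Linear(latent_dim + motion_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, garment_dim),
                )

            def forward(self, garment, motion):
                z = self.encoder(torch.cat([garment, motion], dim=-1))
                recon = self.decoder(torch.cat([z, motion], dim=-1))
                return recon, z

        # Keyframe workflow sketch: encode a user-edited keyframe to an intrinsic code,
        # then decode that code under a new body motion to get a new garment shape.
        model = MotionConditionedAE(garment_dim=3000, motion_dim=72)
        key_shape = torch.randn(1, 3000)   # placeholder keyframe garment (flattened vertices)
        key_motion = torch.randn(1, 72)    # placeholder body pose/motion descriptor
        _, z = model(key_shape, key_motion)
        new_motion = torch.randn(1, 72)
        pred_shape = model.decoder(torch.cat([z, new_motion], dim=-1))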

    ClothCombo: Modeling Inter-Cloth Interaction for Draping Multi-Layered Clothes

    We present ClothCombo, a pipeline to drape arbitrary combinations of clothes on 3D human models with varying body shapes and poses. While existing learning-based approaches for draping clothes have shown promising results, multi-layered clothing remains challenging because it is non-trivial to model inter-cloth interaction. To this end, our method utilizes a GNN-based network to efficiently model the interaction between clothes in different layers, thus enabling multi-layered clothing. Specifically, we first create a feature embedding for each cloth using a topology-agnostic network. Then, the draping network deforms all clothes to fit the target body shape and pose without considering inter-cloth interaction. Lastly, the untangling network predicts per-vertex displacements that resolve interpenetration between clothes. In experiments, the proposed model demonstrates strong performance in complex multi-layered scenarios. Being agnostic to cloth topology, our method can be readily used for layered virtual try-on of real clothes in diverse poses and combinations of clothes.
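    The pipeline has three stages: per-garment embedding, body-conditioned draping, and an untangling pass between layers. The PyTorch sketch below mirrors that structure under simplifying assumptions (plain MLPs, mean-pooled embeddings, and a naive inner-to-outer untangling order); the actual paper uses a GNN for inter-cloth interaction, which is not reproduced here.

        # Illustrative three-stage layered-draping pipeline (assumed, simplified modules).
        import torch
        import torch.nn as nn

        class GarmentEmbed(nn.Module):
            """Topology-agnostic embedding: pool per-vertex features into one code."""
            def __init__(self, in_dim=3, code_dim=64):
                super().__init__()
                self.mlp = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                         nn.Linear(128, code_dim))
            def forward(self, verts):                  # verts: (V, 3)
                return self.mlp(verts).mean(dim=0)     # (code_dim,)

        class Drape(nn.Module):
            """Deform each garment to the body pose, ignoring other layers."""
            def __init__(self, code_dim=64, pose_dim=72):
                super().__init__()
                self.mlp = nn.Sequential(nn.Linear(3 + code_dim + pose_dim, 128), nn.ReLU(),
                                         nn.Linear(128, 3))
            def forward(self, verts, code, pose):
                feat = torch.cat([verts,
                                  code.expand(verts.shape[0], -1),
                                  pose.expand(verts.shape[0], -1)], dim=-1)
                return verts + self.mlp(feat)          # per-vertex offset

        class Untangle(nn.Module):
            """Predict displacements that push an outer layer off the layer beneath it."""
            def __init__(self, code_dim=64):
                super().__init__()
                self.mlp = nn.Sequential(nn.Linear(3 + code_dim, 128), nn.ReLU(),
                                         nn.Linear(128, 3))
            def forward(self, outer_verts, inner_code):
                feat = torch.cat([outer_verts,
                                  inner_code.expand(outer_verts.shape[0], -1)], dim=-1)
                return outer_verts + self.mlp(feat)

        embed, drape, untangle = GarmentEmbed(), Drape(), Untangle()
        pose = torch.randn(1, 72)                              # placeholder body pose
        layers = [torch.randn(500, 3), torch.randn(700, 3)]    # garments, inner to outer
        codes = [embed(v) for v in layers]
        draped = [drape(v, c, pose) for v, c in zip(layers, codes)]
        for i in range(1, len(draped)):                        # resolve layers from inside out
            draped[i] = untangle(draped[i], embed(draped[i - 1]))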

    Fast GPU-Based Two-Way Continuous Collision Handling

    Step-and-project is a popular way to simulate non-penetrating deformable bodies in physically-based animation: first integrating the system in time regardless of contacts and then resolving potential intersections strikes a good practical balance between plausibility and efficiency. However, existing methods can become unreliable when the time step is large, risking failures or requiring repeated collision testing and resolution that severely degrades performance. In this paper, we propose a novel two-way method for fast and reliable continuous collision handling. Our method launches the optimization from both ends, the intermediate time-integrated state and the previous intersection-free state, progressively generating a piecewise-linear path and finally reaching a feasible solution for the next time step. Technically, our method interleaves a forward step and a backward step at low cost until the result is conditionally converged. Thanks to a set of unified volume-based contact constraints, our method can flexibly and reliably handle a variety of codimensional deformable bodies, including volumetric bodies, cloth, hair and sand. Experiments show that our method is safe, robust, physically faithful and numerically efficient, and especially suitable for large deformations or large time steps.
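    The forward/backward interleaving can be sketched as alternating a step toward the unconstrained time-integrated target with a projection back toward an intersection-free state, tracing a piecewise-linear path until a feasible point is reached. The NumPy sketch below is schematic only: the intersection test and projection are placeholders (a ground-plane clamp), standing in for the paper's continuous collision detection and volume-based contact constraints.

        # Schematic two-way step-and-project loop (placeholder collision handling).
        import numpy as np

        def has_intersection(x):
            # Placeholder: a real implementation runs continuous collision detection.
            return bool(np.any(x[:, 1] < 0.0))        # treat y < 0 as penetration

        def project_feasible(x):
            # Placeholder projection onto the feasible set (clamp to the ground plane).
            x = x.copy()
            x[:, 1] = np.maximum(x[:, 1], 0.0)
            return x

        def two_way_solve(x_safe, x_target, alpha=0.5, max_iters=50, tol=1e-6):
            """x_safe: previous intersection-free state; x_target: unconstrained time-integrated update."""
            x = x_safe.copy()
            for _ in range(max_iters):
                x_fwd = x + alpha * (x_target - x)                          # forward step toward target
                x_new = project_feasible(x_fwd) if has_intersection(x_fwd) else x_fwd  # backward step
                if np.linalg.norm(x_new - x) < tol:                         # settled on a feasible point
                    return x_new
                x = x_new
            return x

        verts_prev = np.array([[0.0, 0.5, 0.0], [1.0, 0.2, 0.0]])       # intersection-free state
        verts_candidate = np.array([[0.0, -0.3, 0.0], [1.0, 0.1, 0.0]]) # after unconstrained time integration
        print(two_way_solve(verts_prev, verts_candidate))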