1,440 research outputs found

    PBNS: physically based neural simulation for unsupervised garment pose space deformation

    Full text link
    We present a methodology to automatically obtain a Pose Space Deformation (PSD) basis for rigged garments through deep learning. Classical approaches rely on Physically Based Simulation (PBS) to animate clothes. These are general solutions that, given a sufficiently fine-grained discretization of space and time, can achieve highly realistic results. However, they are computationally expensive, and any scene modification requires re-simulation. Linear Blend Skinning (LBS) with PSD offers a lightweight alternative to PBS, though it needs huge volumes of data to learn a proper PSD. We propose using deep learning, formulated as an implicit PBS, to learn realistic cloth Pose Space Deformations without supervision in a constrained scenario: dressed humans. Furthermore, we show it is possible to train these models in an amount of time comparable to a PBS of a few sequences. To the best of our knowledge, we are the first to propose a neural simulator for cloth. While learning-based approaches in this domain are becoming a trend, they are data-hungry models. Moreover, authors often propose complex formulations to better learn wrinkles from PBS data. Supervised learning leads to physically inconsistent predictions that require collision solving before use. Also, the dependency on PBS data limits the scalability of these solutions, while their formulations hinder applicability and compatibility. By proposing an unsupervised methodology to learn PSD for LBS models (the 3D animation standard), we overcome both of these drawbacks. The results show cloth consistency in the animated garments and meaningful pose-dependent folds and wrinkles. Our solution is extremely efficient, handles multiple layers of cloth, allows unsupervised outfit resizing, and can be easily applied to any custom 3D avatar.
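The LBS-plus-PSD pipeline the abstract builds on can be sketched in a few lines: apply a pose-dependent displacement to the rest-pose garment, then skin it with blended bone transforms. This is a minimal numpy illustration of generic LBS with a PSD correction, not PBNS itself; the function name, array shapes, and the idea of `psd_delta` coming from a network are all assumptions for illustration.

```python
import numpy as np

def lbs_with_psd(rest_verts, weights, bone_transforms, psd_delta):
    """Linear Blend Skinning with a Pose Space Deformation correction.

    rest_verts:      (V, 3) garment vertices in the rest pose
    weights:         (V, B) skinning weights (rows sum to 1)
    bone_transforms: (B, 4, 4) homogeneous bone transforms
    psd_delta:       (V, 3) pose-dependent displacement (e.g. a network output)
    """
    corrected = rest_verts + psd_delta                       # PSD applied in rest pose
    hom = np.concatenate([corrected, np.ones((len(corrected), 1))], axis=1)  # (V, 4)
    blended = np.einsum("vb,bij->vij", weights, bone_transforms)  # per-vertex blend
    posed = np.einsum("vij,vj->vi", blended, hom)[:, :3]     # apply and drop w
    return posed
```

With identity bone transforms this reduces to `rest_verts + psd_delta`, which makes the role of the correction easy to verify in isolation.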

    ๋Œ€์นญ์ ์ธ ์˜์ƒ์˜ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ๊ฐ€์†์„ ์œ„ํ•œ ํŒจํ„ด ๋ฏธ๋Ÿฌ๋ง ์•Œ๊ณ ๋ฆฌ์ฆ˜

    Get PDF
    Master's thesis (M.S.), Seoul National University, Department of Electrical and Computer Engineering, 2019. Advisor: Hyeong-Seok Ko.
    This thesis describes a pattern mirroring algorithm that reduces the simulation time of cloth-body simulation. The method is applicable when the garment panels and the body mesh are symmetric about the YZ plane, which is common for garments such as men's suits and ready-made clothing. Conventional simulators apply the conjugate gradient method to every vertex of the cloth mesh to solve the system matrix, so simulation time grows steeply as the vertex count increases for higher resolutions. With pattern mirroring, the system of equations to be solved is half the size, so the simulation time can be expected to drop as well; in the experiments, the algorithm achieves a speed-up of up to 1.4x (37%). Chapter 1 introduces the simulation process: the time integration method, solving the system equation, and collision handling; the iterative conjugate gradient method is used to determine the velocities of the cloth vertices. Chapter 2 reviews previous work on accelerating cloth simulation. Chapter 3 presents the pattern mirroring algorithm. Chapter 4 describes artifacts that can occur when the method is used and the post-processing steps that resolve them. Chapter 5 tabulates the average simulation time of the conventional and pattern mirroring methods and compares the resulting images. Finally, Chapter 6 gives conclusions and the limitations of the pattern mirroring algorithm.
    Contents: Abstract; List of Figures; List of Tables; 1 Introduction (1.1 Time integration method; 1.2 System matrix; 1.3 Conjugate gradient method; 1.4 Collision handling method; 1.5 Overview of the pattern mirroring algorithm); 2 Previous Work; 3 Pattern Mirroring Method (3.1 Step 1: set the constraint plane and halve the mesh; 3.2 Step 2: simulate the half panel; 3.3 Step 3: mirror the half mesh); 4 Artifacts Handling (4.1 Project crossed vertices at the halving step; 4.2 Penetration between the original and mirrored meshes); 5 Experiment Result (5.1 T-shirt; 5.2 Jacket); 6 Conclusion; Bibliography; Abstract (Korean); Acknowledgements.
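The mirroring step of the thesis (step 3 above) is just a reflection of the simulated half mesh across the YZ plane, with seam vertices shared rather than duplicated. The sketch below is a minimal numpy illustration of that idea under my own assumptions; the function name, the `seam_mask` convention, and the x >= 0 half are illustrative, not the thesis's implementation.

```python
import numpy as np

def mirror_half_mesh(half_verts, seam_mask):
    """Reflect a simulated half mesh across the YZ plane (x -> -x).

    half_verts: (V, 3) positions of the simulated half (assumed x >= 0)
    seam_mask:  (V,) bool, True for vertices on the symmetry plane,
                which belong to both halves and must not be duplicated
    """
    mirrored = half_verts.copy()
    mirrored[:, 0] *= -1.0                       # reflect the x coordinate
    # seam vertices already lie on the plane; append only the off-seam mirrors
    full = np.concatenate([half_verts, mirrored[~seam_mask]], axis=0)
    return full
```

Because only half the vertices enter the linear solve, the system matrix shrinks by half, which is where the reported speed-up comes from.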

    Real-time simulation and visualisation of cloth using edge-based adaptive meshes

    Get PDF
    Real-time rendering and animation of realistic virtual environments and characters have progressed at a great pace, following advances in computer graphics hardware in the last decade. The role of cloth simulation is becoming ever more important in the quest to improve the realism of virtual environments. The real-time simulation of cloth and clothing is important for many applications such as virtual reality, crowd simulation, games, and software for online clothes shopping. A large number of polygons is necessary to depict the highly flexible nature of cloth, with its wrinkling and frequent changes in curvature. In combination with the physical calculations that model the deformations, simulating cloth in detail is very computationally expensive, making realistic simulation at interactive frame rates difficult. Real-time cloth simulations can lack quality and realism compared to their offline counterparts, since coarse meshes must often be employed for performance reasons. The focus of this thesis is to develop techniques that allow the real-time simulation of realistic cloth and clothing. Adaptive meshes have previously been developed to act as a bridge between low- and high-polygon meshes, aiming to adaptively exploit variations in the shape of the cloth. The mesh complexity is dynamically increased or refined to balance quality against computational cost during a simulation. A limitation of many approaches is that they often do not consider the decimation or coarsening of previously refined areas, or otherwise are not fast enough for real-time applications. A novel edge-based adaptive mesh is developed for the fast incremental refinement and coarsening of a triangular mesh. A mass-spring network is integrated into the mesh, permitting the real-time adaptive simulation of cloth, and techniques are developed for the simulation of clothing on an animated character.
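The mass-spring network mentioned in this abstract is the standard workhorse of real-time cloth: each mesh edge acts as a Hooke spring pulling its endpoints toward the rest length. As a reference point, here is a minimal numpy sketch of the per-edge force computation; the function name and array layout are my own assumptions, not the thesis's code.

```python
import numpy as np

def spring_forces(pos, edges, rest_len, k):
    """Hooke spring forces for a mass-spring cloth network.

    pos:      (V, 3) vertex positions
    edges:    (E, 2) int vertex-index pairs (the springs)
    rest_len: (E,) rest lengths
    k:        scalar stiffness
    """
    f = np.zeros_like(pos)
    d = pos[edges[:, 1]] - pos[edges[:, 0]]            # (E, 3) edge vectors
    length = np.linalg.norm(d, axis=1)
    dirn = d / np.maximum(length, 1e-12)[:, None]      # unit directions
    mag = k * (length - rest_len)                      # > 0 when stretched
    np.add.at(f, edges[:, 0],  mag[:, None] * dirn)    # pull endpoints together
    np.add.at(f, edges[:, 1], -mag[:, None] * dirn)
    return f
```

An adaptive mesh changes `edges` and `rest_len` on the fly as edges are split or collapsed, which is why fast incremental refinement and coarsening matter for keeping this loop real-time.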

    RECREATING AND SIMULATING DIGITAL COSTUMES FROM A STAGE PRODUCTION OF MEDEA

    Get PDF
    This thesis investigates a technique to effectively construct and simulate costumes from a stage production of Medea in a dynamic cloth simulation application such as Maya's nDynamics. This was done using data collected from real-world fabric tests and costume construction in the theatre's costume studio. Fabric tests were conducted and recorded, testing the costume fabrics for drape and behavior with two collision objects. These tests were recreated digitally in Maya, and appropriate parameters for the digital fabric were derived by comparison with the original reference. Basic mannequin models were created from the actors' measurements and skeleton-rigged to enable animation. The costumes were then modeled and constrained according to the construction process observed in the costume studio, to achieve the same style and stitching as the real costumes. Scenes selected and recorded from Medea were used as reference to animate the actors' models. The costumes were assigned the parameters derived from the fabric tests to produce the simulations. Finally, the scenes were lit and rendered to obtain the final videos, which were compared with the original recordings to ascertain the accuracy of the simulation. By obtaining and refining simulation parameters from simple fabric collision tests, and by modeling the digital costumes following the procedures derived from real-life costume construction, realistic costume simulation was achieved.

    ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns

    Full text link
    Many approaches to draping individual garments on human body models are realistic, fast, and yield outputs that are differentiable with respect to the body shape on which they are draped. However, they are either unable to handle multi-layered clothing, which is prevalent in everyday dress, or are restricted to bodies in T-pose. In this paper, we introduce a parametric garment representation model that addresses these limitations. As in the models used by clothing designers, each garment consists of individual 2D panels. Their 2D shape is defined by a Signed Distance Function and their 3D shape by a 2D-to-3D mapping. The 2D parameterization enables easy detection of potential collisions, and the 3D parameterization handles complex shapes effectively. We show that this combination is faster and yields higher-quality reconstructions than purely implicit surface representations, and that its differentiability makes the recovery of layered garments from images possible. Furthermore, it supports rapid editing of garment shapes and textures by modifying individual 2D panels.
    Comment: NeurIPS 202
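The two ingredients this abstract combines, a 2D panel defined by a signed distance function and a 2D-to-3D mapping, can be illustrated with toy stand-ins: a circular panel SDF (negative inside the panel) and a mapping that wraps the panel around a cylinder. This is not ISP's learned representation; both functions, their names, and the cylinder "body" are assumptions made purely to show the interface.

```python
import numpy as np

def circle_panel_sdf(uv, radius=1.0):
    """Toy 2D panel SDF: signed distance to a circle of given radius.
    Negative inside the panel, positive outside. Stand-in for a learned SDF."""
    return np.linalg.norm(uv, axis=-1) - radius

def panel_to_3d(uv, body_radius=0.3):
    """Toy 2D -> 3D mapping: wrap the panel around a cylinder (the 'body').
    u is arc length around the body, v is height. Illustrative only."""
    theta = uv[:, 0] / body_radius
    return np.stack([body_radius * np.cos(theta),
                     uv[:, 1],
                     body_radius * np.sin(theta)], axis=1)
```

The appeal of the split is visible even in the toy: collision and containment queries stay cheap 2D SDF evaluations, while the 3D mapping alone carries the geometric complexity of the draped shape.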