Fab forms: customizable objects for fabrication with validity and geometry caching
We address the problem of allowing casual users to customize parametric models while maintaining their valid state as 3D-printable functional objects. We define a Fab Form as any design representation that lends itself to interactive customization by a novice user while remaining valid and manufacturable. We propose a method to achieve these Fab Form requirements for general parametric designs tagged with a general set of automated validity tests and a small number of parameters exposed to the casual user. Our solution separates Fab Form evaluation into a precomputation stage and a runtime stage. Parts of the geometry and design validity (such as manufacturability) are evaluated and stored in the precomputation stage by adaptively sampling the design space. At runtime the remainder of the evaluation is performed. This allows interactive navigation in the valid regions of the design space using an automatically generated Web user interface (UI). We evaluate our approach by converting several parametric models into corresponding Fab Forms. (National Science Foundation (U.S.), Grant 1138967)
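The precompute/runtime split described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the parameters, the validity test, and the exhaustive (rather than adaptive) sampling are all hypothetical stand-ins.

```python
from itertools import product

def is_valid(params):
    # Stand-in validity test (e.g., a manufacturability check); the real
    # system runs a user-supplied set of automated tests. Here we require
    # a derived wall thickness of at least 0.8 mm and height > width.
    width, height = params
    return width * 0.1 >= 0.8 and height > width

def precompute(widths, heights):
    """Offline stage: sample the exposed parameters (exhaustively here,
    adaptively in the paper) and cache each sample's validity."""
    return {(w, h): is_valid((w, h)) for w, h in product(widths, heights)}

def runtime_lookup(cache, params):
    """Runtime stage: a cheap table lookup keeps the UI interactive."""
    return cache.get(params, False)

cache = precompute(range(5, 15), range(5, 15))
print(runtime_lookup(cache, (10, 12)))  # True: 10 * 0.1 >= 0.8 and 12 > 10
```

At runtime the UI only consults the cache, so navigating the valid region never re-runs the expensive validity tests.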
A Generative Model of People in Clothing
We present the first image-based generative model of people in clothing for
the full body. We sidestep the commonly used complex graphics rendering
pipeline and the need for high-quality 3D scans of dressed people. Instead, we
learn generative models from a large image database. The main challenge is to
cope with the high variance in human pose, shape and appearance. For this
reason, pure image-based approaches have not been considered so far. We show
that this challenge can be overcome by splitting the generation process into two
parts. First, we learn to generate a semantic segmentation of the body and
clothing. Second, we learn a conditional model on the resulting segments that
creates realistic images. The full model is differentiable and can be
conditioned on pose, shape, or color. The results are samples of people in
different clothing items and styles. The proposed model can generate entirely
new people with realistic clothing. In several experiments we present
encouraging results that suggest an entirely data-driven approach to people
generation is possible.
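The two-part generation process described above can be sketched with trivial placeholders. Both "models" below stand in for the learned networks; the class labels, palette, and image size are illustrative assumptions, not details from the paper.

```python
import numpy as np

def segment_from_pose(pose_seed, size=8):
    """Stage 1 stand-in: map a pose code to a semantic segmentation
    (0 = background, 1 = body, 2 = clothing)."""
    rng = np.random.default_rng(pose_seed)
    return rng.integers(0, 3, size=(size, size))

def render_from_segments(seg):
    """Stage 2 stand-in: a conditional model that paints each semantic
    class with an appearance (here, a fixed RGB color per class)."""
    palette = np.array([[255, 255, 255],   # background
                        [224, 172, 105],   # skin
                        [30, 60, 200]])    # clothing
    return palette[seg]

image = render_from_segments(segment_from_pose(pose_seed=0))
print(image.shape)  # (8, 8, 3)
```

Splitting the problem this way means each stage faces lower variance than a single model mapping pose directly to pixels.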
Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing
Despite recent progress in developing animatable full-body avatars, realistic
modeling of clothing - one of the core aspects of human self-expression -
remains an open challenge. State-of-the-art physical simulation methods can
generate realistically behaving clothing geometry at interactive rates.
Modeling photorealistic appearance, however, usually requires physically-based
rendering which is too expensive for interactive applications. On the other
hand, data-driven deep appearance models are capable of efficiently producing
realistic appearance, but struggle at synthesizing geometry of highly dynamic
clothing and handling challenging body-clothing configurations. To this end, we
introduce pose-driven avatars with explicit modeling of clothing that exhibit
both photorealistic appearance learned from real-world data and realistic
clothing dynamics. The key idea is to introduce a neural clothing appearance
model that operates on top of explicit geometry: at training time we use
high-fidelity tracking, whereas at animation time we rely on physically
simulated geometry. Our core contribution is a physically-inspired appearance
network, capable of generating photorealistic appearance with view-dependent
and dynamic shadowing effects even for unseen body-clothing configurations. We
conduct a thorough evaluation of our model and demonstrate diverse animation
results on several subjects and different types of clothing. Unlike previous
work on photorealistic full-body avatars, our approach can produce much richer
dynamics and more realistic deformations even for many examples of loose
clothing. We also demonstrate that our formulation naturally allows clothing to
be used with avatars of different people while staying fully animatable, thus
enabling, for the first time, photorealistic avatars with novel clothing.
Comment: SIGGRAPH Asia 2022 (ACM ToG) camera ready. The supplementary video
can be found on
https://research.facebook.com/publications/dressing-avatars-deep-photorealistic-appearance-for-physically-simulated-clothing
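The core idea of an appearance model "on top of explicit geometry" can be sketched minimally: per-vertex geometric features go in, RGB comes out. The tiny random MLP below is a hypothetical stand-in for the paper's trained, physically-inspired appearance network, and the feature choice is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(9, 16)) * 0.1   # 9 features: position, normal, view dir
W2 = rng.normal(size=(16, 3)) * 0.1   # 3 outputs: RGB

def shade_vertices(positions, normals, view_dirs):
    """Map per-vertex geometry (from tracking at train time, from physical
    simulation at animation time) to a view-dependent color."""
    feats = np.concatenate([positions, normals, view_dirs], axis=1)
    hidden = np.maximum(feats @ W1, 0.0)        # ReLU
    return 1 / (1 + np.exp(-(hidden @ W2)))     # sigmoid -> RGB in [0, 1]

n = 100  # e.g., vertices of physically simulated cloth at animation time
colors = shade_vertices(rng.normal(size=(n, 3)),
                        rng.normal(size=(n, 3)),
                        rng.normal(size=(n, 3)))
print(colors.shape)  # (100, 3)
```

Because the network consumes whatever geometry it is given, the same appearance model runs on tracked geometry during training and on simulated geometry during animation, which is the decoupling the abstract describes.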
Learning an Intrinsic Garment Space for Interactive Authoring of Garment Animation
Authoring dynamic garment shapes for character animation driven by body motion is one of the fundamental steps in the CG industry. Established workflows are either time- and labor-consuming (i.e., manual editing on dense frames with controllers) or lack keyframe-level control (i.e., physically-based simulation). Not surprisingly, garment authoring remains a bottleneck in many production pipelines. Instead, we present a deep-learning-based approach for semi-automatic authoring of garment animation, wherein the user provides the desired garment shape in a selection of keyframes, while our system infers a latent representation for its motion-independent intrinsic parameters (e.g., gravity, cloth materials, etc.). Given new character motions, the latent representation allows us to automatically generate a plausible garment animation at interactive rates. Having factored out character motion, the learned intrinsic garment space enables smooth transitions between keyframes on a new motion sequence. Technically, we learn an intrinsic garment space with a motion-driven autoencoder network, where the encoder maps the garment shapes to the intrinsic space under the condition of body motions, while the decoder acts as a differentiable simulator to generate garment shapes according to changes in character body motion and intrinsic parameters. We evaluate our approach qualitatively and quantitatively on common garment types. Experiments demonstrate that our system can significantly improve current garment authoring workflows via an interactive user interface. Compared with the standard CG pipeline, our system significantly reduces the ratio of required keyframes from 20% to 1--2%.
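The keyframe workflow above can be sketched with linear maps standing in for the learned, motion-conditioned encoder and decoder. The dimensions (300 for garment shape, 30 for motion, 8 for the intrinsic code) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.normal(size=(300 + 30, 8)) * 0.05   # encoder: (garment, motion) -> z
D = rng.normal(size=(8 + 30, 300)) * 0.05   # decoder: (z, motion) -> garment

def encode(garment, motion):
    """Map a garment shape to its motion-independent intrinsic code."""
    return np.concatenate([garment, motion]) @ E

def decode(z, motion):
    """Act as a (here trivially linear) differentiable simulator: produce a
    garment shape from an intrinsic code and a body motion."""
    return np.concatenate([z, motion]) @ D

# Two user keyframes give two intrinsic codes; interpolating in the
# intrinsic space yields in-between garment shapes on a *new* motion.
z0 = encode(rng.normal(size=300), rng.normal(size=30))
z1 = encode(rng.normal(size=300), rng.normal(size=30))
new_motion = rng.normal(size=30)
frames = [decode((1 - t) * z0 + t * z1, new_motion)
          for t in np.linspace(0, 1, 5)]
print(len(frames), frames[0].shape)  # 5 (300,)
```

Because motion is factored out of the latent code, the same interpolation path replays plausibly on any new motion sequence, which is what makes the keyframe transitions smooth.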
Multi-Garment Net: Learning to Dress 3D People from Images
We present Multi-Garment Network (MGN), a method to predict body shape and clothing, layered on top of the SMPL model, from a few frames (1-8) of a video. Several experiments demonstrate that this representation allows a higher level of control when compared to single mesh or voxel representations of shape. Our model can predict garment geometry, relate it to the body shape, and transfer it to new body shapes and poses. To train MGN, we leverage a digital wardrobe containing 712 digital garments in correspondence, obtained with a novel method to register a set of clothing templates to a dataset of real 3D scans of people in different clothing and poses. Garments from the digital wardrobe, or predicted by MGN, can be used to dress any body shape in arbitrary poses. We will make publicly available the digital wardrobe, the MGN model, and code to dress SMPL with the garments.
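The "layered on top of the body" idea can be sketched as storing a garment as per-vertex offsets from an unclothed body template and re-applying them to a new body. This is purely illustrative; the actual MGN registration, skinning, and per-garment topology are far more involved.

```python
import numpy as np

def extract_garment_offsets(clothed_verts, body_verts):
    """Represent a garment as displacements from the body surface."""
    return clothed_verts - body_verts

def dress(new_body_verts, garment_offsets):
    """Transfer the garment layer to a different body shape."""
    return new_body_verts + garment_offsets

rng = np.random.default_rng(2)
body_a = rng.normal(size=(6890, 3))          # 6890 vertices, as in SMPL
garment = extract_garment_offsets(body_a + 0.02, body_a)
body_b = rng.normal(size=(6890, 3))
print(dress(body_b, garment).shape)  # (6890, 3)
```

Keeping the garment as a separate layer (rather than baking it into one mesh) is what makes the transfer to new shapes and poses possible.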
Unilaterally Incompressible Skinning
Skinning was initially devised for computing the skin of a character deformed through a skeleton, but it is now also commonly used for deforming tight garments at a very cheap cost. However, unlike skin, which easily compresses and stretches, tight cloth strongly resists compression: inside bending regions such as the interior of an elbow, cloth does not shrink but instead buckles, causing interesting folds and wrinkles that are completely missed by skinning methods. Our goal is to extend traditional skinning in order to capture such folding patterns automatically, without sacrificing efficiency. The key to our model is to replace the usual skinning formula (derived from, e.g., Linear Blend Skinning or Dual Quaternions) with a complementarity constraint, making an automatic switch between classical skinning in zones prone to stretching, on the one hand, and a quasi-isometric scheme in zones prone to compression, on the other. Moreover, our method provides useful handles for directing the type of folds created, such as the fold density or the overall shape of a given fold. Our results show that our method can generate folds of similar complexity to full cloth simulation, while retaining the interactivity of skinning approaches and offering intuitive user control.
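A one-dimensional toy version of the unilateral switch described above: an edge may stretch freely under skinning, but its length is never allowed to drop below rest length, so compression must be resolved another way (in the paper, by buckling into folds). This deliberately simplifies the paper's complementarity constraint to a single max.

```python
def unilateral_edge_length(skinned_length, rest_length):
    """Either the skinning result is kept (stretched zone) or the edge is
    clamped at rest length (compressed zone) -- never both violated."""
    return max(skinned_length, rest_length)

print(unilateral_edge_length(1.3, 1.0))  # 1.3 (stretching: plain skinning)
print(unilateral_edge_length(0.7, 1.0))  # 1.0 (compression: length preserved)
```

The "unilateral" in the title is exactly this asymmetry: the constraint only resists compression, leaving stretching to ordinary skinning.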