4,525 research outputs found

    Tex2Shape: Detailed Full Human Body Geometry From a Single Image

    We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region obtained from off-the-shelf methods. From a partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely with synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
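
    As a rough illustration of the aligned image-to-image formulation, here is a minimal PyTorch-style sketch (all layer sizes and names are hypothetical, not the paper's architecture): a partial UV texture is translated into normal and vector-displacement channels, and the displacement map is sampled at each vertex's UV coordinate to offset a smooth template mesh.

```python
# Hypothetical sketch of the image-to-image formulation (not the authors'
# code): a UV-space partial texture is translated into normal and
# vector-displacement maps, which are sampled at each vertex's UV
# coordinate and added to a smooth template body mesh.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tex2ShapeNet(nn.Module):
    """Toy encoder-decoder standing in for the paper's translation network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 6, 4, stride=2, padding=1),  # 3 normal + 3 displacement channels
        )

    def forward(self, partial_texture):
        out = self.decoder(self.encoder(partial_texture))
        return out[:, :3], out[:, 3:]  # normal map, displacement map

def displace_vertices(vertices, vertex_uvs, displacement_map):
    """Sample the UV-space displacement map at each vertex and offset the mesh.

    vertices:         (V, 3) smooth template vertex positions
    vertex_uvs:       (V, 2) per-vertex UV coordinates in [0, 1]
    displacement_map: (1, 3, H, W) network output
    """
    grid = (vertex_uvs * 2 - 1).view(1, -1, 1, 2)        # grid_sample expects [-1, 1]
    d = F.grid_sample(displacement_map, grid, align_corners=True)
    return vertices + d.view(3, -1).t()                  # (V, 3) per-vertex offsets

net = Tex2ShapeNet()
partial_texture = torch.rand(1, 3, 256, 256)   # partial UV texture from an off-the-shelf method
normals, displacements = net(partial_texture)
vertices = torch.zeros(6890, 3)                # e.g. an SMPL-resolution template
uvs = torch.rand(6890, 2)
detailed = displace_vertices(vertices, uvs, displacements)
```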

    Learning to Dress 3D People in Generative Clothing

    Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shapes. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term in SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses. The model, code and data are available for research purposes at https://cape.is.tue.mpg.de.
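
    To make the "clothing as an additional SMPL term" idea concrete, the sketch below shows a hypothetical decoder (not the released CAPE model; all dimensions and names are placeholders): a latent clothing sample, conditioned on pose and clothing type, is decoded into per-vertex displacements that are simply added to the minimally-clothed body vertices.

```python
# Hedged sketch of the additive formulation (hypothetical decoder, not the
# released model): clothing is a per-vertex displacement on the SMPL mesh,
# decoded from a latent sample conditioned on pose and clothing type.
import torch
import torch.nn as nn

NUM_VERTS = 6890          # SMPL mesh resolution
POSE_DIM = 72             # SMPL axis-angle pose vector
NUM_CLOTHING_TYPES = 4    # placeholder: e.g. long/short sleeves, long/short pants

class ClothingDecoder(nn.Module):
    """Stand-in for the Mesh-VAE-GAN decoder: latent + conditions -> displacements."""
    def __init__(self, latent_dim=64):
        super().__init__()
        in_dim = latent_dim + POSE_DIM + NUM_CLOTHING_TYPES
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, NUM_VERTS * 3),
        )

    def forward(self, z, pose, clothing_onehot):
        x = torch.cat([z, pose, clothing_onehot], dim=-1)
        return self.mlp(x).view(-1, NUM_VERTS, 3)   # per-vertex clothing offsets

decoder = ClothingDecoder()
z = torch.randn(1, 64)                          # draw a clothing sample
pose = torch.zeros(1, POSE_DIM)
clothing = torch.eye(NUM_CLOTHING_TYPES)[0:1]   # one-hot clothing type
body_verts = torch.zeros(1, NUM_VERTS, 3)       # minimally-clothed SMPL output
clothed_verts = body_verts + decoder(z, pose, clothing)  # clothing as an additive term
```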

    A new automated workflow for 3D character creation based on 3D scanned data

    In this paper we present a new workflow allowing the creation of 3D characters in an automated way that does not require the expertise of an animator. This workflow is based on the acquisition of real human data captured by 3D body scanners, which is then processed to generate firstly animatable body meshes, secondly skinned body meshes, and finally textured 3D garments.
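
    As a rough illustration only, the placeholder pipeline below mirrors the three stages described above; every function and type is hypothetical and stands in for a real processing step, not the authors' tooling.

```python
# Illustrative sketch of the three-stage pipeline; all names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Mesh:
    vertices: list
    faces: list
    skinning_weights: Optional[list] = None   # filled in by the skinning stage
    texture: Optional[bytes] = None           # filled in by the garment stage

def fit_animatable_mesh(raw_scan: Mesh) -> Mesh:
    """Stage 1: register an animatable template mesh to the raw scan."""
    return Mesh(raw_scan.vertices, raw_scan.faces)

def skin_mesh(body: Mesh) -> Mesh:
    """Stage 2: bind vertices to a skeleton with skinning weights."""
    body.skinning_weights = [[1.0] for _ in body.vertices]  # trivial stand-in weights
    return body

def extract_textured_garments(raw_scan: Mesh, body: Mesh) -> Mesh:
    """Stage 3: segment garments from the scan and transfer its texture."""
    return Mesh(raw_scan.vertices, raw_scan.faces, texture=b"")

def create_character(raw_scan: Mesh):
    """Chain the stages: scan -> animatable mesh -> skinned mesh -> garments."""
    body = skin_mesh(fit_animatable_mesh(raw_scan))
    return body, extract_textured_garments(raw_scan, body)
```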

    Learning to Reconstruct People in Clothing from a Single RGB Camera

    We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input, and is able to reconstruct shapes even from a single image with an accuracy of 6mm. Results on 3 different datasets demonstrate the efficacy and accuracy of our approach.
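
    The variable-frame fusion can be sketched as follows, assuming a hypothetical encoder-decoder (not the paper's network; the top-down refinement stream is omitted for brevity): each frame is encoded into a pose-invariant latent code, any number of codes are fused by averaging, and the fused code is decoded into body-model shape parameters plus per-vertex displacements in canonical T-pose.

```python
# Hedged sketch of fusing a variable number of frames (hypothetical
# architecture): per-frame pose-invariant codes are averaged, then decoded
# into shape parameters and clothing/hair displacements in T-pose space.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, frames):                          # (F, 3, H, W), F = 1..8 frames
        return self.fc(self.conv(frames).flatten(1))    # (F, latent_dim) per-frame codes

class ShapeDecoder(nn.Module):
    def __init__(self, latent_dim=128, num_betas=10, num_verts=6890):
        super().__init__()
        self.betas = nn.Linear(latent_dim, num_betas)               # body-model shape parameters
        self.displacements = nn.Linear(latent_dim, num_verts * 3)  # clothing/hair offsets

    def forward(self, code):
        return self.betas(code), self.displacements(code).view(-1, 3)

encoder, decoder = FrameEncoder(), ShapeDecoder()
frames = torch.rand(5, 3, 128, 128)        # any number of input frames works
fused = encoder(frames).mean(dim=0)        # fuse pose-invariant codes
betas, disps = decoder(fused)              # T-pose shape + per-vertex displacements
```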

    Block party: contemporary craft inspired by the art of the tailor

    Block Party: contemporary craft inspired by the art of the tailor is a new touring exhibition from the Crafts Council curated by Lucy Orta, Professor of Art, Fashion and the Environment at London College of Fashion and a renowned visual artist whose own practice fuses fashion, art and architecture. Block Party explores the alchemy of the centuries-old skill of tailoring by presenting work by 15 UK and international artists who push pattern-cutting beyond the fashion garment. The artists Lucy Orta has selected take pattern-cutting as a starting point to produce sculpture, ceramics, textile, moving image and collage. Through experimentation the artists have found new ways to assemble pattern shapes, not to create garments but to manipulate shape to realise new outcomes.

    Block Party focuses on three themes: Storytelling, Embracing the Future, and Motif and Manipulation. In Storytelling, artists use pattern-cutting as a means of expression. Turner Prize-nominated Yinka Shonibare MBE presents a child mannequin, dressed in a historically accurate Victorian outfit crafted from African fabric, to reference culture, race and history. Claudia Losi's 24m whale, made of woollen suit fabric, was transported around the world to stimulate discussion and storytelling before being deconstructed and transformed into jackets in collaboration with fashion designer Antonio Marras.

    In Embracing the Future, existing pattern-cutting methods are manipulated and challenged through the use of innovative processes and technologies. Simon Thorogood's patterns are created using digital programmes, whilst Philip Delamore of the Fashion Digital Studio at London College of Fashion seeks to apply the latest developments in 3D digital design to the garment-making process.

    In Motif and Manipulation, the beauty of the paper pattern block is the visual inspiration. Ceramist Charlotte Hodes directly incorporates these familiar shapes into her ceramics, whilst Raw Edges re-appropriate the pattern block by creating a flat paper pattern of a chair which is then filled with expandable foam to create the 3D 'Tailored Wood Bench'.