
    Virtual Garments: A Fully Geometric Approach for Clothing Design

    Modeling dressed characters is a notoriously tedious process. It usually requires specifying 2D fabric patterns, positioning and assembling them in 3D, and then running a physically-based simulation. The latter accounts for gravity and collisions to compute the rest shape of the garment, with the adequate folds and wrinkles. This paper presents a more intuitive way to design virtual clothing. We start with a 2D sketching system in which the user draws the contours and seam-lines of the garment directly on a virtual mannequin. Our system then converts the sketch into an initial 3D surface using an existing method based on a precomputed distance field around the mannequin. The system then splits the created surface into panels delimited by the seam-lines. The generated panels are typically not developable; however, the panels of a realistic garment must be developable, since each panel must unfold into a 2D sewing pattern. Our system therefore automatically approximates each panel with a developable surface while keeping the panels assembled along the seams, which allows us to output the corresponding sewing patterns. The last step of our method computes a natural rest shape for the 3D garment, including the folds due to gravity and collisions with the body. The folds are generated using procedural modeling of the buckling phenomena observed in real fabric. The result of our algorithm is a realistic-looking 3D mannequin dressed in the designed garment, together with the 2D patterns, which can be used for distortion-free texture mapping. The patterns we create also allow us to sew real replicas of the virtual garments.
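    The developability constraint at the heart of the panel-approximation step can be checked locally on a triangle mesh: a surface unfolds flat without distortion only where the discrete Gaussian curvature (the angle deficit) vanishes at interior vertices. The sketch below is a minimal, generic illustration of that criterion, not the paper's algorithm; the function names and the triangle-fan input format are our own.

```python
import math

def angle_at(v, a, b):
    """Angle at vertex v in triangle (v, a, b); points are 3D tuples."""
    av = [a[i] - v[i] for i in range(3)]
    bv = [b[i] - v[i] for i in range(3)]
    dot = sum(av[i] * bv[i] for i in range(3))
    na = math.sqrt(sum(c * c for c in av))
    nb = math.sqrt(sum(c * c for c in bv))
    return math.acos(dot / (na * nb))

def angle_deficit(v, fan):
    """Discrete Gaussian curvature at an interior vertex v:
    2*pi minus the sum of the incident triangle angles.
    'fan' is the ordered ring of neighbour points; consecutive
    pairs (fan[i], fan[i+1]) form triangles with v, with the
    first point repeated at the end to close the ring."""
    total = sum(angle_at(v, fan[i], fan[i + 1]) for i in range(len(fan) - 1))
    return 2 * math.pi - total
```

    A developable approximation drives this deficit toward zero at every interior vertex of a panel, while the seams keep the panels assembled.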

    The HuMAnS toolbox, a homogenous framework for motion capture, analysis and simulation

    Primarily developed for research in humanoid robotics, the HuMAnS toolbox (Humanoid Motion Analysis and Simulation) also includes a biomechanical model of a complete human body and offers a versatile set of tools for the modeling, capture, analysis and simulation of human and humanoid motion. These tools are organized as a homogeneous framework built on top of the numerical facilities of Scilab, a free, generic scientific package, so as to allow generic and versatile use, in the hope of enabling a new dialogue between direct and inverse dynamics, motion capture and simulation, all within a rich scientific software environment. Notably, the toolbox is open-source software, distributed under the GPL license.
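    As a toy illustration of the "direct dynamics" side of such a toolbox (simulating motion forward from forces), the following sketch integrates a frictionless pendulum with semi-implicit Euler. It is a generic Python example under our own naming, not the toolbox's Scilab API.

```python
import math

def simulate_pendulum(theta0, omega0, steps, dt=0.001, g=9.81, l=1.0):
    """Direct (forward) dynamics of a frictionless pendulum:
    gravity torque -> angular acceleration -> state, integrated
    with semi-implicit Euler (velocity updated before position,
    which keeps the energy bounded over long runs)."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega += -(g / l) * math.sin(theta) * dt
        theta += omega * dt
    return theta, omega
```

    Inverse dynamics would run the other way: given a recorded (captured) trajectory, recover the torques that produced it.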

    4DHumanOutfit: a multi-subject 4D dataset of human motion sequences in varying outfits exhibiting large displacements

    This work presents 4DHumanOutfit, a new dataset of densely sampled spatio-temporal 4D human motion data of different actors, outfits and motions. The dataset is designed to contain different actors wearing different outfits while performing different motions in each outfit. In this way, the dataset can be seen as a cube of data containing 4D motion sequences along three axes: identity, outfit and motion. This rich dataset has numerous potential applications for the processing and creation of digital humans, e.g. augmented reality, avatar creation and virtual try-on. 4DHumanOutfit is released for research purposes at https://kinovis.inria.fr/4dhumanoutfit/. In addition to image data and 4D reconstructions, the dataset includes reference solutions for each axis. We present independent baselines along each axis that demonstrate the value of these reference solutions for evaluation tasks.

    Figurines, a multimodal framework for tangible storytelling

    This paper presents Figurines, an offline framework for narrative creation with tangible objects, designed to record storytelling sessions with children, teenagers or adults. The framework uses tangible diegetic objects to record a free narrative from up to two storytellers and constructs a fully annotated representation of the story. This representation comprises the 3D position and orientation of the figurines, the position of decor elements, and an interpretation of the storytellers' actions (facial expressions, gestures and voice). While maintaining the playful dimension of the storytelling session, the system must tackle the challenge of recovering the free-form motion of the figurines and the storytellers in uncontrolled environments. To do so, we record the storytelling session with a hybrid setup of two RGB-D sensors and figurines augmented with IMU sensors. The first RGB-D sensor complements the IMU information to identify the figurines and track them, as well as the decor elements; it also tracks the storytellers jointly with the second RGB-D sensor. The framework has been used to record preliminary experiments that validate the interest of our approach. These experiments evaluate figurine tracking and the combination of motion with the storytellers' voice, gestures and facial expressions. In a make-believe game, this story representation was retargeted onto virtual characters to produce an animated version of the story. The final goal of the Figurines framework is to enhance our understanding of the creative processes at work during immersive storytelling.
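    A common way to combine a drift-free but noisy absolute measurement (e.g. from an RGB-D sensor or an accelerometer) with a smooth but drifting rate measurement (from an IMU gyroscope) is a complementary filter. The 1D sketch below is a generic illustration of that fusion idea, not the Figurines implementation; the names and the gain value are our own.

```python
def complementary_filter(angle, gyro_rate, abs_angle, dt, alpha=0.98):
    """One step of a 1D complementary filter: integrate the gyro
    rate (smooth but drifting), then blend in the absolute angle
    measurement (noisy but drift-free) with weight 1 - alpha."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * abs_angle
```

    The high-pass path trusts the gyro over short time scales, while the low-pass path lets the absolute sensor correct the accumulated drift.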

    A system for creating virtual reality content from make-believe games

    Pretend play is a storytelling technique, used naturally from a very young age, which relies on object substitution to represent the characters of an imagined story. We propose a system that assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D-printed figurines. We capture the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof of concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into a 3D animation.

    Modeling of Facial Wrinkles for Applications in Computer Vision

    Analysis and modeling of aging human faces have been extensively studied in the past decade for computer vision applications such as age estimation, age progression and face recognition across aging. Most of this research is based on facial appearance and facial features such as face shape, geometry, landmark locations and patch-based texture features. Despite the recent availability of high-quality, higher-resolution facial images, there is little work on the image analysis of local facial features such as wrinkles specifically. For the most part, modeling of facial skin texture, fine lines and wrinkles has been a focus of computer graphics research for photo-realistic rendering applications; in computer vision, very few aging-related applications focus on such features. While several survey papers cover facial aging analysis in computer vision, this chapter focuses specifically on the analysis of facial wrinkles in the context of several applications. Facial wrinkles can be characterized as subtle discontinuities or cracks in the surrounding inhomogeneous skin texture, which makes them challenging to detect and localize in images. We first review the image features commonly used to capture the intensity gradients caused by facial wrinkles, and then present research on modeling and analyzing facial wrinkles as aging texture or as curvilinear objects for different applications. The reviewed applications include localization or detection of wrinkles in facial images, incorporation of wrinkles for more realistic age progression, analysis for age estimation, and inpainting/removal of wrinkles for facial retouching.
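    Since wrinkles appear as subtle intensity discontinuities, line-sensitive derivative filters are a natural low-level cue. The sketch below applies a simple vertical second-derivative kernel [-1, 2, -1] to a grayscale image stored as nested lists; it is a minimal illustration of such gradient-based cues, not a method from the chapter.

```python
def line_filter_response(img):
    """Response of a vertical second-derivative filter [-1, 2, -1]
    applied down each column of a grayscale image (list of rows).
    A strong negative response marks a dark, horizontal, valley-like
    line, a crude cue for a wrinkle crossing smooth skin."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):        # border rows left at zero
        for x in range(w):
            out[y][x] = 2 * img[y][x] - img[y - 1][x] - img[y + 1][x]
    return out
```

    Practical detectors build on the same idea with oriented, multi-scale filters (e.g. Gabor banks) and a curvilinear-structure model on top of the raw responses.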

    Two-phase local heat transfer correlations for non-ozone depleting refrigerant-oil mixtures

    SIGLE. Available from the British Library Document Supply Centre (BLDSC), DSC:DXN026874. United Kingdom.