    Mechanics-Aware Modeling of Cloth Appearance

    Modelling and Visualisation of the Optical Properties of Cloth

    Cloth and garment visualisations are widely used in fashion and interior design, in entertainment, and in the automotive and nautical industries, and are indispensable elements of visual communication. Modern appearance models attempt to offer a complete solution for the visualisation of complex cloth properties. In the review part of the chapter, advanced methods that enable visualisation at micron resolution, methods used in the three-dimensional (3D) visualisation workflow and methods used for research purposes are presented. Within the review, methods offering a comprehensive approach, as well as experiments on explicit cloth attributes that exhibit specific optical phenomena, are analysed. The review of appearance models includes surface and image-based models, volumetric models and explicit models. Each group is presented with a representative research group, along with the applications and limitations of the methods. In the final part of the chapter, the visualisation of cloth specularity and porosity with an uneven surface is studied. The study and visualisation were performed using image data obtained with photography. The acquisition of structure information at a large scale enables the recording of structural irregularities that are very common in historical textiles and laces, and also in artistic and experimental pieces of cloth. The contribution ends with the presentation of cloth visualised with the use of specular and alpha maps, which are the result of the image-processing workflow.
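The specular- and alpha-map idea in this abstract can be sketched with a minimal (hypothetical) image-processing pass: for a backlit cloth photograph, pores transmit the backlight and appear bright, while the brightest yarn pixels approximate specular highlights. The function name and all thresholds below are illustrative assumptions, not the chapter's actual workflow.

```python
import numpy as np

def cloth_maps(image, alpha_thresh=0.85, spec_percentile=95):
    """image: HxW luminance array in [0, 1], cloth photographed backlit.
    Returns (alpha_map, specular_map), both HxW float arrays.
    Thresholds are illustrative assumptions."""
    # Pores transmit the backlight, so very bright pixels are holes:
    # alpha = 0 (transparent) at pores, 1 (opaque) over yarns.
    alpha_map = (image < alpha_thresh).astype(float)
    # Treat the brightest yarn pixels as specular highlights and scale
    # them into [0, 1] as a crude specular intensity map.
    yarn = image * alpha_map
    cutoff = np.percentile(yarn[alpha_map > 0], spec_percentile)
    specular_map = np.clip((yarn - cutoff) / (1.0 - cutoff + 1e-8), 0.0, 1.0)
    return alpha_map, specular_map
```

In a renderer, the alpha map would drive transparency at the pores and the specular map would modulate the highlight term, as the abstract describes.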

    Woven Fabric Model Creation from a Single Image

    We present a fast, novel image-based technique for reverse engineering woven fabrics at the yarn level. These models can be used in a wide range of interior design and visual special effects applications. In order to recover our pseudo-BTF, we estimate the 3D structure and a set of yarn parameters (e.g. yarn width, yarn crossovers) from spatial and frequency domain cues. Drawing inspiration from previous work [Zhao et al. 2012], we solve for the woven fabric pattern, and from this build a data set. In contrast, however, we use a combination of image space analysis, frequency domain analysis and, in challenging cases, matching of image statistics with those from previously captured known patterns. Our method determines, from a single digital image captured with a DSLR camera under controlled uniform lighting, the woven cloth structure, depth and albedo, thus removing the need for separately measured depth data. The focus of this work is on the rapid acquisition of woven cloth structure and therefore we use standard approaches to render the results. Our pipeline first estimates the weave pattern, yarn characteristics and noise statistics using a novel combination of low-level image processing and Fourier analysis. Next, we estimate a 3D structure for the fabric sample using a first-order Markov chain and our estimated noise model as input, also deriving a depth map and an albedo. Our volumetric textile model includes information about the 3D path of the centre of the yarns, their variable width and hence the volume occupied by the yarns, and colours. We demonstrate the efficacy of our approach through comparison images of test scenes rendered using: (a) the original photograph, (b) the segmented image, (c) the estimated weave pattern and (d) the rendered result.
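The frequency-domain cue this pipeline relies on can be illustrated with a minimal sketch: the dominant peak in the Fourier spectrum of a brightness profile across the weave reveals the yarn repeat period, from which yarn width and count follow. The function name and the synthetic cosine profile are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def estimate_yarn_period(profile):
    """profile: 1D brightness profile taken across the weave (floats).
    Returns the dominant repeat period in pixels."""
    # Remove the mean so the DC term does not dominate the spectrum.
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    peak = np.argmax(spectrum[1:]) + 1  # skip the DC bin
    return len(profile) / peak

# Synthetic weave profile: yarns repeating every 16 pixels.
x = np.arange(256)
profile = 0.5 + 0.5 * np.cos(2 * np.pi * x / 16)
```

On a real photograph the spectrum is noisier, which is presumably where the paper's matching against statistics of known patterns comes in.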

    A Multi-scale Yarn Appearance Model with Fiber Details

    Rendering realistic cloth has always been a challenge due to its intricate structure. Cloth is made up of fibers, plies, and yarns, and previous curve-based models, while detailed, were computationally expensive and inflexible for large cloth. To address this, we propose a simplified approach. We introduce a geometric aggregation technique that reduces ray-tracing computation by using fewer curves, focusing only on yarn curves. Our model generates ply and fiber shapes implicitly, compensating for the lack of explicit geometry with a novel shadowing component. We also present a shading model that simplifies light interactions among fibers by categorizing them into four components, accurately capturing specular and scattered light in both forward and backward directions. To render large cloth efficiently, we propose a multi-scale solution based on pixel coverage. Our yarn shading model outperforms previous methods, achieving rendering speeds 3-5 times faster with less memory in near-field views. Additionally, our multi-scale solution offers a 20% speed boost for distant cloth observation.
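The pixel-coverage idea behind such a multi-scale solution can be sketched as a level-of-detail switch: when a yarn's projected width drops below a few pixels, fiber-level detail is no longer resolvable and a cheaper aggregated model suffices. The thresholds, the pinhole projection, and the tier names below are illustrative assumptions, not the paper's actual criteria.

```python
def select_lod(yarn_radius, distance, focal_px, near_thresh=4.0):
    """Choose a shading model tier from the yarn's projected width.
    yarn_radius and distance in scene units; focal_px is the focal
    length expressed in pixels (pinhole camera assumption)."""
    coverage = 2.0 * yarn_radius * focal_px / distance  # projected width, px
    if coverage >= near_thresh:
        return "fiber-detail"    # near field: explicit fiber-level shading
    elif coverage >= 1.0:
        return "yarn-aggregate"  # mid field: implicit ply/fiber model
    else:
        return "prefiltered"     # far field: aggregated appearance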

    RECREATING AND SIMULATING DIGITAL COSTUMES FROM A STAGE PRODUCTION OF MEDEA

    This thesis investigates a technique to effectively construct and simulate costumes from a stage production of Medea, in a dynamic cloth simulation application like Maya's nDynamics. This was done by using data collected from real-world fabric tests and costume construction in the theatre's costume studio. Fabric tests were conducted and recorded by testing costume fabrics for drape and behavior with two collision objects. These tests were recreated digitally in Maya to derive appropriate parameters for the digital fabric, by comparing with the original reference. Basic mannequin models were created using the actors' measurements and skeleton-rigged to enable animation. The costumes were then modeled and constrained according to the construction process observed in the costume studio to achieve the same style and stitch as the real costumes. Scenes selected and recorded from Medea were used as reference to animate the actors' models. The costumes were assigned the parameters derived from the fabric tests to produce the simulations. Finally, the scenes were lit and rendered out to obtain the final videos, which were compared to the original recordings to ascertain the accuracy of the simulation. By obtaining and refining simulation parameters from simple fabric collision tests, and modeling the digital costumes following the procedures derived from real-life costume construction, realistic costume simulation was achieved.
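The parameter-derivation step described here, running a digital recreation of a fabric test and comparing it against the recorded reference, amounts to a search over candidate cloth parameters. A minimal sketch, assuming a hypothetical `simulate` callable standing in for running the Maya simulation and measuring the resulting drape profile:

```python
def fit_fabric_params(reference_drape, simulate, candidates):
    """Pick the candidate parameter set whose simulated drape best
    matches the measured reference (sum of squared differences).
    `simulate` is a hypothetical stand-in for the Maya nDynamics run."""
    best, best_err = None, float("inf")
    for params in candidates:
        drape = simulate(params)
        err = sum((s - r) ** 2 for s, r in zip(drape, reference_drape))
        if err < best_err:
            best, best_err = params, err
    return best
```

In practice this comparison was done visually against the recorded footage rather than numerically, but the loop captures the refine-and-compare workflow the abstract describes.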