3 research outputs found

    Woven Fabric Model Creation from a Single Image

    Get PDF
    We present a fast, novel image-based technique for reverse engineering woven fabrics at a yarn level. These models can be used in a wide range of interior design and visual special effects applications. To recover our pseudo-Bidirectional Texture Function (BTF), we estimate the three-dimensional (3D) structure and a set of yarn parameters (e.g., yarn width, yarn crossovers) from spatial and frequency domain cues. Drawing inspiration from previous work [Zhao et al. 2012], we solve for the woven fabric pattern and from this build a dataset. In contrast, however, we use a combination of image space analysis and frequency domain analysis, and, in challenging cases, match image statistics with those from previously captured known patterns. Our method determines, from a single digital image, captured with a digital single-lens reflex (DSLR) camera under controlled uniform lighting, the woven cloth structure, depth, and albedo, thus removing the need for separately measured depth data. The focus of this work is on the rapid acquisition of woven cloth structure, and we therefore use standard approaches to render the results. Our pipeline first estimates the weave pattern, yarn characteristics, and noise statistics using a novel combination of low-level image processing and Fourier analysis. Next, we estimate a 3D structure for the fabric sample using a first-order Markov chain and our estimated noise model as input, also deriving a depth map and an albedo. Our volumetric textile model includes information about the 3D path of the center of the yarns, their variable width (and hence the volume occupied by the yarns), and their colors. We demonstrate the efficacy of our approach through comparison images of test scenes rendered using (a) the original photograph, (b) the segmented image, (c) the estimated weave pattern, and (d) the rendered result.
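    The frequency-domain step mentioned in the abstract can be illustrated with a minimal sketch (an assumption for illustration, not the authors' actual pipeline): the dominant peak in the Fourier magnitude spectrum of an intensity profile taken across a woven sample gives the repeat period of the weave, i.e. an estimate of the yarn spacing in pixels. The function name and the `min_period` parameter are hypothetical.

```python
import numpy as np

def estimate_yarn_spacing(profile, min_period=2):
    """Estimate the dominant yarn spacing (in pixels) of a 1D intensity
    profile taken across a woven sample, via the largest peak of its
    Fourier magnitude spectrum. Simplified illustration only."""
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()                    # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))   # magnitude spectrum
    freqs = np.fft.rfftfreq(x.size)     # frequencies in cycles per pixel
    # keep only frequencies whose period is at least `min_period` pixels
    valid = (freqs > 0) & (freqs <= 1.0 / min_period)
    k = np.argmax(np.where(valid, spectrum, 0.0))
    return 1.0 / freqs[k]               # period in pixels

# A synthetic profile with an 8-pixel weave repeat recovers spacing 8.0:
profile = 0.5 + 0.5 * np.cos(2 * np.pi * np.arange(256) / 8)
spacing = estimate_yarn_spacing(profile)
```

In a real image one would average many rows (and repeat along columns) before taking the transform, to suppress yarn-level noise.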

    Turning a Digital Camera into an Absolute 2D Tele-Colorimeter

    Get PDF
    We present a simple and effective technique for absolute colorimetric camera characterization, invariant to changes in exposure/aperture and scene irradiance, and suitable for a wide range of applications, including image-based reflectance measurements, spectral pre-filtering and spectral upsampling for rendering, and improving colour accuracy in high dynamic range imaging. Our method requires a limited number of acquisitions, an off-the-shelf target, and a commonly available projector used as a controllable light source, in addition to the reflected radiance, which must be known. The characterized camera can be effectively used as a 2D tele-colorimeter, providing the user with an accurate estimate of the distribution of luminance and chromaticity in a scene, without requiring explicit knowledge of the incident lighting power spectra. We validate the approach by comparing our estimated absolute tristimulus values (XYZ data in cd/m²) with the measurements of a professional 2D tele-colorimeter, for a set of scenes with complex geometry, spatially varying reflectance, and light sources with very different spectral power distributions.
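    A common baseline for this kind of colorimetric characterization (a minimal sketch under simplifying assumptions, not necessarily the paper's procedure) is to fit a single 3×3 matrix mapping linear camera RGB to CIE XYZ by least squares over the patches of a characterization target. The function below is hypothetical and assumes radiometrically linear RGB values, one target patch per row.

```python
import numpy as np

def fit_characterization_matrix(camera_rgb, reference_xyz):
    """Fit a 3x3 matrix M such that xyz ≈ M @ rgb, by least squares over
    a set of target patches (one patch per row of each array)."""
    A = np.asarray(camera_rgb, dtype=float)     # N x 3 camera responses
    B = np.asarray(reference_xyz, dtype=float)  # N x 3 measured XYZ
    M_t, *_ = np.linalg.lstsq(A, B, rcond=None) # solves A @ M_t ≈ B
    return M_t.T

# Usage: with noiseless synthetic patches the true matrix is recovered.
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 1], [0.5, 0.2, 0.8], [0.3, 0.9, 0.1]], dtype=float)
xyz = rgb @ M_true.T
M = fit_characterization_matrix(rgb, xyz)
```

Exposure/aperture invariance, as claimed in the abstract, would additionally require normalizing the RGB values by the camera's exposure settings before fitting.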

    BxDF material acquisition, representation, and rendering for VR and design

    Get PDF
    Photorealistic and physically-based rendering of real-world environments with high fidelity materials is important to a range of applications, including special effects, architectural modelling, cultural heritage, computer games, automotive design, and virtual reality (VR). Our perception of the world depends on lighting and surface material characteristics, which determine how the light is reflected, scattered, and absorbed. In order to reproduce appearance, we must therefore understand all the ways objects interact with light, and the acquisition and representation of materials has thus been an important part of computer graphics from its early days. Nevertheless, no material model or acquisition setup is without limitations in terms of the variety of materials represented, and different approaches vary widely in terms of compatibility and ease of use. In this course, we describe the state of the art in material appearance acquisition and modelling, ranging from mathematical BSDFs to data-driven capture and representation of anisotropic materials, and volumetric/thread models for patterned fabrics. We further address the problem of material appearance constancy across different rendering platforms. We present two case studies in architectural and interior design. The first study demonstrates Yulio, a new platform for the creation, delivery, and visualization of acquired material models and reverse-engineered cloth models in immersive VR experiences. The second study shows an end-to-end process of capture and data-driven BSDF representation using the physically-based Radiance system for lighting simulation and rendering.