16 research outputs found

    Measurement and rendering of complex non-diffuse and goniochromatic packaging materials

    Realistic renderings of materials with complex optical properties, such as goniochromatism and non-diffuse reflection, are difficult to achieve. In the context of the print and packaging industries, accurate visualisation of the complex appearance of such materials is a challenge, both for communication and quality control. In this paper, we characterise the bidirectional reflectance of two homogeneous print samples displaying complex optical properties. We demonstrate that in-plane retro-reflective measurements from a single input photograph, combined with genetic algorithm-based BRDF fitting, allow us to estimate an optimal set of reflectance-model parameters for rendering. While such a minimal set of measurements enables visually satisfactory renderings of the measured materials, we show that a few additional photographs lead to more accurate results, particularly for samples with goniochromatic appearance.
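    The genetic-algorithm fitting step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the one-lobe in-plane model `phong_lobe` and the synthetic noisy "measurements" are hypothetical stand-ins for the photograph-derived samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def phong_lobe(theta, kd, ks, n):
    # Hypothetical in-plane reflectance model: diffuse term plus a
    # retro-reflective lobe peaked at theta = 0.
    return kd + ks * np.cos(theta) ** n

# Synthetic "measurements": the model with known parameters plus noise,
# standing in for samples extracted from the input photograph.
theta = np.linspace(0.0, np.pi / 3, 30)
true_params = (0.2, 0.7, 40.0)
measured = phong_lobe(theta, *true_params) + rng.normal(0, 0.005, theta.size)

def fitness(p):
    kd, ks, n = p
    return -np.mean((phong_lobe(theta, kd, ks, n) - measured) ** 2)

# Minimal generational GA: tournament selection, blend crossover,
# Gaussian mutation, and elitism of the single best individual.
lo = np.array([0.0, 0.0, 1.0])
hi = np.array([1.0, 1.0, 200.0])
pop = rng.uniform(lo, hi, size=(64, 3))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argmax(scores)].copy()
    # Binary tournament selection of parents.
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    winners = np.where(scores[idx[:, 0], None] > scores[idx[:, 1], None],
                       pop[idx[:, 0]], pop[idx[:, 1]])
    # Blend crossover between consecutive winners.
    a = rng.uniform(size=(len(pop), 1))
    children = a * winners + (1 - a) * np.roll(winners, 1, axis=0)
    # Gaussian mutation, clipped to the parameter bounds.
    children += rng.normal(0, 0.02, children.shape) * (hi - lo)
    pop = np.clip(children, lo, hi)
    pop[0] = elite  # elitism: never lose the best individual

best = pop[0]
```

    The same loop applies unchanged to richer reflectance models; only `phong_lobe` and the parameter bounds change.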

    Woven fabric model creation from a single image

    We present a fast, novel image-based technique for reverse engineering woven fabrics at a yarn level. These models can be used in a wide range of interior design and visual special effects applications. To recover our pseudo-Bidirectional Texture Function (BTF), we estimate the three-dimensional (3D) structure and a set of yarn parameters (e.g., yarn width, yarn crossovers) from spatial and frequency domain cues. Drawing inspiration from previous work [Zhao et al. 2012], we solve for the woven fabric pattern and from this build a dataset. In contrast, however, we use a combination of image space analysis and frequency domain analysis, and, in challenging cases, match image statistics with those from previously captured known patterns. Our method determines, from a single digital image captured with a digital single-lens reflex (DSLR) camera under controlled uniform lighting, the woven cloth structure, depth, and albedo, thus removing the need for separately measured depth data. The focus of this work is on the rapid acquisition of woven cloth structure and therefore we use standard approaches to render the results. Our pipeline first estimates the weave pattern, yarn characteristics, and noise statistics using a novel combination of low-level image processing and Fourier analysis. Next, we estimate a 3D structure for the fabric sample using a first-order Markov chain and our estimated noise model as input, also deriving a depth map and an albedo. Our volumetric textile model includes information about the 3D path of the center of the yarns, their variable width, and hence the volume occupied by the yarns, and colors. We demonstrate the efficacy of our approach through comparison images of test scenes rendered using (a) the original photograph, (b) the segmented image, (c) the estimated weave pattern, and (d) the rendered result.
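    The frequency-domain cue can be illustrated with a minimal sketch: the yarn repeat of a plain-weave image shows up as a peak in the 2D Fourier magnitude spectrum, and the peak's distance from the DC term yields the yarn spacing. The synthetic two-grating image below is an assumed stand-in for a fabric photograph, not the paper's data.

```python
import numpy as np

# Synthetic stand-in for a fabric photograph: warp and weft rendered as
# two crossed sinusoidal gratings with a 16-pixel repeat.
n = 256
period = 16
y, x = np.mgrid[0:n, 0:n]
image = np.sin(2 * np.pi * x / period) + np.sin(2 * np.pi * y / period)

# The yarn repeat appears as a peak in the magnitude spectrum; its
# distance from the centred DC term gives the spacing in pixels.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
spectrum[n // 2, n // 2] = 0.0          # suppress the DC component
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
freq = np.hypot(peak[0] - n // 2, peak[1] - n // 2)
estimated_period = n / freq             # pixels per yarn repeat
```

    On a real photograph the same peak-picking would be preceded by windowing and noise suppression, which this sketch omits.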

    Perceptually Validated Cross-Renderer Analytical BRDF Parameter Remapping

    Material appearance of rendered objects depends on the underlying BRDF implementation used by rendering software packages. A lack of standards to exchange material parameters and data between tools means that artists in digital 3D prototyping and design manually match the appearance of materials to a reference image. Since the effect of BRDF parameters on rendered output is often non-uniform and counterintuitive, selecting appropriate parameterisations for BRDF models is far from straightforward. We present a novel BRDF remapping technique that automatically computes a mapping (BRDF Difference Probe) to match the appearance of a source material model to a target one. Through quantitative analysis, four user studies, and psychometric scaling experiments, we validate our remapping framework and demonstrate that it yields a visually faithful remapping among analytical BRDFs. Most notably, our results show that even when the characteristics of the models are substantially different, such as in the case of a phenomenological model and a physically-based one, our remapped renderings are indistinguishable from the original source model.
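    A brute-force flavour of such a parameter remapping can be sketched as a lookup table between two lobe models. The models (`phong`, `gauss`) and the plain L2 metric over the hemisphere are illustrative assumptions, far simpler than the paper's perceptually validated BRDF Difference Probe.

```python
import numpy as np

theta = np.linspace(0.0, np.pi / 2, 90)

def phong(theta, n):
    # Source model: a Phong-style specular lobe with exponent n.
    return np.cos(theta) ** n

def gauss(theta, r):
    # Target model: a Gaussian lobe parameterised by a roughness r.
    return np.exp(-(theta / r) ** 2)

# For each source exponent, pick the target roughness whose lobe is
# closest in L2 over the sampled angles.
exponents = np.linspace(5, 200, 40)
roughness = np.linspace(0.01, 1.0, 500)
target_lobes = gauss(theta[None, :], roughness[:, None])  # (500, 90)
remap = {}
for n in exponents:
    err = np.mean((target_lobes - phong(theta, n)[None, :]) ** 2, axis=1)
    remap[float(n)] = float(roughness[np.argmin(err)])
```

    As expected, the table is monotone: sharper source lobes (larger exponents) map to smaller target roughness values.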

    Practical Measurement and Reconstruction of Spectral Skin Reflectance

    We present two practical methods for measurement of spectral skin reflectance suited for live subjects, and drive a spectral BSSRDF model of appropriate complexity to match skin appearance in photographs, including human faces. Our primary measurement method illuminates a subject with two complementary uniform spectral illumination conditions, using a multispectral LED sphere, to estimate spatially varying chromophore parameters, including melanin and hemoglobin concentration, melanin blend-type fraction, and epidermal hemoglobin fraction. We demonstrate that our proposed complementary measurements enable higher-quality estimates of chromophores than those obtained using standard broadband illumination, while being suitable for integration with multiview facial capture using regular color cameras. Besides these optimal measurements under controlled illumination, we also demonstrate how to adapt practical skin patch measurements using a hand-held dermatological skin measurement device, a Miravex Antera 3D camera, for skin appearance reconstruction and rendering. Furthermore, we introduce a novel approach for parameter estimation given the measurements using neural networks, which is significantly faster than a lookup table search and avoids parameter quantization. We demonstrate high quality matches of skin appearance with photographs for a variety of skin types with our proposed practical measurement procedures, including photorealistic spectral reproduction and renderings of facial appearance.
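    The lookup-table search that the neural network replaces can be sketched as follows. The two-chromophore `forward` model and all its constants are toy assumptions standing in for the spectral BSSRDF; only the inversion pattern, and its quantization to the table grid, is the point.

```python
import numpy as np

def forward(melanin, hemoglobin):
    # Toy forward model: maps two chromophore concentrations to three
    # band reflectances via assumed absorption coefficients.
    return np.stack([np.exp(-2.0 * melanin),
                     np.exp(-1.0 * melanin - 3.0 * hemoglobin),
                     np.exp(-0.5 * melanin - 1.0 * hemoglobin)], axis=-1)

# Quantized lookup table over the parameter domain.
m = np.linspace(0.0, 1.0, 101)
h = np.linspace(0.0, 1.0, 101)
M, H = np.meshgrid(m, h, indexing="ij")
table = forward(M.ravel(), H.ravel())              # (101*101, 3)
params = np.stack([M.ravel(), H.ravel()], axis=-1)

def estimate(measurement):
    # Nearest-neighbour search over the table; a regressor trained on
    # (table, params) pairs would replace this step, avoiding both the
    # per-pixel search cost and the grid quantization.
    i = np.argmin(np.sum((table - measurement) ** 2, axis=1))
    return params[i]

truth = np.array([0.37, 0.62])
est = estimate(forward(truth[0], truth[1]))
```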

    Objective estimation of body condition score by modeling cow body shape from digital images.

    Body condition score (BCS) is considered an important tool for management of dairy cattle. The feasibility of estimating BCS from digital images has been demonstrated in recent work. Regression machines have been successfully employed for automatic BCS estimation, taking into account information on the overall shape or information extracted at anatomical points of the shape. Despite the progress in this research area, such studies have not addressed the problem of modeling the shape of cows to build a robust descriptor for automatic BCS estimation. Moreover, a benchmark data set of images meant as a point of reference for quantitative evaluation and comparison of different automatic BCS estimation methods is lacking. The main objective of this study was to develop a technique able to describe the body shape of cows in a reconstructive way. Images, used to build a benchmark data set for developing an automatic system for BCS, were taken using a camera placed above an exit gate from the milking robot. The camera was positioned 3 m from the ground, in a position to capture images of the rear, dorsal pelvic, and loin area of cows. The BCS of each cow was estimated on site by 2 technicians and associated with the cow images. The benchmark data set contains 286 images with associated BCS, anatomical points, and shapes, and was used for quantitative evaluation. A set of example cow body shapes was created. Linear and polynomial kernel principal component analysis was used to reconstruct shapes of cows as a linear combination of basic shapes constructed from the example database. In this manner, a cow's body shape was described by considering her variability from the average shape. The method produced a compact description of the shape to be used for automatic estimation of BCS. Model validation showed that the polynomial model proposed in this study performs better (error = 0.31) than other state-of-the-art methods in estimating BCS, even at the extreme values of the BCS scale.
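    The linear variant of this shape model can be sketched as follows: example shapes define a principal-component basis, and a new shape is described by its deviation from the average shape as a linear combination of basis modes. The elliptical contour family below is a toy stand-in for the cow-shape database.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy example shapes: closed contours sampled at 50 points, generated
# as ellipses whose radii vary across the example set.
angles = np.linspace(0, 2 * np.pi, 50, endpoint=False)

def contour(a, b):
    # A shape is the concatenated x and y coordinates of its points.
    return np.concatenate([a * np.cos(angles), b * np.sin(angles)])

examples = np.stack([contour(1.0 + 0.3 * rng.standard_normal(),
                             0.6 + 0.2 * rng.standard_normal())
                     for _ in range(40)])

# Linear PCA: deviations from the mean shape, expressed in the basis
# of principal components (right singular vectors).
mean = examples.mean(axis=0)
U, S, Vt = np.linalg.svd(examples - mean, full_matrices=False)
basis = Vt[:2]                       # two modes suffice for this family

def reconstruct(shape):
    coeffs = basis @ (shape - mean)  # project onto the modes
    return mean + coeffs @ basis     # linear combination of basic shapes

# A novel shape from the same family is reconstructed from its
# low-dimensional description.
novel = contour(1.1, 0.55)
error = np.linalg.norm(reconstruct(novel) - novel)
```

    The polynomial kernel version replaces the inner products above with kernel evaluations, at the cost of an explicit pre-image step for reconstruction.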

    Towards a Consistent, Tool Independent Virtual Material Appearance

    Material appearance is currently largely tool dependent, and delivering a consistent visual result requires time, labour, and computational cost. Within industry, the development of a project is often based on a virtual model, usually developed through collaboration among several departments that exchange data. Unfortunately, in most cases a virtual material does not appear the same as the original once imported into a different renderer, due to differing algorithms and settings. The aim of this research is to provide artists with a general solution, applicable regardless of the file format and software used, that allows them to make the output of their renderer visually uniform with a reference application, arbitrarily selected within an industry, to which renderings obtained with other software are matched. We propose to characterise the appearance of several classes of materials rendered using the reference software by extracting relevant visual characteristics. By repeating the same process for any other renderer, we can derive ad-hoc mapping functions between the two renderers. Our approach allows us to hallucinate the appearance of a scene, depicting mainly the selected classes of materials, as it would appear under the reference software.
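    Deriving an ad-hoc mapping function between two renderers can be sketched, under strong simplifying assumptions, as a regression between matched visual statistics. The single "mean luminance" statistic and the simulated tone responses below are hypothetical, standing in for the richer visual characteristics the work extracts.

```python
import numpy as np

# Hypothetical paired statistics: one material class rendered in a
# reference renderer and in a second renderer at matched settings.
# The second renderer is simulated with a mildly nonlinear response.
settings = np.linspace(0.05, 1.0, 20)
reference_stat = settings                          # reference output
other_stat = 1.2 * settings - 0.2 * settings ** 2  # other renderer

# Ad-hoc mapping function from the other renderer's statistics to the
# reference ones, fitted by least-squares polynomial regression.
coeffs = np.polyfit(other_stat, reference_stat, deg=3)
to_reference = np.poly1d(coeffs)

# Applying the mapping adjusts output from the second tool toward the
# reference appearance.
adjusted = to_reference(other_stat)
```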


    DIY absolute tele-colorimeter using a camera-projector system

    Image-based reflectance measurement setups lower costs and increase the speed of reflectance acquisition. Unfortunately, consumer camera sensors are designed to produce aesthetically pleasing images rather than to faithfully capture the colors of a scene. We present a novel approach for colorimetric camera characterization, which exploits a commonly available projector as a controllable light source and accurately relates the camera sensor response to the known reflected radiance. The characterized camera can be effectively used as a 2D tele-colorimeter, suitable for image-based reflectance measurements, spectral prefiltering and spectral up-sampling for rendering, and for improving color accuracy in HDR imaging. We demonstrate our method in the context of radiometric compensation. Coupled with a gamut-mapping technique, it allows images to be seamlessly projected on almost any surface, including non-flat, colored, or even textured ones.
    © ACM, 2018. This is the author's version of the work, posted here by permission of ACM for personal use, not for redistribution. The definitive version was published in ACM Transactions on Computing Education, https://doi.org/10.1145/3214745.321477
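    The characterization step can be sketched, under simplifying assumptions, as a linear least-squares fit from camera responses to known tristimulus values. The simulated 3x3 sensor mixing and noise level below are stand-ins for real measurements of known reflected radiance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: camera RGB responses paired with known
# XYZ tristimulus values measured under the projector's controlled
# illumination. The "camera" is simulated as a linear mixing of the
# true XYZ values plus sensor noise.
true_xyz = rng.uniform(0.05, 1.0, size=(100, 3))
mixing = np.array([[0.9, 0.2, 0.0],
                   [0.1, 0.8, 0.1],
                   [0.0, 0.1, 0.9]])
camera_rgb = true_xyz @ mixing.T + rng.normal(0.0, 0.002, (100, 3))

# Colorimetric characterization as a least-squares 3x3 matrix that
# maps camera RGB to XYZ; real characterizations often add a
# linearization stage and higher-order terms first.
M, *_ = np.linalg.lstsq(camera_rgb, true_xyz, rcond=None)

predicted = camera_rgb @ M
rms = np.sqrt(np.mean((predicted - true_xyz) ** 2))
```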
