15 research outputs found

    BRDF Representation and Acquisition

    Photorealistic rendering of real-world environments is important in a range of different areas, including visual special effects, interior/exterior modelling, architectural modelling, cultural heritage, computer games, and automotive design. Currently, rendering systems are able to produce photorealistic simulations of the appearance of many real-world materials. In the real world, a viewer's perception of an object depends on the lighting and on the object's surface and material characteristics: how the surface interacts with light, how light is reflected, scattered, and absorbed by the surface, and the impact these interactions have on material appearance. Reproducing this requires an understanding of how materials interact with light, which is why the representation and acquisition of material models has become such an active research area. This state-of-the-art survey of BRDF representation and acquisition presents an overview of the Bidirectional Reflectance Distribution Function (BRDF) models used to represent surface and material reflection characteristics, and describes current acquisition methods for the capture and rendering of photorealistic materials.
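    To make the BRDF concrete, the following is a minimal sketch of one classic analytic model, a normalized Lambertian-plus-Phong BRDF. The function name, parameter defaults, and normalization choice are illustrative assumptions, not taken from the survey:

```python
import numpy as np

def phong_brdf(n, wi, wo, kd=0.8, ks=0.2, shininess=32.0):
    """Evaluate a simple Lambertian-plus-Phong BRDF.

    n, wi, wo: unit vectors (surface normal, incoming and outgoing light
    directions, both pointing away from the surface).
    Returns a scalar reflectance value (per steradian).
    """
    n, wi, wo = (np.asarray(v, float) for v in (n, wi, wo))
    if np.dot(n, wi) <= 0 or np.dot(n, wo) <= 0:
        return 0.0  # directions below the horizon: no reflection
    # Diffuse term: constant kd/pi, so the diffuse albedo integrates to kd.
    diffuse = kd / np.pi
    # Specular term: mirror-reflect wi about n and compare with wo.
    r = 2.0 * np.dot(n, wi) * n - wi
    specular = ks * (shininess + 2) / (2 * np.pi) * max(np.dot(r, wo), 0.0) ** shininess
    return diffuse + specular
```

    Analytic models like this trade accuracy for compactness; the survey contrasts them with measured, data-driven representations.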

    BxDF material acquisition, representation, and rendering for VR and design

    Photorealistic and physically-based rendering of real-world environments with high-fidelity materials is important to a range of applications, including special effects, architectural modelling, cultural heritage, computer games, automotive design, and virtual reality (VR). Our perception of the world depends on lighting and surface material characteristics, which determine how light is reflected, scattered, and absorbed. In order to reproduce appearance, we must therefore understand all the ways objects interact with light, and the acquisition and representation of materials has thus been an important part of computer graphics since its early days. Nevertheless, no material model or acquisition setup is without limitations in terms of the variety of materials represented, and different approaches vary widely in terms of compatibility and ease of use. In this course, we describe the state of the art in material appearance acquisition and modelling, ranging from mathematical BSDFs to data-driven capture and representation of anisotropic materials, and volumetric/thread models for patterned fabrics. We further address the problem of material appearance constancy across different rendering platforms. We present two case studies in architectural and interior design. The first demonstrates Yulio, a new platform for the creation, delivery, and visualization of acquired material models and reverse-engineered cloth models in immersive VR experiences. The second shows an end-to-end process of capture and data-driven BSDF representation using the physically-based Radiance system for lighting simulation and rendering.

    Single view reflectance capture using multiplexed scattering and time-of-flight imaging

    This paper introduces the concept of time-of-flight reflectance estimation, and demonstrates a new technique that allows a camera to rapidly acquire reflectance properties of objects from a single viewpoint, over relatively long distances and without encircling equipment. We measure material properties by indirectly illuminating an object with a laser source, and observing its reflected light indirectly using a time-of-flight camera. The configuration collectively acquires dense angular, but low spatial, sampling within a limited solid angle range - all from a single viewpoint. Our ultra-fast imaging approach captures space-time "streak images" that can separate out different bounces of light based on path length. Entanglements arise in the streak images, mixing signals from multiple paths if they have the same total path length. We show how reflectances can be recovered by solving a linear system of equations and assuming parametric material models; fitting to lower-dimensional reflectance models enables us to disentangle measurements. We demonstrate proof-of-concept results of parametric reflectance models for homogeneous and discretized heterogeneous patches, both in simulation and on experimental hardware. As compared to lengthy or highly calibrated BRDF acquisition techniques, we demonstrate a device that can rapidly, on the order of seconds, capture meaningful reflectance information. We expect hardware advances to improve the portability and speed of this device. National Science Foundation (U.S.) (Award CCF-0644175); National Science Foundation (U.S.) (Award CCF-0811680); National Science Foundation (U.S.) (Award IIS-1011919); Intel Corporation (PhD Fellowship); Alfred P. Sloan Foundation (Research Fellowship).
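    The recovery step described above - solving a linear system under a parametric material model to disentangle paths of equal length - can be sketched with a toy forward operator. The matrix, dimensions, and noise level below are illustrative assumptions, not the paper's calibrated setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: each column of A is the time profile that a
# unit-weight candidate reflectance lobe would contribute to one streak-image
# column; paths of equal total length sum into the same time bin.
n_bins, n_params = 64, 4
A = rng.random((n_bins, n_params))

x_true = np.array([0.7, 0.1, 0.0, 0.2])             # ground-truth lobe mixture
y = A @ x_true + 1e-4 * rng.standard_normal(n_bins)  # noisy streak measurement

# Disentangle the mixed paths: ordinary least squares on the linear system.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

    With a well-conditioned operator and low noise, the fitted mixture matches the ground truth closely; the paper's contribution is building a physically meaningful A from time-of-flight geometry.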

    Material Recognition Meets 3D Reconstruction: Novel Tools for Efficient, Automatic Acquisition Systems

    For decades, the accurate acquisition of geometry and reflectance properties has represented one of the major objectives in computer vision and computer graphics with many applications in industry, entertainment and cultural heritage. Reproducing even the finest details of surface geometry and surface reflectance has become a ubiquitous prerequisite in visual prototyping, advertisement or digital preservation of objects. However, today's acquisition methods are typically designed for only a rather small range of material types. Furthermore, there is still a lack of accurate reconstruction methods for objects with a more complex surface reflectance behavior beyond diffuse reflectance. In addition to accurate acquisition techniques, the demand for creating large quantities of digital contents also pushes the focus towards fully automatic and highly efficient solutions that allow for masses of objects to be acquired as fast as possible. This thesis is dedicated to the investigation of basic components that allow an efficient, automatic acquisition process. We argue that such an efficient, automatic acquisition can be realized when material recognition "meets" 3D reconstruction and we will demonstrate that reliably recognizing the materials of the considered object allows a more efficient geometry acquisition. Therefore, the main objectives of this thesis are given by the development of novel, robust geometry acquisition techniques for surface materials beyond diffuse surface reflectance, and the development of novel, robust techniques for material recognition. In the context of 3D geometry acquisition, we introduce an improvement of structured light systems, which are capable of robustly acquiring objects ranging from diffuse surface reflectance to even specular surface reflectance with a sufficient diffuse component. 
We demonstrate that the resolution of the reconstruction can be increased significantly for multi-camera, multi-projector structured light systems by using the overlap of patterns projected under different projector poses. As the reconstructions obtained with such triangulation-based techniques still contain high-frequency noise due to inaccurately localized correspondences between images acquired under different viewpoints, we furthermore introduce a novel geometry acquisition technique that complements the structured light system with additional photometric normals and results in significantly more accurate reconstructions. In addition, we present a novel method to acquire the 3D shape of mirroring objects with complex surface geometry. These investigations on 3D reconstruction are accompanied by the development of novel tools for reliable material recognition, which can be used in an initial step to recognize the surface materials present and, hence, to efficiently select the appropriate acquisition technique for each classified material. In the scope of this thesis, we therefore focus on material recognition both for scenarios with controlled illumination, as found in lab environments, and for scenarios with natural illumination, as found in photographs of typical daily-life scenes. Finally, based on the techniques developed in this thesis, we provide novel concepts towards efficient, automatic acquisition systems.
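    The photometric-normals component mentioned above is, in its classic Lambertian form, a small linear problem per pixel. The sketch below (the light directions, albedo, and normal are made-up values, not the thesis's setup) recovers albedo and normal from three intensity measurements:

```python
import numpy as np

# Hypothetical setup: three distant light directions (rows of L) and the
# per-pixel intensities they produce on a Lambertian surface, I = L @ (rho*n).
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],   # roughly unit-length directions
              [0.0, 0.7, 0.714]])
rho = 0.8                                   # diffuse albedo
n_true = np.array([0.1, 0.2, 0.9747])       # (approximately unit) normal
I = L @ (rho * n_true)                      # the three observed intensities

# Recover the scaled normal by solving the 3x3 linear system, then split it
# into albedo (its length) and surface normal (its direction).
g = np.linalg.solve(L, I)
rho_hat = np.linalg.norm(g)
n_hat = g / rho_hat
```

    In practice many more lights are used and the system is solved in a least-squares sense, with non-Lambertian effects handled separately; that is where the thesis's structured-light complement comes in.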

    Practical surface light fields

    The rendering of photorealistic surface appearance is one of the main challenges facing modern computer graphics. Image-based approaches have become increasingly important because they can capture the appearance of a wide variety of physical surfaces with complex reflectance behavior. In this dissertation, I focus on surface light fields, an image-based representation of view-dependent and spatially-varying appearance. Constructing a surface light field can be a time-consuming and tedious process. The data sizes are quite large, often requiring multiple gigabytes to represent complex reflectance properties. The result can only be viewed after a lengthy post-process is complete, so it can be difficult to determine when the light field is sufficiently sampled. Often, uncertainty about the sampling density leads users to capture many more images than necessary in order to guarantee adequate coverage. To address these problems, I present several approaches that simplify the capture of surface light fields. The first is a “human-in-the-loop” interactive feedback system based on the online SVD. As each image is captured, it is incorporated into the representation in a streaming fashion and displayed to the user. In this way, the user receives direct feedback about the capture process, and can use this feedback to improve the sampling. To avoid the problems of discretization and resampling, I use incremental weighted least squares, a form of radial basis function interpolation that allows incremental local construction and fast rendering on graphics hardware. Lastly, I address the limitation of fixed lighting by describing a system that captures the surface light field of an object under synthetic lighting.
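    Folding each newly captured image into a truncated SVD without recomputing from scratch can be sketched as follows. This is a simplified rank-k update in the spirit of online-SVD methods; the details are illustrative, not the dissertation's implementation:

```python
import numpy as np

def incremental_svd(U, S, c, k):
    """Fold one new data column c into a rank-k SVD (U, S) of the columns
    seen so far. Sketch of an online-SVD update: project c onto the current
    basis, append the residual direction, re-diagonalise the small core
    matrix, and truncate back to rank k. (Right singular vectors dropped.)"""
    if U is None:                               # first column: trivial SVD
        s = np.linalg.norm(c)
        return (c / s).reshape(-1, 1), np.array([s])
    p = U.T @ c                                 # coefficients in current basis
    r = c - U @ p                               # residual orthogonal to basis
    rn = np.linalg.norm(r)
    u_r = (r / rn).reshape(-1, 1) if rn > 1e-12 else np.zeros((U.shape[0], 1))
    # Small (m+1)x(m+1) core: old singular values plus the new column.
    K = np.block([[np.diag(S), p.reshape(-1, 1)],
                  [np.zeros((1, len(S))), np.array([[rn]])]])
    Uk, Sk, _ = np.linalg.svd(K)
    U_new = np.hstack([U, u_r]) @ Uk            # rotate the extended basis
    return U_new[:, :k], Sk[:k]                 # truncate back to rank k
```

    Each captured image, flattened to a column, costs one small SVD instead of a full recomputation, which is what makes streaming "capture-and-view" feedback feasible.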

    Compressive Higher-order Sparse and Low-Rank Acquisition with a Hyperspectral Light Stage

    Compressive sparse and low-rank recovery (CSLR) is a novel method for compressed sensing that derives low-rank and sparse data terms from randomized projection measurements. While previous approaches either applied compressive measurements to phenomena assumed to be sparse or explicitly assumed and measured low-rank approximations, CSLR remains robust if either assumption is violated. In this paper, we derive CSLR using Fixed-Point Continuation algorithms, and extend this approach to exploit the correlation in higher-order dimensions to further reduce the number of captured samples. Though generally applicable, we demonstrate the effectiveness of our approach on data sets captured with a novel hyperspectral light stage that can emit a distinct spectrum from each of its 196 light source directions, enabling bispectral measurements of reflectance from arbitrary viewpoints. Bispectral reflectance fields and BTFs are faithfully reconstructed from a small number of compressed measurements.
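    A stripped-down flavor of sparse-plus-low-rank separation can be sketched with alternating hard projections. This is a GoDec-style simplification on fully observed synthetic data; the paper's actual method uses Fixed-Point Continuation with soft thresholding on compressive measurements, and all sizes below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rank, n_spikes = 40, 1, 6

# Synthetic measurement: a rank-1 "reflectance" matrix plus a few large
# sparse outliers (illustrative stand-in for structured capture data).
L_true = np.outer(rng.standard_normal(n), rng.standard_normal(n))
S_true = np.zeros((n, n))
idx = rng.choice(n * n, n_spikes, replace=False)
S_true.flat[idx] = 10.0
M = L_true + S_true

# Alternate a hard rank-r projection with a hard sparsity projection.
S = np.zeros_like(M)
for _ in range(50):
    U, s, Vt = np.linalg.svd(M - S)
    Lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # best rank-r fit
    R = M - Lr                                      # residual
    S = np.zeros_like(M)
    top = np.argsort(np.abs(R), axis=None)[-n_spikes:]
    S.flat[top] = R.flat[top]                       # keep largest residuals
```

    The point of CSLR is that keeping both terms makes recovery robust when the data are neither purely sparse nor purely low-rank.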

    Image based surface reflectance remapping for consistent and tool independent material appearance

    Physically-based rendering in computer graphics requires knowledge of material properties in addition to 3D shapes, textures, and colors in order to solve the rendering equation. A number of material models have been developed, since no single model is currently able to reproduce the full range of available materials. Although only a few material models have been widely adopted in current rendering systems, the lack of standardisation causes several issues in the 3D modelling workflow, leading to a heavy tool dependency of material appearance. In industry, final decisions about products are often based on a virtual prototype, a crucial step in the production pipeline, usually developed through a collaboration among several departments, which exchange data. Unfortunately, exchanged data often differs from the original when imported into a different application. As a result, delivering consistent visual results requires time, labour, and computational cost. This thesis begins with an examination of the current state of the art in material appearance representation and capture, in order to identify a suitable strategy for tackling material appearance consistency. Automatic solutions to this problem are suggested in this work, accounting for the constraints of real-world scenarios, where the only available information is a reference rendering and the renderer used to obtain it, with no access to the implementation of the shaders. In particular, two image-based frameworks that work under these constraints are proposed. The first, validated by means of perceptual studies, is aimed at the remapping of BRDF parameters and is useful when the parameters used for the reference rendering are available. The second provides consistent material appearance across different renderers even when the parameters used for the reference are unknown: it allows the selection of an arbitrary reference rendering tool, and manipulates the output of other renderers in order to be consistent with the reference.
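    The black-box, image-based remapping idea - matching a reference rendering by searching the target renderer's parameter space - can be sketched with two toy "renderers". Both functions and all parameter values below are illustrative assumptions, not the thesis's shaders:

```python
import numpy as np

# Toy stand-ins for two black-box renderers that shade the same highlight
# with differently parameterised Phong-like models.
angles = np.linspace(0.0, np.pi / 2, 64)        # sampled half-angles ("pixels")

def renderer_a(kd, shininess):                  # reference tool's shader
    return kd + np.cos(angles) ** shininess

def renderer_b(kd, roughness):                  # target tool: exponent = 1/roughness
    return kd + np.cos(angles) ** (1.0 / roughness)

reference = renderer_a(kd=0.5, shininess=20.0)  # the only data we are given

# Image-based remapping: search the target renderer's parameter space for the
# settings that minimise the pixel-wise difference to the reference rendering.
best = min(
    ((kd, r) for kd in np.linspace(0.0, 1.0, 21)
             for r in np.linspace(0.01, 0.2, 39)),
    key=lambda p: np.sum((renderer_b(*p) - reference) ** 2),
)
```

    A brute-force grid works here because the toy has two parameters; the thesis's frameworks address the same matching problem with realistic shaders and perceptually validated objectives.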

    Multibounce light transport analysis using ultrafast imaging for material acquisition

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 92-96). This thesis introduces a novel framework for the analysis of multibounce light transport using time-of-flight imaging, for the applications of ultrafast reflectance acquisition and imaging through scattering media. Using ultrafast imaging and ultrafast illumination, we analyze light indirectly scattered off materials to provide new insights into the important problem of material acquisition. We use an elegant matrix-based representation of light transport, which enables scene reconstruction using standard optimization techniques. We demonstrate the accuracy and efficiency of our methods using various simulations as well as an experimental setup. In particular, we develop the concept of 'in the wild' reflectance estimation using ultrafast imaging. We demonstrate a new technique that allows a camera to rapidly acquire reflectance properties of objects from a single viewpoint, over relatively long distances and without encircling equipment. We measure material properties by indirectly illuminating an object with a laser source, and observing its reflected light indirectly using a time-of-flight camera. As compared to lengthy or highly calibrated reflectance acquisition techniques, we demonstrate a device that can rapidly and simultaneously capture meaningful reflectance information of multiple materials. Furthermore, we use this framework to develop a method for imaging through scattering media using ultrafast imaging. We capture the diffuse scattering in the scene with a time-of-flight camera and analyze the multibounce light transport to recover albedo and depth information of planar objects hidden behind a diffuser. The methods developed in this thesis using ultrafast imaging can spur research with novel real-time applications in computer graphics, medical imaging, and industrial photography. by Nikhil Naik, S.M.

    Estimating motion, size and material properties of moving non-line-of-sight objects in cluttered environments

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 111-117). This thesis presents a framework for non-line-of-sight computer vision techniques using wave fronts. Using short-pulse illumination and a high-speed time-of-flight camera, we propose algorithms that use multipath light transport analysis to explore environments beyond the line of sight. What is moving around the corner interests everyone, from a driver taking a turn to a surgeon performing laparoscopy or a soldier entering an enemy base. State-of-the-art range-imaging techniques are limited by (i) the inability to handle multiple diffuse bounces [LIDAR], (ii) wavelength-dependent resolution limits [RADAR], and (iii) the inability to map real-life objects [diffuse optical tomography]. This work presents a framework for (a) imaging the changing space-time impulse responses of moving objects to pulsed illumination, (b) tracking the motion and absolute positions of these hidden objects, and (c) recognizing properties such as their material, size, and reflectance. We capture gated space-time impulse responses of the scene, and their time differentials allow us to gauge the absolute positions of moving objects with knowledge of only relative times of arrival (as absolute times are hard to synchronize at femtosecond intervals). Since we record responses at very short time intervals, we collect multiple readings from different points of illumination, capturing multi-perspective responses that allow us to estimate reflectance properties. Using these, we categorize and give parametric models of the materials around the corner. We hope this work inspires further exploration of NLOS computer vision techniques. by Rohit Pandharkar, S.M.