
    Multispectral RTI Analysis of Heterogeneous Artworks

    We propose a novel multi-spectral reflectance transformation imaging (MS-RTI) framework for the acquisition and direct analysis of the reflectance behavior of heterogeneous artworks. Starting from free-form acquisitions, we compute per-pixel calibrated multi-spectral appearance profiles, which associate a reflectance value with each sampled light direction and frequency. Visualization, relighting, and feature extraction are performed directly on appearance-profile data, applying scattered-data interpolation based on Radial Basis Functions to estimate per-pixel reflectance for novel lighting directions. We demonstrate how the proposed solution can convey more insight into object materials and geometric details than classical multi-light methods, which rely on fitting low-frequency analytical models, possibly combined with separate handling of high-frequency components, and hence require constraining priors on material behavior. The flexibility of our approach is illustrated on two heterogeneous case studies, a painting and a dark shiny metallic sculpture, that showcase feature extraction, visualization, and analysis of high-frequency properties of artworks using multi-light, multi-spectral (visible, UV, and IR) acquisitions.
    Funding: European Union (EU) Horizon 2020, Action H2020-EU.3.6.3 (Reflective societies - cultural heritage and European identity), project Scan4Reco, grant number 665091; the DSURF (PRIN 2015) project funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa.
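The per-pixel estimation step described above can be sketched as standard RBF scattered-data interpolation: solve for weights at the sampled light directions, then evaluate at a novel direction. This is a minimal sketch, not the authors' implementation; the Gaussian kernel, the `eps` shape parameter, and the function name `rbf_relight` are all assumptions for illustration.

```python
import numpy as np

def rbf_relight(light_dirs, reflectances, novel_dir, eps=2.0):
    """Estimate one pixel's reflectance for a novel light direction.

    light_dirs:   (n, 3) sampled light directions (unit vectors)
    reflectances: (n,)   measured reflectance at those directions
    novel_dir:    (3,)   unit vector for the novel light direction
    (hypothetical data layout, for illustration only)
    """
    def phi(r):
        # Gaussian radial basis function on Euclidean distance
        return np.exp(-(eps * r) ** 2)

    # Solve the interpolation system Phi @ w = reflectances
    d = np.linalg.norm(light_dirs[:, None, :] - light_dirs[None, :, :], axis=-1)
    w = np.linalg.solve(phi(d), reflectances)

    # Evaluate the interpolant at the novel direction
    r_new = np.linalg.norm(light_dirs - novel_dir, axis=-1)
    return phi(r_new) @ w
```

Because this is exact interpolation, evaluating at any sampled direction reproduces the measured value; in practice one such solve is done per pixel and per spectral band.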

    Enhanced dynamic reflectometry for relightable free-viewpoint video

    Free-viewpoint video of human actors allows photo-realistic rendering of real-world people under novel viewing conditions. Dynamic reflectometry extends the concept of free-viewpoint video and additionally allows rendering under novel lighting conditions. In this work, we present an enhanced method for capturing human shape and motion as well as dynamic surface reflectance properties from a sparse set of input video streams. We augment our initial method for model-based relightable free-viewpoint video in several ways. First, a single-skin mesh is introduced for the continuous appearance of the model. Second, an algorithm is presented that detects and compensates lateral shifting of textiles in order to improve temporal texture registration. Finally, a structured resampling approach is introduced that enables reliable estimation of spatially varying surface reflectance despite a static recording setup. The new algorithmic ingredients, together with the relightable 3D video framework, enable us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware.

    Multibounce light transport analysis using ultrafast imaging for material acquisition

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 92-96). By Nikhil Naik.
    This thesis introduces a novel framework for analysis of multibounce light transport using time-of-flight imaging, applied to ultrafast reflectance acquisition and imaging through scattering media. Using ultrafast imaging and ultrafast illumination, we analyze light indirectly scattered off materials to provide new insights into the important problem of material acquisition. We use an elegant matrix-based representation of light transport, which enables scene reconstruction using standard optimization techniques. We demonstrate the accuracy and efficiency of our methods using various simulations as well as an experimental setup. In particular, we develop the concept of 'in the wild' reflectance estimation using ultrafast imaging. We demonstrate a new technique that allows a camera to rapidly acquire reflectance properties of objects from a single viewpoint, over relatively long distances, and without encircling equipment. We measure material properties by indirectly illuminating an object with a laser source and observing its reflected light indirectly using a time-of-flight camera. Compared to lengthy or highly calibrated reflectance acquisition techniques, we demonstrate a device that can rapidly and simultaneously capture meaningful reflectance information of multiple materials. Furthermore, we use this framework to develop a method for imaging through scattering media using ultrafast imaging. We capture the diffuse scattering in the scene with a time-of-flight camera and analyze the multibounce light transport to recover albedo and depth information of planar objects hidden behind a diffuser. The methods developed in this thesis can spur research with novel real-time applications in computer graphics, medical imaging, and industrial photography.
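The matrix-based light transport representation mentioned above reduces to a linear model y = T x, where the unknown scene parameters x are recovered from time-resolved measurements y by standard optimization. A minimal sketch of such a recovery step, assuming a known transport matrix and using Tikhonov-regularized least squares (the function name and regularization choice are illustrative, not the thesis's exact formulation):

```python
import numpy as np

def reconstruct_scene(T, measurements, reg=1e-3):
    """Recover scene unknowns x from y = T @ x.

    T:            (m, n) light transport matrix (m time/pixel samples, n unknowns)
    measurements: (m,)   time-resolved observations
    reg:          Tikhonov regularization weight for noise robustness
    """
    n = T.shape[1]
    # Solve the normal equations (T^T T + reg*I) x = T^T y
    A = T.T @ T + reg * np.eye(n)
    return np.linalg.solve(A, T.T @ measurements)
```

With clean data and small `reg`, this recovers x up to solver accuracy; in practice the regularizer trades noise suppression against bias.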

    Estimating motion, size and material properties of moving non-line-of-sight objects in cluttered environments

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 111-117). By Rohit Pandharkar.
    This thesis presents a framework for non-line-of-sight (NLOS) computer vision techniques using wave fronts. Using short-pulse illumination and a high-speed time-of-flight camera, we propose algorithms that use multipath light transport analysis to explore environments beyond the line of sight. What is moving around the corner interests everyone, including a driver taking a turn, a surgeon performing laparoscopy, and a soldier entering an enemy base. State-of-the-art range imaging techniques are limited by (i) an inability to handle multiple diffuse bounces (LIDAR), (ii) wavelength-dependent resolution limits (RADAR), and (iii) an inability to map real-life objects (diffuse optical tomography). This work presents a framework for (a) imaging the changing space-time impulse responses of moving objects to pulsed illumination, (b) tracking the motion and absolute positions of these hidden objects, and (c) recognizing properties such as material, size, and reflectance. We capture gated space-time impulse responses of the scene, and their time differentials allow us to gauge the absolute positions of moving objects using only relative times of arrival (as absolute times are hard to synchronize at femtosecond intervals). Since we record responses at very short time intervals, we collect multiple readings from different points of illumination; capturing such multi-perspective responses allows us to estimate reflectance properties. Using this, we categorize the materials around the corner and give parametric models for them. We hope this work inspires further exploration of NLOS computer vision techniques.
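The idea of localizing a hidden object from relative times of arrival alone can be illustrated with a toy grid search: hypothesize a hidden position, predict round-trip path lengths to each visible wall patch, and compare only the differences of arrival times, since absolute synchronization is unavailable. This is a simplified 2D sketch under idealized assumptions (single hidden point, known wall patches, unit propagation speed); the names `locate_hidden_point`, `wall_pts`, and `candidates` are hypothetical.

```python
import numpy as np

def locate_hidden_point(wall_pts, rel_times, candidates, c=1.0):
    """Grid-search a hidden point from relative times of arrival.

    wall_pts:   (k, 2) visible wall patches acting as virtual sources/sensors
    rel_times:  (k,)   arrival times relative to the first patch (t_i - t_0)
    candidates: (m, 2) candidate hidden positions to test
    c:          propagation speed
    """
    best, best_err = None, np.inf
    for p in candidates:
        # Round-trip path: wall patch -> hidden point -> same wall patch
        t = 2.0 * np.linalg.norm(wall_pts - p, axis=1) / c
        # Compare time *differentials* only; absolute offset cancels out
        err = np.sum((t - t[0] - rel_times) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best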

    Measuring and simulating haemodynamics due to geometric changes in facial expression

    The human brain has evolved to be very adept at recognising imperfections in human skin. In particular, observing someone’s facial skin appearance is important in recognising when someone is ill, or when finding a suitable mate. It is therefore a key goal of computer graphics research to produce highly realistic renderings of skin. However, the optical processes that give rise to skin appearance are complex and subtle. To address this, computer graphics research has incorporated increasingly sophisticated models of skin reflectance. These models are generally based on static concentrations of the skin chromophores: melanin and haemoglobin. However, haemoglobin concentrations are far from static, as blood flow is directly driven by both changes in facial expression and emotional state. In this thesis, we explore how blood flow changes as a consequence of changing facial expression, with the aim of producing more accurate models of skin appearance. To build an accurate model of blood flow, we base it on real-world measurements of blood concentrations over time. We describe, in detail, the steps required to obtain blood concentrations from photographs of a subject. These steps are then used to measure blood concentration maps for a series of expressions that span a wide gamut of human expression. From this, we define a blending algorithm that interpolates these maps to generate concentrations for other expressions. This technique, however, requires specialist equipment to capture the maps in the first place. We attempt to rectify this problem by investigating a direct link between changes in facial geometry and haemoglobin concentrations. This requires building a unique capture device that captures both simultaneously. Our analysis hints at a direct linear connection between the two, paving the way for further investigation.
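The map-blending step described above can be sketched as a convex combination of per-expression concentration maps. This is only an illustration of the interpolation idea, assuming the thesis's blending reduces to weighted averaging of aligned maps; the function name and array layout are hypothetical.

```python
import numpy as np

def blend_concentration_maps(maps, weights):
    """Interpolate haemoglobin concentration maps of key expressions.

    maps:    (k, h, w) per-expression concentration maps, spatially aligned
    weights: (k,)      blend weights for the target expression
    """
    maps = np.asarray(maps, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    # Weighted sum over the expression axis
    return np.tensordot(weights, maps, axes=1)
```

For example, equal weights over two expression maps yield their pixel-wise average, while a weight of 1 on a single expression returns that map unchanged.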

    Practical surface light fields

    The rendering of photorealistic surface appearance is one of the main challenges facing modern computer graphics. Image-based approaches have become increasingly important because they can capture the appearance of a wide variety of physical surfaces with complex reflectance behavior. In this dissertation, I focus on surface light fields, an image-based representation of view-dependent and spatially varying appearance. Constructing a surface light field can be a time-consuming and tedious process. The data sizes are quite large, often requiring multiple gigabytes to represent complex reflectance properties. The result can only be viewed after a lengthy post-process is complete, so it can be difficult to determine when the light field is sufficiently sampled. Often, uncertainty about the sampling density leads users to capture many more images than necessary in order to guarantee adequate coverage. To address these problems, I present several approaches that simplify the capture of surface light fields. The first is a “human-in-the-loop” interactive feedback system based on the online SVD. As each image is captured, it is incorporated into the representation in a streaming fashion and displayed to the user. In this way, the user receives direct feedback about the capture process and can use this feedback to improve the sampling. To avoid the problems of discretization and resampling, I use incremental weighted least squares, a variant of radial basis function interpolation that allows incremental local construction and fast rendering on graphics hardware. Lastly, I address the limitation of fixed lighting by describing a system that captures the surface light field of an object under synthetic lighting.
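The streaming incorporation step based on the online SVD can be sketched with a standard rank-one update: project each newly captured image (flattened to a column) onto the current basis, form the residual, and re-diagonalize a small core matrix. This is a generic incremental-SVD sketch in the style of Brand's update, not the dissertation's exact algorithm; the function name and truncation policy are assumptions.

```python
import numpy as np

def incremental_svd_update(U, S, Vt, c, rank):
    """Fold one new column c into a thin SVD U @ diag(S) @ Vt, truncated to `rank`."""
    k, n = S.size, Vt.shape[1]

    # Project the new column onto the current basis; take the residual direction
    p = U.T @ c
    r = c - U @ p
    r_norm = np.linalg.norm(r)
    j = r / r_norm if r_norm > 1e-12 else np.zeros_like(c)

    # Small (k+1) x (k+1) core matrix combining old spectrum and new column
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(S)
    K[:k, -1] = p
    K[-1, -1] = r_norm
    Uk, Sk, Vtk = np.linalg.svd(K)

    # Rotate the expanded bases by the core SVD, then truncate
    U_new = np.hstack([U, j[:, None]]) @ Uk
    W_T = np.zeros((k + 1, n + 1))
    W_T[:k, :n] = Vt
    W_T[-1, -1] = 1.0
    Vt_new = Vtk @ W_T
    return U_new[:, :rank], Sk[:rank], Vt_new[:rank]
```

Each update costs only a small dense SVD of size (k+1), which is what makes per-image feedback feasible during capture; truncating back to a working rank keeps memory bounded as images stream in.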