
    Self-Supervised Intrinsic Image Decomposition

    Intrinsic decomposition from a single image is a highly challenging task, due to its inherent ambiguity and the scarcity of training data. In contrast to traditional fully supervised learning approaches, in this paper we propose learning intrinsic image decomposition by explaining the input image. Our model, the Rendered Intrinsics Network (RIN), joins an image decomposition pipeline, which predicts reflectance, shape, and lighting conditions from a single image, with a recombination function: a learned shading model that recomposes the original input based on the intrinsic image predictions. Our network can then use unsupervised reconstruction error as an additional signal to improve its intermediate representations. This makes large-scale unlabeled data useful during training, and also enables transferring learned knowledge to images of unseen object categories, lighting conditions, and shapes. Extensive experiments demonstrate that our method performs well on both intrinsic image decomposition and knowledge transfer. Comment: NIPS 2017 camera-ready version, project page: http://rin.csail.mit.edu
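    The self-supervised idea above can be sketched with a fixed Lambertian recombination step (a simplifying assumption for illustration only: RIN's actual shading model is learned, and all names below are ours, not the paper's). Predicted reflectance, shape, and lighting are recomposed into an image, and the reconstruction error supplies a training signal that needs no labels.

```python
import numpy as np

def shade(normals, light):
    # Lambertian shading: clamped dot product of surface normals and light direction
    return np.clip(normals @ light, 0.0, None)

def recombine(reflectance, normals, light):
    # Recompose an image from predicted intrinsics: reflectance * shading
    return reflectance * shade(normals, light)[..., None]

def reconstruction_loss(image, reflectance, normals, light):
    # Unsupervised signal: how well the predicted intrinsics explain the input
    return np.mean((recombine(reflectance, normals, light) - image) ** 2)

# Toy check: intrinsics that exactly explain the image give zero loss
normals = np.tile([0.0, 0.0, 1.0], (4, 4, 1))   # flat surface facing the camera
light = np.array([0.0, 0.0, 1.0])               # frontal light
reflectance = np.full((4, 4, 3), 0.5)           # uniform gray albedo
image = recombine(reflectance, normals, light)
print(reconstruction_loss(image, reflectance, normals, light))  # → 0.0
```

    In the paper the decomposition network's predictions feed this recombination step, so the reconstruction error can be backpropagated through both halves on unlabeled images.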

    Reflectance Hashing for Material Recognition

    We introduce a novel method for using reflectance to identify materials. Reflectance offers a unique signature of a material, but it is challenging to measure and to use for recognition because of its high dimensionality. In this work, reflectance is captured in one shot using a unique optical camera that measures reflectance disks, in which pixel coordinates correspond to surface viewing angles. The reflectance has class-specific structure, and angular gradients computed in this reflectance space reveal the material class. These reflectance disks encode discriminative information for efficient and accurate material recognition. We introduce a framework called reflectance hashing that models the reflectance disks with dictionary learning and binary hashing. We demonstrate the effectiveness of reflectance hashing for material recognition on a number of real-world materials.
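    The binary-hashing half of the framework can be sketched as follows (an illustrative assumption: this uses random sign projections, LSH-style, whereas the paper couples a learned dictionary with learned hash functions). Reflectance-disk descriptors are mapped to compact binary codes, and recognition reduces to a Hamming-distance nearest-neighbor lookup.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_hash(features, projections):
    # Sign of random projections yields a compact binary code
    return (features @ projections > 0).astype(np.uint8)

def hamming(a, b):
    # Number of differing bits between two binary codes
    return int(np.sum(a != b))

# Toy reflectance-disk descriptors: 64-D features for three training materials
train = rng.standard_normal((3, 64))
projections = rng.standard_normal((64, 16))   # 16-bit codes
codes = binary_hash(train, projections)

# Query: a slightly noisy observation of material 1 hashes nearest to it
query = train[1] + 0.05 * rng.standard_normal(64)
dists = [hamming(binary_hash(query, projections), c) for c in codes]
print(int(np.argmin(dists)))  # → 1
```

    The appeal of hashing here is exactly the high dimensionality the abstract mentions: comparisons in Hamming space are far cheaper than in the raw reflectance space.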

    What Is Around The Camera?

    How much does a single image reveal about the environment it was taken in? In this paper, we investigate how much of that information can be retrieved from a foreground object, combined with the background (i.e. the visible part of the environment). Assuming it is not perfectly diffuse, the foreground object acts as a complexly shaped and far-from-perfect mirror. An additional challenge is that its appearance confounds the light coming from the environment with the unknown materials it is made of. We propose a learning-based approach to predict the environment from multiple reflectance maps that are computed from approximate surface normals. The proposed method allows us to jointly model the statistics of environments and material properties. We train our system from synthesized training data, but demonstrate its applicability to real-world data. Interestingly, our analysis shows that the information obtained from objects made out of multiple materials often is complementary and leads to better performance. Comment: Accepted to ICCV. Project: http://homes.esat.kuleuven.be/~sgeorgou/multinatillum

    A 4D Light-Field Dataset and CNN Architectures for Material Recognition

    We introduce a new light-field dataset of materials, and take advantage of the recent success of deep learning to perform material recognition on the 4D light-field. Our dataset contains 12 material categories, each with 100 images taken with a Lytro Illum, from which we extract about 30,000 patches in total. To the best of our knowledge, this is the first mid-size dataset for light-field images. Our main goal is to investigate whether the additional information in a light-field (such as multiple sub-aperture views and view-dependent reflectance effects) can aid material recognition. Since recognition networks have not been trained on 4D images before, we propose and compare several novel CNN architectures to train on light-field images. In our experiments, the best performing CNN architecture achieves a 7% boost compared with 2D image classification (70% to 77%). These results constitute important baselines that can spur further research in the use of CNNs for light-field applications. Upon publication, our dataset also enables other novel applications of light-fields, including object detection, image segmentation and view interpolation. Comment: European Conference on Computer Vision (ECCV) 201
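    One straightforward way to let an ordinary 2D CNN consume a 4D light-field, in the spirit of the architectures the paper compares (the exact shapes below are illustrative, not the paper's), is to stack all sub-aperture views along the channel axis:

```python
import numpy as np

# Toy light-field patch: a 7x7 angular grid of 32x32 RGB sub-aperture views
views_u, views_v, h, w, c = 7, 7, 32, 32, 3
lightfield = np.zeros((views_u, views_v, h, w, c))

# Move the spatial axes to the front, then flatten the angular and color
# axes into channels, so a standard 2D CNN sees one multi-channel image
stacked = lightfield.transpose(2, 3, 0, 1, 4).reshape(h, w, views_u * views_v * c)
print(stacked.shape)  # → (32, 32, 147)
```

    The channel count grows with the angular resolution (here 7 × 7 × 3 = 147), which is the extra view-dependent information the paper investigates for material recognition.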

    Radiometric Scene Decomposition: Estimating Complex Reflectance and Natural Illumination from Images

    The phrase, "a picture is worth a thousand words," is often used to emphasize the wealth of information encoded into an image. While much of this information (e.g., the identities of people in an image, the type and number of objects in an image, etc.) is readily inferred by humans, fully understanding an image is still extremely difficult for computers. One important set of information encoded into images is radiometric scene properties---the properties of a scene related to light. Each pixel in an image indicates the amount of light received by the camera after being reflected, transmitted, or emitted by objects in a scene. It follows that we can learn about the objects in a scene, and about the scene itself, by reasoning about the interaction between light and geometry. The appearance of objects in an image is primarily due to three factors: the geometry of the scene, the reflectance of the surfaces, and the incident illumination. Recovering these hidden properties can give us a deep understanding of a scene; for example, the reflectance of a surface hints at its material properties. In this thesis, we address the question of how to recover complex, spatially-varying reflectance functions and natural illumination in real scenes from one or more images with known or approximately-known geometry. Recovering latent radiometric properties from images is difficult because of the severely underdetermined nature of the problem (i.e., there are many potential combinations of reflectance, light, and geometry that would produce identical input images) combined with its overwhelming dimensionality. In the real world, reflectance functions are complex, requiring many parameters to model accurately. An important aspect of solving this problem is to create a compact mathematical model that expresses a wide range of surface reflectance.
    We must also carefully model scene illumination, which typically exhibits complex behavior as well. Prior work has often simply assumed the light incident to a scene is made up of one or more infinitely-distant point lights. This assumption, however, rarely holds in practice: not only are scenes illuminated from every possible direction, they are also illuminated by objects interreflecting one another. To accurately infer reflectance and illumination of real-world scenes, we must account for the real-world behavior of both. In this work, we develop a mathematical framework for the inference of complex, spatially-varying reflectance and natural illumination in real-world scenes. We use a Bayesian approach, where the radiometric properties (i.e., reflectance and illumination) to be inferred are modeled as random variables. We can then apply statistical priors, modeling how reflectance and illumination occur in the real world, to combat the ambiguities created by the image formation process. We use our framework to infer the reflectance and illumination in a variety of scenes, ultimately applying it to unrestricted real-world scenes. We show that the framework is capable of recovering complex reflectance and natural illumination in the real world. Ph.D., Computer Science -- Drexel University, 201
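    The Bayesian approach described above amounts to maximum a posteriori estimation. With image $I$, known geometry $G$, and priors over reflectance $R$ and illumination $L$ (generic symbols for illustration, not the thesis's notation), the inference can be written as

```latex
\hat{R},\, \hat{L} \;=\; \operatorname*{arg\,max}_{R,\, L}\; p(I \mid R, L, G)\, p(R)\, p(L)
```

    where the likelihood $p(I \mid R, L, G)$ scores how well rendering with the candidate reflectance, illumination, and geometry reproduces the observed image, and the priors $p(R)$ and $p(L)$ penalize combinations that are physically implausible, which is how the statistical priors combat the ambiguity of the image formation process.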

    Natural Illumination from Multiple Materials Using Deep Learning

    Recovering natural illumination from a single Low-Dynamic-Range (LDR) image is a challenging task. To address it, we exploit two properties often found in everyday images. First, images rarely show a single material; rather, multiple materials all reflect the same illumination, although the appearance of each material is observed only for some surface orientations, not all. Second, parts of the illumination are often directly observed in the background, without being affected by reflection, though this directly observed part is typically even smaller. We propose a deep Convolutional Neural Network (CNN) that combines prior knowledge about the statistics of illumination and reflectance with an input that makes explicit use of these two observations. Our approach maps multiple partial LDR material observations, represented as reflectance maps, together with a background image to a spherical High-Dynamic-Range (HDR) illumination map. For training and testing we propose a new dataset comprising synthetic and real images with multiple materials observed under the same illumination. Qualitative and quantitative evidence shows that both using multiple materials and using the background are essential to improving illumination estimates.
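    The input encoding described above can be sketched as a single multi-channel tensor (all shapes and the mask channel here are our assumptions for illustration, not the paper's exact representation): several partial reflectance maps, a validity mask marking which surface orientations were actually observed, and the background crop are concatenated channel-wise for the CNN.

```python
import numpy as np

# Hypothetical encoding: N partial reflectance maps (RGB images over the
# normal hemisphere), per-map validity masks for the observed orientations,
# and a background image, concatenated into one CNN input
n_materials, size = 3, 64
refl_maps = np.zeros((n_materials, size, size, 3))
masks = np.zeros((n_materials, size, size, 1))
background = np.zeros((size, size, 3))

net_input = np.concatenate(
    [refl_maps.transpose(1, 2, 0, 3).reshape(size, size, -1),  # 3 maps x RGB = 9
     masks.transpose(1, 2, 0, 3).reshape(size, size, -1),      # 3 mask channels
     background],                                              # 3 RGB channels
    axis=-1)
print(net_input.shape)  # → (64, 64, 15)
```

    Making the partial observations and the background explicit in the input is what lets the network combine the two image properties the abstract identifies.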

    Towards Scalable Multi-View Reconstruction of Geometry and Materials

    In this paper, we propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes that exceed object scale and hence cannot be captured with stationary light stages. The inputs are high-resolution RGB-D images captured by a mobile, hand-held capture system with point lights for active illumination. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. To facilitate scalability to large numbers of observation views and optimization variables, we introduce a distributed optimization algorithm that reconstructs 2.5D keyframe-based representations of the scene. A novel multi-view consistency regularizer effectively synchronizes neighboring keyframes such that the local optimization results allow for seamless integration into a globally consistent 3D model. We provide a study on the importance of each component in our formulation and show that our method compares favorably to baselines. We further demonstrate that our method accurately reconstructs various objects and materials and allows for expansion to spatially larger scenes. We believe that this work represents a significant step towards making geometry and material estimation from hand-held scanners scalable.
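    The role of the multi-view consistency regularizer can be illustrated with a minimal sketch (the function and its quadratic form are our assumptions, not the paper's exact term): each keyframe optimizes its own 2.5D variables, while a penalty on disagreement with neighboring keyframes pulls the local solutions toward a globally consistent model.

```python
import numpy as np

def consistency_penalty(keyframe_params, neighbor_params, weight=1.0):
    # Hypothetical regularizer: penalize squared disagreement between a
    # keyframe's variables and those of its neighboring keyframes
    return weight * sum(np.sum((keyframe_params - n) ** 2)
                        for n in neighbor_params)

# Toy example: two neighbors, one agreeing exactly and one differing by 1.0
a = np.array([1.0, 2.0])
neighbors = [np.array([1.0, 2.0]), np.array([1.0, 3.0])]
print(consistency_penalty(a, neighbors))  # → 1.0
```

    Because each keyframe's term only touches its own variables and its neighbors', the overall objective decomposes and can be minimized in a distributed fashion, which is what makes the approach scale to many views.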
