30 research outputs found

    Object-based Illumination Estimation with Rendering-aware Neural Networks

    Full text available
    We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas. Conventional inverse rendering is too computationally demanding for real-time applications, and the performance of purely learning-based techniques may be limited by the meager input data available from individual objects. To address these issues, we propose an approach that takes advantage of physical principles from inverse rendering to constrain the solution, while also utilizing neural networks to expedite the more computationally expensive portions of its processing, increase robustness to noisy input data, and improve temporal and spatial stability. This results in a rendering-aware system that estimates the local illumination distribution at an object with high accuracy and in real time. With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene, leading to improved realism. Comment: ECCV 2020
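    As a hedged illustration of the inverse-rendering constraint such a system builds on: under a simplified Lambertian assumption, per-pixel shading is linear in second-order spherical-harmonic (SH) lighting coefficients, so lighting can be recovered by least squares from surface normals (derivable from the depth channel) and observed intensities. The sketch below shows only this classic constraint, not the paper's rendering-aware networks; the function names and the Lambertian/SH model are illustrative assumptions.

```python
# Minimal sketch: estimate 2nd-order SH lighting from normals and shading
# via least squares (the physical constraint; an assumption, not the paper's
# actual pipeline, which accelerates inverse rendering with neural networks).
import numpy as np

def sh_basis(normals: np.ndarray) -> np.ndarray:
    """Evaluate the 9 real SH basis functions (bands 0-2) at unit normals.

    normals: (N, 3) array of unit vectors. Returns an (N, 9) design matrix.
    """
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),         # l=0
        0.488603 * y,                       # l=1, m=-1
        0.488603 * z,                       # l=1, m=0
        0.488603 * x,                       # l=1, m=1
        1.092548 * x * y,                   # l=2, m=-2
        1.092548 * y * z,                   # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),     # l=2, m=0
        1.092548 * x * z,                   # l=2, m=1
        0.546274 * (x * x - y * y),         # l=2, m=2
    ], axis=1)

def estimate_sh_lighting(normals: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """Least-squares fit of 9 SH lighting coefficients to observed shading."""
    B = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensity, rcond=None)
    return coeffs  # (9,) lighting estimate

# Synthetic check: recover lighting from shading rendered with known coeffs.
rng = np.random.default_rng(0)
n = rng.normal(size=(5000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
true = rng.normal(size=9)
obs = sh_basis(n) @ true + 0.01 * rng.normal(size=5000)  # noisy shading
est = estimate_sh_lighting(n, obs)
print(np.max(np.abs(est - true)))  # small residual
```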

    Compact and intuitive data-driven BRDF models

    No full text
    Measured materials are rapidly becoming a core component in the photo-realistic image synthesis pipeline. The reason is that data-driven models can easily capture the underlying fine details that represent the visual appearance of materials, which can be difficult or even impossible to model by hand. There are, however, a number of key challenges that need to be solved in order to enable efficient capture, representation, and interaction with real materials. This paper presents two new data-driven BRDF models specifically designed for 1D separability. The proposed 3D and 2D BRDF representations can be factored into three or two 1D factors, respectively, while accurately representing the underlying BRDF data with only small approximation error. We evaluate the models using different parameterizations with different characteristics, and show that both the BRDF data itself and the resulting renderings are more accurate, in terms of both numerical errors and visual results, than previous approaches. To demonstrate the benefit of the proposed factored models, we present a new Monte Carlo importance sampling scheme and give examples of how they can be used for efficient BRDF capture and intuitive editing of measured materials.
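    To make the separability idea concrete, here is a hedged sketch of a generic rank-1 factorization of a 2D-parameterized BRDF table, B(u, v) ≈ f(u)·g(v), via truncated SVD, plus inverse-CDF importance sampling of a 1D factor. The parameterization and the synthetic test lobe are assumptions for illustration; the paper's models use their own parameterizations and factorization.

```python
# Sketch: factor a tabulated 2D BRDF slice into 1D factors and importance-
# sample one factor via its CDF. Illustrative stand-in, not the paper's model.
import numpy as np

# Synthetic 2D BRDF slice on a (u, v) grid (assumed parameterization).
u = np.linspace(0.0, 1.0, 128)
v = np.linspace(0.0, 1.0, 128)
B = np.outer(np.cos(0.5 * np.pi * u) ** 40, 0.2 + 0.8 * v)  # separable test lobe

# Rank-1 factorization B ~= f(u) g(v)^T from the leading singular triple.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
f = U[:, 0] * np.sqrt(s[0])
g = Vt[0, :] * np.sqrt(s[0])
if f.sum() < 0:          # fix SVD sign ambiguity so factors are nonnegative
    f, g = -f, -g
rel_err = np.linalg.norm(B - np.outer(f, g)) / np.linalg.norm(B)
print(f"relative approximation error: {rel_err:.2e}")

# Importance sampling a 1D factor: tabulate its CDF, then invert it.
def sample_1d(factor: np.ndarray, grid: np.ndarray, xi: np.ndarray) -> np.ndarray:
    """Draw samples distributed proportionally to a nonnegative 1D factor."""
    pdf = np.maximum(factor, 0.0)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(xi, cdf, grid)  # inverse-CDF lookup

xi = np.random.default_rng(1).random(4)
print(sample_1d(f, u, xi))  # u-samples concentrated where f is large
```

    Because each factor is 1D, its CDF is a single cumulative sum, which is what makes sampling and editing the factored representation cheap.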

    HDR image reconstruction from a single exposure using deep CNNs

    No full text
    Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed taking into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution, visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high-quality results also for image-based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements compared to existing methods.
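    A hedged sketch of the saturation-simulation step described above, which turns HDR images into paired LDR training inputs: scale by a virtual exposure, clip to the sensor range, apply a camera response curve, and quantize to 8 bits. The simple gamma response and the parameter choices are illustrative assumptions; the paper simulates a range of camera response functions.

```python
# Sketch: simulate sensor saturation to create LDR training inputs from HDR
# ground truth (assumed gamma response; the paper models varied cameras).
import numpy as np

def simulate_ldr(hdr: np.ndarray, exposure: float = 1.0,
                 gamma: float = 1.0 / 2.2) -> np.ndarray:
    """Map linear HDR radiance to a clipped, quantized 8-bit-style LDR image."""
    x = hdr * exposure                  # virtual exposure setting
    x = np.clip(x, 0.0, 1.0)            # sensor saturation: highlights clip
    x = x ** gamma                      # simple camera response (gamma curve)
    return np.round(x * 255.0) / 255.0  # 8-bit quantization

# The saturated mask marks the pixels whose values the CNN must predict.
rng = np.random.default_rng(2)
hdr = rng.gamma(shape=1.0, scale=0.5, size=(64, 64, 3))  # synthetic radiance
ldr = simulate_ldr(hdr, exposure=2.0)
saturated = (ldr >= 1.0).any(axis=-1)
print(f"{saturated.mean():.1%} of pixels saturated")
```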

    Real-time noise-aware tone mapping

    No full text

    Blind video temporal consistency

    No full text