EMLight: Lighting Estimation via Spherical Distribution Approximation
Illumination estimation from a single image is critical in 3D rendering and has been investigated extensively in the computer vision and computer graphics research communities. However, existing works estimate illumination by either regressing light parameters or generating illumination maps, approaches that are often hard to optimize or tend to produce inaccurate predictions.
We propose Earth Mover Light (EMLight), an illumination estimation framework
that leverages a regression network and a neural projector for accurate
illumination estimation. We decompose the illumination map into spherical light
distribution, light intensity and the ambient term, and define the illumination
estimation as a parameter regression task for the three illumination
components. Motivated by the Earth Mover's distance, we design a novel spherical mover's loss that guides the network to regress light distribution parameters accurately by taking advantage of the subtleties of the spherical distribution. Under the
guidance of the predicted spherical distribution, light intensity and ambient
term, the neural projector synthesizes panoramic illumination maps with
realistic light frequency. Extensive experiments show that EMLight achieves
accurate illumination estimation and the generated relighting in 3D object
embedding exhibits superior plausibility and fidelity as compared with
state-of-the-art methods.
Comment: Accepted to AAAI 2021
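To make the spherical mover's loss concrete, below is a minimal PyTorch sketch of an earth-mover-style loss between two light distributions defined over fixed anchor points on the unit sphere. The Sinkhorn iteration is a standard differentiable approximation of the earth mover's distance and is an assumption here, not necessarily the solver EMLight uses; the anchor count and function names are likewise illustrative.

import torch

def spherical_mover_loss(pred, gt, anchors, eps=0.05, iters=100):
    # pred, gt: (N,) non-negative anchor weights, each summing to 1.
    # anchors: (N, 3) unit direction vectors on the sphere.
    cos = torch.clamp(anchors @ anchors.T, -1.0, 1.0)
    C = torch.acos(cos)                    # geodesic ground cost (N, N)
    K = torch.exp(-C / eps)                # Gibbs kernel for Sinkhorn
    u = torch.ones_like(pred)
    for _ in range(iters):                 # Sinkhorn fixed-point updates
        v = gt / (K.T @ u + 1e-8)
        u = pred / (K @ v + 1e-8)
    T = u[:, None] * K * v[None, :]        # approximate transport plan
    return (T * C).sum()                   # approximate earth mover's distance

# Toy usage with random anchors and distributions.
anchors = torch.nn.functional.normalize(torch.randn(128, 3), dim=1)
pred = torch.softmax(torch.randn(128, requires_grad=True), dim=0)
gt = torch.softmax(torch.randn(128), dim=0)
spherical_mover_loss(pred, gt, anchors).backward()

Because the ground cost is angular distance, mass moved between nearby directions is penalized less than mass moved across the sphere, which is what lets such a loss exploit the geometry of the spherical distribution.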
GMLight: Lighting Estimation via Geometric Distribution Approximation
Lighting estimation from a single image is an essential yet challenging task
in computer vision and computer graphics. Existing works estimate lighting by
regressing representative illumination parameters or generating illumination
maps directly. However, these methods often suffer from poor accuracy and
generalization. This paper presents Geometric Mover's Light (GMLight), a
lighting estimation framework that employs a regression network and a
generative projector for effective illumination estimation. We parameterize scene illumination in terms of the geometric light distribution, light intensity, ambient term, and auxiliary depth, and estimate them as a pure
regression task. Inspired by the earth mover's distance, we design a novel
geometric mover's loss to guide the accurate regression of light distribution
parameters. With the estimated lighting parameters, the generative projector
synthesizes panoramic illumination maps with realistic appearance and
frequency. Extensive experiments show that GMLight achieves accurate
illumination estimation and superior fidelity in relighting for 3D object
insertion.
Comment: 12 pages, 11 figures. arXiv admin note: text overlap with arXiv:2012.1111
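As a rough illustration of the pure-regression formulation described above, the following PyTorch sketch maps a backbone image feature to the four parameter groups named in the abstract. The layer shapes, anchor count, and activations are assumptions chosen for illustration, not GMLight's actual architecture.

import torch
import torch.nn as nn

class LightingHead(nn.Module):
    def __init__(self, feat_dim=512, n_anchors=128):
        super().__init__()
        self.dist = nn.Linear(feat_dim, n_anchors)   # light distribution weights
        self.intensity = nn.Linear(feat_dim, 3)      # RGB light intensity
        self.ambient = nn.Linear(feat_dim, 3)        # RGB ambient term
        self.depth = nn.Linear(feat_dim, n_anchors)  # auxiliary per-anchor depth

    def forward(self, f):
        return {
            "distribution": torch.softmax(self.dist(f), dim=-1),
            "intensity": torch.relu(self.intensity(f)),
            "ambient": torch.relu(self.ambient(f)),
            "depth": torch.relu(self.depth(f)),
        }

head = LightingHead()
params = head(torch.randn(1, 512))  # feature vector from an image encoder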
Free-viewpoint Indoor Neural Relighting from Multi-view Stereo
We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation. Our method enables illumination to
be changed synthetically, while coherently rendering cast shadows and complex
glossy materials. We start with multiple images of the scene and a 3D mesh
obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is
well-explained as the sum of a view-independent diffuse component and a
view-dependent glossy term concentrated around the mirror reflection direction.
We design a convolutional network around input feature maps that facilitate
learning of an implicit representation of scene materials and illumination,
enabling both relighting and free-viewpoint navigation. We generate these input
maps by exploiting the best elements of both image-based and physically-based
rendering. We sample the input views to estimate diffuse scene irradiance, and
compute the new illumination caused by user-specified light sources using path
tracing. To facilitate the network's understanding of materials and synthesize
plausible glossy reflections, we reproject the views and compute mirror images.
We train the network on a synthetic dataset where each scene is also
reconstructed with MVS. We show results of our algorithm relighting real indoor
scenes and performing free-viewpoint navigation with complex and realistic
glossy reflections, which have so far remained out of reach for view-synthesis techniques.
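The mirror-image computation mentioned above boils down to reflecting the viewing ray about the surface normal. Here is a small NumPy sketch of that geometry; function and variable names are illustrative, and this is only the core operation, not the paper's full reprojection pipeline.

import numpy as np

def mirror_direction(view_dir, normal):
    # Reflect the view direction about the surface normal.
    # view_dir: (3,) unit vector from surface point toward the camera.
    # normal:   (3,) unit surface normal.
    return 2.0 * np.dot(view_dir, normal) * normal - view_dir

n = np.array([0.0, 0.0, 1.0])                 # surface normal
v = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)  # toward camera, 45 degrees
r = mirror_direction(v, n)                    # -> [-0.707, 0, 0.707]

Sampling a reprojected view along this reflected direction is what gives the network a direct cue for where a glossy highlight should appear.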
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination
We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of the object illuminated by one unknown lighting condition. This enables the
rendering of novel views of the object under arbitrary environment lighting and
editing of the object's material properties. The key to our approach, which we
call Neural Radiance Factorization (NeRFactor), is to distill the volumetric
geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020]
representation of the object into a surface representation and then jointly
refine the geometry while solving for the spatially-varying reflectance and
environment lighting. Specifically, NeRFactor recovers 3D neural fields of
surface normals, light visibility, albedo, and Bidirectional Reflectance
Distribution Functions (BRDFs) without any supervision, using only a
re-rendering loss, simple smoothness priors, and a data-driven BRDF prior
learned from real-world BRDF measurements. By explicitly modeling light
visibility, NeRFactor is able to separate shadows from albedo and synthesize
realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor
is able to recover convincing 3D models for free-viewpoint relighting in this
challenging and underconstrained capture setup for both synthetic and real
scenes. Qualitative and quantitative experiments show that NeRFactor
outperforms classic and deep learning-based state of the art across various
tasks. Our videos, code, and data are available at
people.csail.mit.edu/xiuming/projects/nerfactor/.
Comment: Camera-ready version for SIGGRAPH Asia 2021. Project Page: https://people.csail.mit.edu/xiuming/projects/nerfactor
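The re-rendering loss at the heart of NeRFactor evaluates outgoing radiance as a visibility-weighted sum over environment-light directions. The NumPy sketch below illustrates that computation for a single surface point, with a Lambertian BRDF standing in for the paper's learned BRDF; all shapes and names are chosen for illustration.

import numpy as np

def render_point(albedo, normal, light_dirs, light_rgb, visibility, d_omega):
    # albedo: (3,), normal: (3,), light_dirs: (L, 3) unit vectors,
    # light_rgb: (L, 3), visibility: (L,) in [0, 1], d_omega: (L,) solid angles.
    cos = np.clip(light_dirs @ normal, 0.0, None)       # foreshortening (L,)
    brdf = albedo / np.pi                               # Lambertian stand-in
    weights = visibility * cos * d_omega                # per-direction weight
    return brdf * (weights[:, None] * light_rgb).sum(0)  # outgoing RGB (3,)

L = 64
dirs = np.random.randn(L, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
out = render_point(np.array([0.8, 0.6, 0.4]), np.array([0.0, 0.0, 1.0]),
                   dirs, np.ones((L, 3)), np.ones(L), np.full(L, 4 * np.pi / L))

Because visibility enters the sum explicitly, zeroing it for occluded directions darkens the point without touching albedo, which is what allows shadows to be separated from surface color.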
Estimating Reflectance Properties and Reilluminating Scenes Using Physically Based Rendering and Deep Neural Networks
Estimating material properties and modeling the appearance of an object under varying illumination conditions is a complex process. In this thesis, we address the problem with a novel framework that re-illuminates scenes by recovering their reflectance properties. Uniquely, following a divide-and-conquer approach, we recast the problem into its two constituent sub-problems.
In the first sub-problem, we develop a synthetic dataset of spheres with realistic materials. The dataset covers a wide range of material properties, rendered from varying viewpoints under a fixed directional light. Images from the dataset are further processed and used as reflectance maps during the training of the network.
In the second sub-problem, reflectance maps are created for scenes by reorganizing the outgoing radiances recorded in the multi-view images. The network trained on the synthetic dataset is used to infer the material properties of the reflectance maps acquired for the test scenes. These predictions are then reused to relight the scenes from novel viewpoints and under different lighting conditions using path tracing.
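As a loose illustration of this reorganization step, the NumPy sketch below bins multi-view radiance observations of one surface point into a direction-indexed reflectance map. The orthographic binning scheme and all names are assumptions made for illustration, not the thesis's exact procedure.

import numpy as np

def splat_reflectance_map(view_dirs, radiances, res=64):
    # view_dirs: (K, 3) unit vectors in a frame aligned with the surface
    # normal (z = normal). radiances: (K, 3) observed RGB per view.
    rmap = np.zeros((res, res, 3))
    count = np.zeros((res, res, 1))
    front = view_dirs[:, 2] > 0                  # keep upper-hemisphere views
    # Map the (x, y) components of each direction to pixel coordinates.
    px = ((view_dirs[front, 0] + 1) / 2 * (res - 1)).astype(int)
    py = ((view_dirs[front, 1] + 1) / 2 * (res - 1)).astype(int)
    np.add.at(rmap, (py, px), radiances[front])
    np.add.at(count, (py, px), 1.0)
    return rmap / np.maximum(count, 1.0)         # average radiance per bin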
A number of experiments are conducted and performance is reported using different metrics to justify our design decisions and the choice of our network. We also show that, using multi-view images, the camera properties, and the geometry of a scene, our technique can successfully predict the reflectance properties with our trained network within seconds. Finally, we present visual results of re-illumination on several scenes under different lighting conditions.