Free-viewpoint Indoor Neural Relighting from Multi-view Stereo
We introduce a neural relighting algorithm for captured indoor scenes that
allows interactive free-viewpoint navigation. Our method allows illumination to
be changed synthetically, while coherently rendering cast shadows and complex
glossy materials. We start with multiple images of the scene and a 3D mesh
obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is
well-explained as the sum of a view-independent diffuse component and a
view-dependent glossy term concentrated around the mirror reflection direction.
We design a convolutional network around input feature maps that facilitate
learning of an implicit representation of scene materials and illumination,
enabling both relighting and free-viewpoint navigation. We generate these input
maps by exploiting the best elements of both image-based and physically-based
rendering. We sample the input views to estimate diffuse scene irradiance, and
compute the new illumination caused by user-specified light sources using path
tracing. To facilitate the network's understanding of materials and synthesize
plausible glossy reflections, we reproject the views and compute mirror images.
We train the network on a synthetic dataset where each scene is also
reconstructed with MVS. We show results of our algorithm relighting real indoor
scenes and performing free-viewpoint navigation with complex and realistic
glossy reflections, which have so far remained out of reach for view-synthesis
techniques.
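The lighting model assumed above, a view-independent diffuse term plus a glossy lobe concentrated around the mirror reflection direction, can be sketched in a few lines. This is an illustrative Phong-style lobe, not the paper's learned representation; all parameter names are assumptions:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mirror_direction(light_dir, normal):
    # Reflect the (unit) light direction about the surface normal.
    d = 2.0 * dot(light_dir, normal)
    return tuple(d * n - l for l, n in zip(light_dir, normal))

def shade(normal, light_dir, view_dir, diffuse_albedo, glossy_strength, shininess):
    # View-independent diffuse term: depends only on the light direction.
    diffuse = diffuse_albedo * max(dot(normal, light_dir), 0.0)
    # View-dependent glossy term: a lobe concentrated around the mirror direction.
    r = mirror_direction(light_dir, normal)
    glossy = glossy_strength * max(dot(r, view_dir), 0.0) ** shininess
    return diffuse + glossy
```

Viewing along the mirror direction maximizes the glossy term; viewing from elsewhere leaves only the diffuse component, which is why the two parts can be treated separately.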
Efficient Multi-View Inverse Rendering Using a Hybrid Differentiable Rendering Method
Recovering the shape and appearance of real-world objects from natural 2D
images is a long-standing and challenging inverse rendering problem. In this
paper, we introduce a novel hybrid differentiable rendering method to
efficiently reconstruct the 3D geometry and reflectance of a scene from
multi-view images captured by conventional hand-held cameras. Our method
follows an analysis-by-synthesis approach and consists of two phases. In the
initialization phase, we use traditional SfM and MVS methods to reconstruct a
virtual scene roughly matching the real scene. Then in the optimization phase,
we adopt a hybrid approach to refine the geometry and reflectance, where the
geometry is first optimized using an approximate differentiable rendering
method, and the reflectance is optimized afterward using a physically-based
differentiable rendering method. Our hybrid approach combines the efficiency of
approximate methods with the high-quality results of physically-based methods.
Extensive experiments on synthetic and real data demonstrate that our method
can produce reconstructions with similar or higher quality than
state-of-the-art methods while being more efficient.
Comment: IJCAI202
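The two-phase analysis-by-synthesis loop can be illustrated on a toy problem. The 1D stand-in renderers, the loss, and the learning rate below are invented for illustration and are not the paper's models; finite differences stand in for a differentiable renderer:

```python
def fd_grad(loss, x, eps=1e-5):
    # Central finite differences stand in for automatic differentiation.
    return (loss(x + eps) - loss(x - eps)) / (2.0 * eps)

def optimize(loss, x, lr=0.05, steps=200):
    # Plain gradient descent on a single scalar parameter.
    for _ in range(steps):
        x -= lr * fd_grad(loss, x)
    return x

def physically_based_render(g, r):
    # "Expensive but accurate" forward model (toy stand-in).
    return r * g * g

def approximate_render(g, r):
    # "Cheap but biased" forward model (toy stand-in).
    return r * g * g + 0.01

# Observation of the true scene (geometry 2.0, reflectance 0.5).
observed = physically_based_render(2.0, 0.5)

# Initialization phase: a rough guess, as SfM/MVS would provide.
g0, r0 = 1.5, 0.5

# Optimization phase 1: refine geometry with the approximate renderer.
g1 = optimize(lambda g: (approximate_render(g, r0) - observed) ** 2, g0)

# Optimization phase 2: refine reflectance with the physically-based renderer.
r1 = optimize(lambda r: (physically_based_render(g1, r) - observed) ** 2, r0)
```

The bias of the approximate renderer leaves the geometry slightly off (here g1 converges to sqrt(3.98) rather than 2.0), and the physically-based reflectance phase then compensates so the final re-rendering matches the observation.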
Parallel hierarchical global illumination
Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, we have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recently published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a rendering that converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments, from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
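The core idea, estimating the solution by directly simulating photon transport from the sources rather than approximating the Rendering Equation, can be sketched for a two-patch toy scene. This is an illustrative model only; the Photon system itself handles full 3D geometry and parallel distribution:

```python
import random

def photon_transport(n_photons, albedo, seed=1):
    # Two facing diffuse patches; every photon leaves the source carrying
    # unit energy and first hits patch 0. At each hit it is absorbed with
    # probability (1 - albedo) or reflected across to the other patch.
    rng = random.Random(seed)
    absorbed = [0, 0]
    for _ in range(n_photons):
        patch = 0
        while rng.random() < albedo[patch]:
            patch = 1 - patch          # scattered to the facing patch
        absorbed[patch] += 1           # absorbed: deposit the photon's energy
    # Fraction of emitted power absorbed by each patch; converges to the
    # analytic solution (1 - a0)/(1 - a0*a1) and a0*(1 - a1)/(1 - a0*a1).
    return [a / n_photons for a in absorbed]
```

With both albedos at 0.5 the estimate converges to (2/3, 1/3); accuracy improves with photon count, mirroring the "any desired accuracy" claim, and photons are mutually independent, which is what makes the simulation scale so well in parallel.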
Factorized Inverse Path Tracing for Efficient and Accurate Material-Lighting Estimation
Inverse path tracing has recently been applied to joint material and lighting
estimation, given geometry and multi-view HDR observations of an indoor scene.
However, it has two major limitations: path tracing is expensive to compute,
and ambiguities exist between reflection and emission. Our Factorized Inverse
Path Tracing (FIPT) addresses these challenges by using a factored light
transport formulation and finds emitters driven by rendering errors. Our
algorithm enables accurate material and lighting optimization faster than
previous work, and is more effective at resolving ambiguities. Exhaustive
experiments on synthetic scenes show that our method (1) outperforms
state-of-the-art indoor inverse rendering and relighting methods particularly
in the presence of complex illumination effects; (2) speeds up inverse path
tracing optimization to less than an hour. We further demonstrate robustness to
noisy inputs through material and lighting estimates that allow plausible
relighting in a real scene. The source code is available at:
https://github.com/lwwu2/fipt
Comment: Updated experiment results; modified real-world section
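The rendering-error-driven emitter search can be sketched as follows. This is a deliberately simplified stand-in, not FIPT's actual criterion (which operates on path-traced residuals); the threshold is an invented parameter:

```python
def detect_emitters(observed, rendered, threshold):
    # Compare observed HDR radiance against the radiance the current
    # (non-emissive) material estimate can explain; surfaces with a large
    # positive residual are reclassified as emitters, resolving the
    # reflection-vs-emission ambiguity.
    emitters = {}
    for surface_id, (obs, ren) in enumerate(zip(observed, rendered)):
        residual = obs - ren
        if residual > threshold:
            emitters[surface_id] = residual   # crude emission estimate
    return emitters
```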
Towards Predictive Rendering in Virtual Reality
The quest to generate predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generation of predictive imagery is still an unsolved problem due to many reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational efforts existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research. This thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling through efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies in real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials.
The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real-time. Further approaches proposed in this thesis target inclusion of real-time global illumination effects or more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the remaining problems to be solved to achieve truly predictive image generation.
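BTF compression of the kind discussed above typically factorizes the (view, light) x texel data matrix into a few rank-one terms. Below is a minimal sketch using power iteration with deflation; it is illustrative only, and the thesis's actual compression scheme may differ:

```python
def rank1_factor(M, iters=100):
    # Power iteration for the dominant singular triple of matrix M
    # (given as a list of rows), so that M is approximated by s * u v^T.
    rows, cols = len(M), len(M[0])
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(M[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        nv = sum(x * x for x in v) ** 0.5
        v = [x / nv for x in v]
    s = sum(M[i][j] * u[i] * v[j] for i in range(rows) for j in range(cols))
    return u, s, v

def compress_btf(M, k):
    # Rows index (view, light) pairs, columns index texels; keep k
    # rank-one terms by repeated factorization and deflation.
    terms, residual = [], [row[:] for row in M]
    for _ in range(k):
        u, s, v = rank1_factor(residual)
        terms.append((u, s, v))
        for i in range(len(residual)):
            for j in range(len(residual[0])):
                residual[i][j] -= s * u[i] * v[j]
    return terms

def evaluate(terms, i, j):
    # Decompress a single (view-light, texel) sample on the fly at render time.
    return sum(s * u[i] * v[j] for u, s, v in terms)
```

Storing k terms replaces rows x cols values with k * (rows + cols + 1), and evaluate() reconstructs individual samples without decompressing the whole dataset, which is what makes real-time BTF rendering feasible.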
A General Two-Pass Method Integrating Specular and Diffuse Reflection
We analyse some recent approaches to the global illumination problem by introducing the corresponding reflection operators, and we demonstrate the advantages of a two-pass method. A generalization of the system introduced by Wallace et al. at Siggraph '87 to integrate diffuse as well as specular effects is presented. It is based on the calculation of extended form-factors, which allows arbitrary geometries to be used in the scene description, as well as refraction effects. We also present a new sampling method for the calculation of form-factors, which is an alternative to the hemi-cube technique introduced by Cohen and Greenberg for radiosity calculations. This method is particularly well suited to the extended form-factors calculation. The problem of interactive display of the picture being created is also addressed by using hardware-assisted projections and image composition to recreate a complete specular view of the scene.
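Sampling-based form-factor estimation of the kind proposed as a hemi-cube alternative can be sketched with cosine-weighted direction sampling. This is an illustrative Monte Carlo estimator for the textbook differential-element-to-disk case, whose analytic form factor is r^2 / (r^2 + h^2); it is not the paper's exact method:

```python
import math
import random

def form_factor_to_disk(radius, height, n_samples=100000, seed=0):
    # Form factor from a differential element at the origin (normal +z)
    # to a parallel, coaxial disk floating `height` above it. With
    # cosine-weighted direction sampling, the form factor is simply the
    # fraction of sampled directions that hit the disk.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        u1, u2 = rng.random(), rng.random()
        phi = 2.0 * math.pi * u1
        sin_t = math.sqrt(u2)                      # cosine-weighted sample
        x, y = sin_t * math.cos(phi), sin_t * math.sin(phi)
        z = math.sqrt(1.0 - u2)
        t = height / z                             # ray-plane intersection
        if (x * t) ** 2 + (y * t) ** 2 <= radius ** 2:
            hits += 1
    return hits / n_samples
```

For radius = height = 1 the analytic value is 0.5, and the estimate converges to it as the sample count grows; unlike the hemi-cube, the same ray-casting loop works for arbitrary intermediate geometry.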
Neural Free-Viewpoint Relighting for Glossy Indirect Illumination
Precomputed Radiance Transfer (PRT) remains an attractive solution for
real-time rendering of complex light transport effects such as glossy global
illumination. After precomputation, we can relight the scene with new
environment maps while changing viewpoint in real-time. However, practical PRT
methods are usually limited to low-frequency spherical harmonic lighting.
All-frequency techniques using wavelets are promising but have so far had
little practical impact. The curse of dimensionality and much higher data
requirements have typically limited them to relighting with fixed view or only
direct lighting with triple product integrals. In this paper, we demonstrate a
hybrid neural-wavelet PRT solution to high-frequency indirect illumination,
including glossy reflection, for relighting with changing view. Specifically,
we seek to represent the light transport function in the Haar wavelet basis.
For global illumination, we learn the wavelet transport using a small
multi-layer perceptron (MLP) applied to a feature field as a function of
spatial location and wavelet index, with reflected direction and material
parameters being other MLP inputs. We optimize/learn the feature field
(compactly represented by a tensor decomposition) and MLP parameters from
multiple images of the scene under different lighting and viewing conditions.
We demonstrate real-time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed
rendering of challenging scenes involving view-dependent reflections and even
caustics.
Comment: 13 pages, 9 figures, to appear in CGF proceedings of EGSR 202
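The wavelet-space relighting at the heart of such PRT methods reduces to an inner product between lighting and transport coefficient vectors. A minimal 1D sketch follows; the paper's MLP-predicted transport is replaced here by an explicit precomputed vector:

```python
def haar(signal):
    # Orthonormal 1D Haar transform; length must be a power of two.
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        tmp = out[:n]
        for i in range(half):
            s, d = tmp[2 * i], tmp[2 * i + 1]
            out[i] = (s + d) / 2 ** 0.5          # coarse (scaling) coefficient
            out[half + i] = (s - d) / 2 ** 0.5   # detail (wavelet) coefficient
        n = half
    return out

def relight(transport_wavelet, light_wavelet, keep=None):
    # Outgoing radiance at a pixel is the inner product of its transport
    # vector with the lighting; an orthonormal basis preserves inner
    # products, so we can evaluate (and truncate) in wavelet space.
    pairs = list(zip(transport_wavelet, light_wavelet))
    if keep is not None:
        pairs.sort(key=lambda p: abs(p[1]), reverse=True)
        pairs = pairs[:keep]                     # strongest light coefficients
    return sum(t * l for t, l in pairs)
```

The full wavelet-space product reproduces the pixel-space dot product exactly (Parseval), while truncating to the largest-magnitude coefficients gives the sparse all-frequency approximation that makes real-time evaluation possible.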
Perturbation methods for interactive specular reflections
We describe an approach for interactively approximating specular reflections in arbitrary curved surfaces. The technique is applicable to any smooth implicitly defined reflecting surface that is equipped with a ray intersection procedure; it is also extremely efficient as it employs local perturbations to interpolate point samples analytically. After ray tracing a sparse set of reflection paths with respect to a given vantage point and static reflecting surfaces, the algorithm rapidly approximates reflections of arbitrary points in 3-space by expressing them as perturbations of nearby points with known reflections. The reflection of each new point is approximated to second-order accuracy by applying a closed-form perturbation formula to one or more nearby reflection paths. This formula is derived from the Taylor expansion of a reflection path and is based on first- and second-order path derivatives. After preprocessing, the approach is fast enough to compute reflections of tessellated diffuse objects in arbitrary curved surfaces at interactive rates using standard graphics hardware. The resulting images are nearly indistinguishable from ray traced images that take several orders of magnitude longer to generate.
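The perturbation idea, tracing a few exact reflection paths and extrapolating nearby ones with a second-order Taylor expansion, can be sketched in 2D. A convex parabolic mirror and a Fermat-principle path solve stand in for the ray tracer, and the path derivatives are estimated by finite differences rather than the paper's closed-form formula; all of this is illustrative:

```python
import math

def mirror(x):
    # A convex parabolic mirror y = -x^2 / 2, parameterized by x.
    return (x, -x * x / 2.0)

def path_length(x, eye, p):
    m = mirror(x)
    return math.dist(eye, m) + math.dist(m, p)

def reflection_param(eye, p, lo=-3.0, hi=3.0, iters=200):
    # The "expensive" step standing in for ray tracing: find the mirror
    # point minimizing the eye -> mirror -> p path length (Fermat's
    # principle) by ternary search over the mirror parameter.
    for _ in range(iters):
        a = lo + (hi - lo) / 3.0
        b = hi - (hi - lo) / 3.0
        if path_length(a, eye, p) < path_length(b, eye, p):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2.0

def perturbed_reflection(eye, p0, delta, h=0.05):
    # Approximate the reflection of p0 + delta from three traced paths
    # using a second-order Taylor expansion along the perturbation.
    f0 = reflection_param(eye, p0)
    fp = reflection_param(eye, (p0[0] + h * delta[0], p0[1] + h * delta[1]))
    fm = reflection_param(eye, (p0[0] - h * delta[0], p0[1] - h * delta[1]))
    d1 = (fp - fm) / (2.0 * h)            # first-order path derivative
    d2 = (fp - 2.0 * f0 + fm) / (h * h)   # second-order path derivative
    return f0 + d1 + 0.5 * d2             # Taylor step to the full delta
```

After the sparse paths near p0 are traced once, evaluating the Taylor polynomial for new points costs almost nothing, which is the source of the interactive rates.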
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination
We address the problem of recovering the shape and spatially-varying
reflectance of an object from multi-view images (and their camera poses) of an
object illuminated by one unknown lighting condition. This enables the
rendering of novel views of the object under arbitrary environment lighting and
editing of the object's material properties. The key to our approach, which we
call Neural Radiance Factorization (NeRFactor), is to distill the volumetric
geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020]
representation of the object into a surface representation and then jointly
refine the geometry while solving for the spatially-varying reflectance and
environment lighting. Specifically, NeRFactor recovers 3D neural fields of
surface normals, light visibility, albedo, and Bidirectional Reflectance
Distribution Functions (BRDFs) without any supervision, using only a
re-rendering loss, simple smoothness priors, and a data-driven BRDF prior
learned from real-world BRDF measurements. By explicitly modeling light
visibility, NeRFactor is able to separate shadows from albedo and synthesize
realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor
is able to recover convincing 3D models for free-viewpoint relighting in this
challenging and underconstrained capture setup for both synthetic and real
scenes. Qualitative and quantitative experiments show that NeRFactor
outperforms classic and deep learning-based state of the art across various
tasks. Our videos, code, and data are available at
people.csail.mit.edu/xiuming/projects/nerfactor/
Comment: Camera-ready version for SIGGRAPH Asia 2021. Project Page:
https://people.csail.mit.edu/xiuming/projects/nerfactor
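The re-rendering loss that drives such factorizations can be sketched with a Lambertian BRDF standing in for the learned one. The light-direction list, visibility values, and pixel-wise squared loss below are illustrative assumptions, not NeRFactor's exact formulation:

```python
import math

def render_pixel(albedo, normal, lights):
    # Re-rendering model: radiance is the sum over light directions of
    # incoming radiance * visibility * BRDF * cosine term. Here the BRDF
    # is Lambertian (albedo / pi); each light entry is a tuple of
    # (direction, radiance, visibility).
    radiance = 0.0
    for direction, light_radiance, visibility in lights:
        cos_theta = max(sum(n * d for n, d in zip(normal, direction)), 0.0)
        radiance += light_radiance * visibility * (albedo / math.pi) * cos_theta
    return radiance

def rerendering_loss(params, observations):
    # Squared error between re-rendered and observed pixels; minimizing it
    # jointly over albedo, normals, visibility, and lighting is what lets
    # the factorization separate shadows (visibility) from albedo.
    loss = 0.0
    for (albedo, normal, lights), observed in zip(params, observations):
        loss += (render_pixel(albedo, normal, lights) - observed) ** 2
    return loss
```

Because visibility multiplies the light term explicitly, a dark pixel in shadow can be explained by visibility = 0 rather than by artificially darkened albedo, which is the separation the abstract describes.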