DIP: Differentiable Interreflection-aware Physics-based Inverse Rendering
We present a physics-based inverse rendering method that learns the
illumination, geometry, and materials of a scene from posed multi-view RGB
images. To model the illumination of a scene, existing inverse rendering works
either ignore indirect illumination entirely or model it with coarse
approximations, leading to sub-optimal illumination, geometry, and material
predictions for the scene. In this work, we propose a physics-based illumination
model that explicitly traces the incoming indirect lights at each surface point
based on interreflection, followed by estimating each identified indirect light
through an efficient neural network. Furthermore, we utilize the Leibniz
integral rule to resolve non-differentiability in the proposed illumination
model caused by one type of environment light -- the tangent lights. As a
result, the proposed interreflection-aware illumination model can be learned
end-to-end together with geometry and materials estimation. As a side product,
our physics-based inverse rendering model also facilitates flexible and
realistic material editing as well as relighting. Extensive experiments on both
synthetic and real-world datasets demonstrate that the proposed method performs
favorably against existing inverse rendering methods on novel view synthesis
and inverse rendering.
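The split between direct environment light and traced indirect light described above can be sketched as a Monte Carlo shading loop. This is an illustrative reconstruction, not the paper's implementation: `trace_hit`, `indirect_net`, the uniform hemisphere sampler, and the Lambertian BRDF are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hemisphere(n, rng):
    """Uniformly sample a direction in the hemisphere around normal n."""
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    return v if np.dot(v, n) > 0 else -v

def shade_point(x, n, albedo, env_light, trace_hit, indirect_net,
                n_samples=64, rng=rng):
    """Monte Carlo shading: rays that escape see the environment light
    (direct); rays that hit geometry query a learned radiance estimate
    at the hit point (indirect, via interreflection)."""
    L = np.zeros(3)
    for _ in range(n_samples):
        w = sample_hemisphere(n, rng)
        hit = trace_hit(x, w)                       # interreflection ray cast
        Li = env_light(w) if hit is None else indirect_net(hit, -w)
        L += (albedo / np.pi) * Li * max(np.dot(n, w), 0.0)  # Lambertian BRDF
    return L * (2.0 * np.pi / n_samples)            # uniform-hemisphere pdf

# Toy check: no occluders, constant white environment -> radiance ~= albedo.
x, n = np.zeros(3), np.array([0.0, 0.0, 1.0])
L = shade_point(x, n, albedo=np.array([0.8, 0.6, 0.4]),
                env_light=lambda w: np.ones(3),
                trace_hit=lambda p, w: None,
                indirect_net=lambda h, w: np.zeros(3))
```

Replacing recursive path tracing at the hit point with a single neural-network query (`indirect_net`) is what keeps the indirect-light estimate cheap enough to run inside an optimization loop.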
Polarimetric Multi-View Inverse Rendering
A polarization camera has great potential for 3D reconstruction since the
angle of polarization (AoP) and the degree of polarization (DoP) of reflected
light are related to an object's surface normal. In this paper, we propose a
novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering
(Polarimetric MVIR) that effectively exploits geometric, photometric, and
polarimetric cues extracted from input multi-view color-polarization images. We
first estimate camera poses and an initial 3D model by geometric reconstruction
with a standard structure-from-motion and multi-view stereo pipeline. We then
refine the initial model by optimizing photometric rendering errors and
polarimetric errors using multi-view RGB, AoP, and DoP images, where we propose
a novel polarimetric cost function that enables an effective constraint on the
estimated surface normal of each vertex, while considering four possible
ambiguous azimuth angles revealed from the AoP measurement. The weight for the
polarimetric cost is effectively determined based on the DoP measurement, which
is regarded as the reliability of polarimetric information. Experimental
results using both synthetic and real data demonstrate that our Polarimetric
MVIR can reconstruct a detailed 3D shape without assuming a specific surface
material or lighting condition.
Comment: Paper accepted in IEEE Transactions on Pattern Analysis and Machine Intelligence (2022). arXiv admin note: substantial text overlap with arXiv:2007.0883
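The DoP-weighted, four-candidate azimuth cost described above can be sketched as follows. The function name, the squared angular penalty, and the candidate set {AoP + k·π/2} are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def polarimetric_cost(normal, aop, dop):
    """Cost on one vertex normal against an AoP measurement (sketch).

    The AoP constrains the normal's azimuth only up to a pi ambiguity plus
    a 90-degree diffuse/specular shift, giving four candidate azimuths; the
    cost keeps the best (minimum) candidate and is weighted by the DoP,
    treated as the reliability of the polarimetric information.
    """
    azimuth = np.arctan2(normal[1], normal[0])
    candidates = aop + np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
    # wrap angular differences into [-pi, pi] before squaring
    diffs = np.arctan2(np.sin(azimuth - candidates),
                       np.cos(azimuth - candidates))
    return dop * np.min(diffs ** 2)

n = np.array([1.0, 0.0, 0.0])                  # normal azimuth = 0
print(polarimetric_cost(n, aop=0.0, dop=0.7))  # → 0.0: a candidate matches
```

Taking the minimum over the four candidates lets the optimizer avoid committing to one disambiguation up front, while the DoP weight suppresses the constraint where the polarization signal is weak.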
Polarimetric Multi-View Inverse Rendering
A polarization camera has great potential for 3D reconstruction since the
angle of polarization (AoP) of reflected light is related to an object's
surface normal. In this paper, we propose a novel 3D reconstruction method
called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that
effectively exploits geometric, photometric, and polarimetric cues extracted
from input multi-view color polarization images. We first estimate camera poses
and an initial 3D model by geometric reconstruction with a standard
structure-from-motion and multi-view stereo pipeline. We then refine the
initial model by optimizing photometric and polarimetric rendering errors using
multi-view RGB and AoP images, where we propose a novel polarimetric rendering
cost function that enables us to effectively constrain each estimated surface
vertex's normal while considering four possible ambiguous azimuth angles
revealed from the AoP measurement. Experimental results using both synthetic
and real data demonstrate that our Polarimetric MVIR can reconstruct a detailed
3D shape without assuming a specific polarized reflection depending on the
material.
Comment: Paper accepted in ECCV 202
MAIR: Multi-view Attention Inverse Rendering with 3D Spatially-Varying Lighting Estimation
We propose a scene-level inverse rendering framework that uses multi-view
images to decompose the scene into geometry, an SVBRDF, and 3D spatially-varying
lighting. Because multi-view images provide a variety of information about a
scene, their use in object-level inverse rendering has been taken for granted.
However, owing to the absence of a multi-view HDR synthetic dataset,
scene-level inverse rendering has mainly been studied using single-view images.
We successfully perform scene-level inverse rendering from multi-view images by
expanding the OpenRooms dataset, designing efficient pipelines to handle
multi-view inputs, and splitting the spatially-varying lighting. Our
experiments show that the proposed method not only outperforms
single-view-based methods but also remains robust on unseen real-world scenes.
Moreover, our sophisticated 3D spatially-varying lighting volume allows for
photorealistic object insertion at any 3D location.
Comment: Accepted by CVPR 2023; Project Page is https://bring728.github.io/mair.project
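Querying a 3D spatially-varying lighting volume at an arbitrary insertion point reduces, in the simplest case, to trilinear interpolation over a voxel grid. The grid layout and the meaning of the per-voxel features below are assumptions for illustration, not MAIR's actual representation:

```python
import numpy as np

def sample_lighting_volume(volume, point, bbox_min, bbox_max):
    """Trilinearly sample a (X, Y, Z, C) lighting-feature grid at a 3D point.

    `point` is clamped into the scene bounding box [bbox_min, bbox_max];
    the returned C-vector blends the 8 surrounding voxels.
    """
    res = np.array(volume.shape[:3])
    # continuous grid coordinates in [0, res - 1]
    g = (np.asarray(point, float) - bbox_min) / (bbox_max - bbox_min) * (res - 1)
    g = np.clip(g, 0, res - 1)
    i0 = np.floor(g).astype(int)
    i1 = np.minimum(i0 + 1, res - 1)
    t = g - i0
    out = np.zeros(volume.shape[3])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out += w * volume[i1[0] if dx else i0[0],
                                  i1[1] if dy else i0[1],
                                  i1[2] if dz else i0[2]]
    return out
```

Because the interpolation is continuous in `point`, an inserted object can be lit consistently anywhere in the scene, not just at the voxel centers.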
Inferring Fluid Dynamics via Inverse Rendering
Humans have a strong intuitive understanding of physical processes such as
falling fluids from just a glimpse of a picture of such a scene, an ability
quickly derived from our immersive visual experiences in memory. This work
achieves such a photo-to-fluid-dynamics reconstruction functionality learned
from unannotated videos, without any supervision of ground-truth fluid
dynamics. In a nutshell, a differentiable Euler simulator, modeled with a
ConvNet-based pressure projection solver, is integrated with a volumetric
renderer, supporting end-to-end, coherent differentiable dynamic simulation and
rendering. By endowing each sampled point with a fluid volume value, we derive
a NeRF-like differentiable renderer dedicated to fluid data; thanks to this
volume-augmented representation, fluid dynamics can be inversely inferred from
the error signal between the rendered result and the ground-truth video frames
(i.e., inverse rendering). Experiments on our generated Fluid Fall datasets and
the DPI Dam Break dataset demonstrate both the effectiveness and the
generalization ability of our method.
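The NeRF-like compositing that turns per-sample fluid volume values into pixels, and whose photometric error drives the inverse inference, can be sketched as standard emission-absorption volume rendering; the names and the density parameterization are illustrative, not the paper's exact renderer:

```python
import numpy as np

def render_ray(sigma, color, dists):
    """Composite per-sample densities and colors along one camera ray.

    sigma : (N,)  per-sample density (here, derived from the fluid volume)
    color : (N,3) per-sample RGB
    dists : (N,)  spacing between consecutive samples
    """
    alpha = 1.0 - np.exp(-sigma * dists)          # per-sample opacity
    # transmittance: how much light survives to reach each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans                       # compositing weights
    return (weights[:, None] * color).sum(axis=0)

# Empty space followed by a fully opaque fluid sample: the ray returns that
# sample's color. In an inverse-rendering loop, d(pixel)/d(sigma) is the
# signal backpropagated through the differentiable simulator.
sigma = np.array([0.0, 1e9])
color = np.array([[0.0, 0.0, 0.0], [0.2, 0.5, 0.9]])
rgb = render_ray(sigma, color, np.ones(2))
```

Since every step is differentiable, a photometric loss between `rgb` and a video frame can, in principle, be backpropagated through the renderer into the simulated fluid state.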