1,398 research outputs found
What Is Around The Camera?
How much does a single image reveal about the environment it was taken in? In
this paper, we investigate how much of that information can be retrieved from a
foreground object, combined with the background (i.e. the visible part of the
environment). Assuming it is not perfectly diffuse, the foreground object acts
as a complexly shaped and far-from-perfect mirror. An additional challenge is
that its appearance confounds the light coming from the environment with the
unknown materials it is made of. We propose a learning-based approach to
predict the environment from multiple reflectance maps that are computed from
approximate surface normals. The proposed method allows us to jointly model the
statistics of environments and material properties. We train our system from
synthesized training data, but demonstrate its applicability to real-world
data. Interestingly, our analysis shows that the information obtained from
objects made of multiple materials is often complementary and leads to
better performance.
Comment: Accepted to ICCV. Project: http://homes.esat.kuleuven.be/~sgeorgou/multinatillum
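The reflectance-map input described above can be illustrated concretely: under distant lighting and a single material, pixels sharing a surface normal observe roughly the same radiance, so observed colors can be binned over normal directions. A minimal NumPy sketch under those assumptions (the function name, grid resolution, and orthographic parameterization are illustrative choices, not the paper's implementation):

```python
import numpy as np

def reflectance_map(normals, rgb, res=32):
    """Accumulate per-pixel colors into a grid over normal directions.

    normals: (H, W, 3) unit surface normals (camera-facing, z > 0)
    rgb:     (H, W, 3) observed colors
    Returns a (res, res, 3) map indexed by the (x, y) components of
    the normal, i.e. an orthographic view of the Gauss sphere.
    """
    n = normals.reshape(-1, 3)
    c = rgb.reshape(-1, 3)
    # keep only camera-facing normals
    front = n[:, 2] > 0
    n, c = n[front], c[front]
    # map normal (x, y) in [-1, 1] to grid indices
    ij = np.clip(((n[:, :2] + 1) / 2 * res).astype(int), 0, res - 1)
    acc = np.zeros((res, res, 3))
    cnt = np.zeros((res, res, 1))
    np.add.at(acc, (ij[:, 1], ij[:, 0]), c)
    np.add.at(cnt, (ij[:, 1], ij[:, 0]), 1)
    return acc / np.maximum(cnt, 1)  # mean color per normal bin
```

An object made of several materials would yield one such map per material region, which is where the complementary information noted above comes from.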
NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering
Recent advances in neural implicit fields enable rapid reconstruction of 3D
geometry from multi-view images. Beyond that, recovering physical properties
such as material and illumination is essential for enabling more applications.
This paper presents a new method that effectively learns a relightable neural
surface using pre-integrated rendering, simultaneously recovering geometry,
material and illumination within the neural implicit field. The key insight of
our work is that these properties are closely related to each other, and
optimizing them in a collaborative manner would lead to consistent
improvements. Specifically, we propose NeuS-PIR, a method that factorizes the
radiance field into a spatially varying material field and a differentiable
environment cubemap, and jointly learns them together with the geometry,
represented as a neural surface. Our experiments demonstrate that the proposed
method outperforms state-of-the-art methods on both synthetic and real datasets.
Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes
We present a multi-view inverse rendering method for large-scale real-world
indoor scenes that reconstructs global illumination and physically-reasonable
SVBRDFs. Unlike previous representations, where the global illumination of
large scenes is simplified as multiple environment maps, we propose a compact
representation called Texture-based Lighting (TBL). It consists of 3D meshes and
HDR textures, and efficiently models direct and infinite-bounce indirect
lighting of the entire large scene. Based on TBL, we further propose a hybrid
lighting representation with precomputed irradiance, which significantly
improves the efficiency and alleviates the rendering noise in the material
optimization. To physically disentangle the ambiguity between materials, we
propose a three-stage material optimization strategy based on the priors of
semantic segmentation and room segmentation. Extensive experiments show that
the proposed method outperforms the state-of-the-arts quantitatively and
qualitatively, and enables physically-reasonable mixed-reality applications
such as material editing, editable novel view synthesis and relighting. The
project page is at https://lzleejean.github.io/TexIR.
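The precomputed-irradiance idea can be sketched independently of the TBL representation: for diffuse shading, the lighting integral depends only on the surface normal, so it can be tabulated ahead of the material optimization. A minimal sketch over a lat-long environment map (the parameterization and function name are assumptions for illustration, not the paper's code):

```python
import numpy as np

def irradiance(env, normal):
    """Cosine-weighted integral of a lat-long environment map.

    env:    (H, W, 3) HDR lat-long map
    normal: (3,) unit surface normal
    Returns (3,) irradiance divided by pi, i.e. the value a
    Lambertian albedo would be multiplied by at shading time.
    """
    H, W, _ = env.shape
    theta = (np.arange(H) + 0.5) / H * np.pi        # polar angle
    phi = (np.arange(W) + 0.5) / W * 2 * np.pi      # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)           # (H, W, 3)
    # solid angle of each texel: sin(theta) dtheta dphi
    dw = np.sin(t) * (np.pi / H) * (2 * np.pi / W)
    cos = np.clip(dirs @ normal, 0, None)           # clamped cosine
    return (env * (cos * dw)[..., None]).sum(axis=(0, 1)) / np.pi
```

For a constant environment of radiance L, the hemispherical cosine integral equals pi*L, so this function returns approximately L, which is a handy sanity check for any such precomputation.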
Neural-PBIR Reconstruction of Shape, Material, and Illumination
Reconstructing the shape and spatially varying surface appearances of a
physical-world object as well as its surrounding illumination based on 2D
images (e.g., photographs) of the object has been a long-standing problem in
computer vision and graphics. In this paper, we introduce a robust object
reconstruction pipeline combining neural based object reconstruction and
physics-based inverse rendering (PBIR). Specifically, our pipeline first
leverages a neural stage to produce high-quality but potentially imperfect
predictions of object shape, reflectance, and illumination. Then, in the later
stage, initialized by the neural predictions, we perform PBIR to refine the
initial results and obtain the final high-quality reconstruction. Experimental
results demonstrate our pipeline significantly outperforms existing
reconstruction methods in both quality and performance.
Physically-Based Editing of Indoor Scene Lighting from a Single Image
We present a method to edit complex indoor lighting from a single image with
its predicted depth and light source segmentation masks. This is an extremely
challenging problem that requires modeling complex light transport, and
disentangling HDR lighting from material and geometry with only a partial LDR
observation of the scene. We tackle this problem using two novel components: 1)
a holistic scene reconstruction method that estimates scene reflectance and
parametric 3D lighting, and 2) a neural rendering framework that re-renders the
scene from our predictions. We use physically-based indoor light
representations that allow for intuitive editing, and infer both visible and
invisible light sources. Our neural rendering framework combines
physically-based direct illumination and shadow rendering with deep networks to
approximate global illumination. It can capture challenging lighting effects,
such as soft shadows, directional lighting, specular materials, and
interreflections. Previous single image inverse rendering methods usually
entangle scene lighting and geometry and only support applications like object
insertion. Instead, by combining parametric 3D lighting estimation with neural
scene rendering, we demonstrate the first automatic method to achieve full
scene relighting, including light source insertion, removal, and replacement,
from a single image. All source code and data will be publicly released.
NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images
We present a neural rendering-based method called NeRO for reconstructing the
geometry and the BRDF of reflective objects from multiview images captured in
an unknown environment. Multiview reconstruction of reflective objects is
extremely challenging because specular reflections are view-dependent and thus
violate the multiview consistency, which is the cornerstone for most multiview
reconstruction methods. Recent neural rendering techniques can model the
interaction between environment lights and the object surfaces to fit the
view-dependent reflections, thus making it possible to reconstruct reflective
objects from multiview images. However, accurately modeling environment lights
in the neural rendering is intractable, especially when the geometry is
unknown. Most existing neural rendering methods, which can model environment
lights, only consider direct lights and rely on object masks to reconstruct
objects with weak specular reflections. Therefore, these methods fail to
reconstruct reflective objects, especially when the object mask is not
available and the object is illuminated by indirect lights. We propose a
two-step approach to tackle this problem. First, by applying the split-sum
approximation and the integrated directional encoding to approximate the
shading effects of both direct and indirect lights, we are able to accurately
reconstruct the geometry of reflective objects without any object masks. Then,
with the object geometry fixed, we use more accurate sampling to recover the
environment lights and the BRDF of the object. Extensive experiments
demonstrate that our method is capable of accurately reconstructing the
geometry and the BRDF of reflective objects from only posed RGB images without
knowing the environment lights and the object masks. Codes and datasets are
available at https://github.com/liuyuan-pal/NeRO.
Comment: Accepted to SIGGRAPH 2023. Project page: https://liuyuan-pal.github.io/NeRO/
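The split-sum approximation used in NeRO's first step can be illustrated numerically: the specular lighting integral is factored into the product of a prefiltered-environment term and a BRDF-only term, which is accurate when the specular lobe is narrow. The sketch below substitutes a toy Gaussian lobe and a toy environment for real GGX importance sampling and real lighting:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lobe(n, roughness, num):
    """Crude specular lobe: directions concentrated around n.
    (Stand-in for GGX importance sampling, illustration only.)"""
    d = n + roughness * rng.normal(size=(num, 3))
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def env_radiance(d):
    """Toy environment: bright 'sun' along +z plus dim ambient."""
    return 0.1 + 2.0 * np.clip(d[:, 2], 0, None) ** 8

n = np.array([0.0, 0.0, 1.0])
dirs = sample_lobe(n, roughness=0.3, num=100_000)
L = env_radiance(dirs)
weight = np.clip(dirs @ n, 0, None)   # toy BRDF-times-cosine term

full = np.mean(L * weight)            # reference: joint integral
split = np.mean(L) * np.mean(weight)  # split-sum: factored estimate
```

For a narrow lobe the two estimates agree closely; the gap grows with roughness, which is why split-sum shading is typically paired with a roughness-indexed prefiltered environment.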
Recent advances in transient imaging: A computer graphics and vision perspective
Transient imaging has recently made a huge impact in the computer graphics and computer vision fields. By capturing, reconstructing, or simulating light transport at extreme temporal resolutions, researchers have proposed novel techniques to show movies of light in motion, see around corners, detect objects in highly scattering media, or infer material properties from a distance, to name a few. The key idea is to leverage the wealth of information in the temporal domain at picosecond or nanosecond resolution, information usually lost during capture-time temporal integration. This paper presents recent advances in the field of transient imaging from a graphics and vision perspective, including capture techniques, analysis, applications, and simulation.
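The capture-time temporal integration mentioned above can be sketched in a few lines: returns from surfaces at different depths arrive at different times, and a transient sensor keeps the per-bin arrival histogram that a conventional camera would sum into a single value. A toy simulation (bin width and scene values are arbitrary illustrative choices):

```python
import numpy as np

C = 3e8        # speed of light, m/s
BIN = 4e-12    # 4 ps time bins

def transient_histogram(depths, intensities, num_bins=3000):
    """Bin round-trip arrival times of returns from given depths."""
    t = 2 * depths / C                 # round-trip time of flight
    idx = (t / BIN).astype(int)
    hist = np.zeros(num_bins)
    valid = idx < num_bins
    np.add.at(hist, idx[valid], intensities[valid])
    return hist

# two surfaces, 1.0 m and 1.3 m away
hist = transient_histogram(np.array([1.0, 1.3]), np.array([1.0, 0.5]))
steady = hist.sum()  # what a conventional sensor would record
```

The 0.3 m depth difference separates the two returns by 2 ns, i.e. hundreds of 4 ps bins, while `steady` collapses them into one indistinguishable value.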
SOL-NeRF: Sunlight Modeling for Outdoor Scene Decomposition and Relighting
Outdoor scenes often involve large-scale geometry and complex unknown lighting conditions, making it difficult to decompose them into geometry, reflectance and illumination. Recently, researchers have attempted to decompose outdoor scenes using Neural Radiance Fields (NeRF) and learning-based lighting and shadow representations. However, diverse lighting conditions and shadows in outdoor scenes are challenging for learning-based models. Moreover, existing methods may produce rough geometry and normal reconstruction and introduce notable shading artifacts when the scene is rendered under novel illumination. To solve these problems, we propose SOL-NeRF, which decomposes outdoor scenes with the help of a hybrid lighting representation and a signed distance field geometry reconstruction. We use a single Spherical Gaussian (SG) lobe to approximate the sun lighting, and a first-order Spherical Harmonic (SH) mixture to model the sky lighting. This hybrid representation is specifically designed for outdoor settings: it compactly models the outdoor lighting, ensuring robustness and efficiency. The shadow of direct sun lighting can be obtained by casting rays against the mesh extracted from the signed distance field, and the remaining shadow can be approximated by Ambient Occlusion (AO). Additionally, a sun lighting color prior and a relaxed Manhattan-world assumption can be further applied to boost decomposition and relighting performance. When changing the lighting condition, our method produces consistent relighting results with correct shadow effects. Experiments conducted on our hybrid lighting scheme and the entire decomposition pipeline show that our method achieves better reconstruction, decomposition, and relighting performance than previous methods, both quantitatively and qualitatively.
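The hybrid sun-plus-sky lighting model can be written down compactly: a single SG lobe for the sun and a first-order SH expansion (four coefficients per color channel) for the sky. The sketch below evaluates such a model with hypothetical, hand-picked parameters, not values fitted by SOL-NeRF:

```python
import numpy as np

def sg_eval(d, lobe_axis, sharpness, amplitude):
    """Single Spherical Gaussian lobe: a * exp(lambda * (d.axis - 1))."""
    return amplitude * np.exp(sharpness * (d @ lobe_axis - 1.0))

def sh1_eval(d, coeffs):
    """First-order SH: constant band plus three linear terms."""
    basis = np.array([0.282095,            # Y_0^0
                      0.488603 * d[1],     # Y_1^-1
                      0.488603 * d[2],     # Y_1^0
                      0.488603 * d[0]])    # Y_1^1
    return basis @ coeffs                  # coeffs: (4, 3) RGB

# hypothetical parameters, not fitted to any real scene
sun_dir = np.array([0.3, 0.2, 0.93]); sun_dir /= np.linalg.norm(sun_dir)
sky = np.array([[0.8, 0.9, 1.2],   # bluish ambient term
                [0.0, 0.0, 0.0],
                [0.2, 0.2, 0.3],   # brighter toward zenith
                [0.0, 0.0, 0.0]])

def radiance(d):
    """Hybrid lighting: sharp SG sun lobe + smooth SH sky."""
    return sg_eval(d, sun_dir, 200.0, np.array([50.0, 45.0, 40.0])) \
           + sh1_eval(d, sky)
```

Looking toward the sun, the SG lobe dominates; in any other direction the sharp exponential decays to near zero and only the smooth SH sky remains, which is what makes the two terms separable during decomposition.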
- …