EverLight: Indoor-Outdoor Editable HDR Lighting Estimation
Because of the diversity in lighting environments, existing illumination
estimation techniques have been designed explicitly for indoor or outdoor
environments. Methods have focused either on capturing accurate energy
(e.g., through parametric lighting models), which emphasizes shading and strong
cast shadows, or on producing plausible texture (e.g., with GANs), which
prioritizes plausible reflections. Approaches that provide editable lighting
capabilities have been proposed, but these tend to rely on simplified lighting
models, offering limited realism. In this work, we bridge the gap
between these recent trends in the literature and propose a method that
combines a parametric light model with 360° panoramas, ready to use as
HDRI in rendering engines. We leverage recent advances in GAN-based LDR
panorama extrapolation from a regular image, which we extend to HDR using
parametric spherical Gaussians. To achieve this, we introduce a novel lighting
co-modulation method that injects lighting-related features throughout the
generator, tightly coupling the original or edited scene illumination within
the panorama generation process. In our representation, users can easily edit
light direction, intensity, number, etc. to impact shading, while rich,
complex reflections blend seamlessly with the edits.
Furthermore, our method encompasses indoor and outdoor environments,
demonstrating state-of-the-art results even when compared to domain-specific
methods.
Comment: 11 pages, 7 figures
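The parametric side of the representation above is built on spherical Gaussians, each of which is an HDR lobe on the sphere. As a minimal sketch (the lobe axes, sharpness values, and amplitudes here are invented placeholders, not the paper's learned parameters), radiance in a direction v from a set of lobes is the sum of terms a * exp(λ(v·μ − 1)):

```python
import numpy as np

def sg_radiance(directions, lobe_axes, sharpness, amplitudes):
    """Evaluate a sum of spherical Gaussian lobes.

    Each lobe a * exp(lambda * (dot(v, mu) - 1)) peaks at amplitude a
    when v == mu and falls off with sharpness lambda.
    directions: (N, 3) unit vectors; lobe_axes: (K, 3) unit vectors;
    sharpness: (K,); amplitudes: (K, 3) RGB intensities.
    Returns (N, 3) HDR radiance samples.
    """
    cos = directions @ lobe_axes.T                # (N, K) cosines v . mu
    weights = np.exp(sharpness * (cos - 1.0))     # (N, K) lobe falloffs
    return weights @ amplitudes                   # (N, 3) summed radiance

# One bright lobe pointing up, standing in for a sun or ceiling light.
dirs = np.array([[0.0, 1.0, 0.0],   # toward the lobe axis
                 [0.0, -1.0, 0.0]]) # directly away from it
axes = np.array([[0.0, 1.0, 0.0]])
rad = sg_radiance(dirs, axes, np.array([30.0]), np.array([[50.0, 45.0, 40.0]]))
```

Editing light direction, intensity, or number in such a model amounts to changing `lobe_axes`, `amplitudes`, or the number of rows in each array.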
Cuboid-maps for indoor illumination modeling and augmented reality rendering
This thesis proposes a novel approach for indoor scene illumination modeling and augmented reality rendering. Our key observation is that an indoor scene is well represented by a set of rectangular spaces, where important illuminants reside on their boundary faces, such as a window on a wall or a ceiling light. Given a perspective image or a panorama and detected rectangular spaces as inputs, we estimate their cuboid shapes, and infer illumination components for each face of the cuboids by a simple convolutional neural architecture. The process turns an image into a set of cuboid environment maps, each of which is a simple extension of a traditional cube-map. For augmented reality rendering, we simply take a linear combination of inferred environment maps and an input image, producing surprisingly realistic illumination effects. This approach is simple and efficient, avoids flickering, and achieves quantitatively more accurate and qualitatively more realistic effects than competing, substantially more complicated systems.
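The final rendering step described above is a linear combination of the input image and the inferred environment maps. A minimal sketch of such a weighted blend follows; it assumes each cuboid map has already been projected into the input view and that the blend weights are given (estimating both is the thesis's contribution, not shown here):

```python
import numpy as np

def blend_environment_maps(input_image, env_maps, weights):
    """Linearly combine inferred environment maps with the input image.

    input_image: (H, W, 3) float array.
    env_maps: list of (H, W, 3) maps, assumed pre-projected into the view.
    weights: one scalar per map plus a leading one for the input image;
    assumed to sum to 1 for an energy-preserving blend.
    """
    out = weights[0] * input_image
    for w, env in zip(weights[1:], env_maps):
        out = out + w * env
    return out
```

Because the combination is a fixed per-frame linear operation rather than a per-frame network prediction, it is cheap and temporally stable, which is consistent with the flicker-avoidance claim.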
Neural Illumination: Lighting Prediction for Indoor Environments
This paper addresses the task of estimating the light arriving from all
directions to a 3D point observed at a selected pixel in an RGB image. This
task is challenging because it requires predicting a mapping from a partial
scene observation by a camera to a complete illumination map for a selected
position, which depends on the 3D location of the selection, the distribution
of unobserved light sources, the occlusions caused by scene geometry, etc.
Previous methods attempt to learn this complex mapping directly using a single
black-box neural network, which often fails to estimate high-frequency lighting
details for scenes with complicated 3D geometry. Instead, we propose "Neural
Illumination," a new approach that decomposes illumination prediction into
several simpler differentiable sub-tasks: 1) geometry estimation, 2) scene
completion, and 3) LDR-to-HDR estimation. The advantage of this approach is
that the sub-tasks are relatively easy to learn and can be trained with direct
supervision, while the whole pipeline is fully differentiable and can be
fine-tuned with end-to-end supervision. Experiments show that our approach
performs significantly better quantitatively and qualitatively than prior work
- …
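To make the third sub-task above concrete, LDR-to-HDR estimation maps clamped, tone-mapped pixel values back to linear radiance. The paper learns this mapping with a network; the fixed inverse-gamma curve below is only an illustrative stand-in for that stage's input/output contract (the gamma and exposure values are assumptions, not the paper's):

```python
import numpy as np

def ldr_to_hdr(ldr, gamma=2.2, exposure=1.0):
    """Invert a simple gamma tone curve to recover linear radiance.

    ldr: float array in [0, 1] (display-referred LDR pixels).
    Returns linear HDR radiance. A learned network replaces this in the
    full pipeline, and also hallucinates values clipped at 1.0, which a
    fixed analytic inverse cannot recover.
    """
    return exposure * np.clip(ldr, 0.0, 1.0) ** gamma
```

Keeping each stage this explicit is what lets the pipeline be supervised per sub-task and still fine-tuned end to end, since every stage remains differentiable.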