Image-based relighting using room lighting basis
We present a novel and practical approach for image-based relighting that employs the lights available in a regular room to acquire the reflectance field of an object. The lighting basis includes diverse light sources such as the house lights and the natural illumination coming from the windows. Once the data is captured, we homogenize the reflectance field to account for the variety of light source colours and minimise tone differences across the reflectance field. Additionally, we measure the room dark level, corresponding to a small amount of global illumination with all lights switched off and blinds drawn. The dark level, due to some light leakage through the blinds, is removed from the individual local lighting basis conditions and employed as an additional global lighting basis. Finally, we optimize the projection of a desired lighting environment onto our room lighting basis to obtain a close approximation of the environment with our sparse lighting basis. We achieve plausible results for diffuse and glossy objects that are qualitatively similar to results produced with dense sampling of the reflectance field, such as with a light stage, and we demonstrate effective relighting results in two different room configurations. We believe our approach can be applied to practical relighting applications with general studio lighting.
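Projecting a target lighting environment onto a sparse lighting basis can be posed as a non-negative least-squares fit. The sketch below uses synthetic data and `scipy.optimize.nnls` to illustrate one plausible formulation of such a projection; the paper's actual objective and data layout may differ.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical setup: each column of B records how much of the target
# environment one room light covers (flattened environment-map pixels).
rng = np.random.default_rng(0)
n_pixels, n_lights = 256, 6
B = rng.random((n_pixels, n_lights))   # basis light coverage in environment space
target = rng.random(n_pixels)          # desired lighting environment (flattened)

# Solve min ||B w - target|| subject to w >= 0 (lights cannot subtract energy).
weights, residual = nnls(B, target)

# The relit image is the weighted sum of the reflectance-field images
# captured under each basis lighting condition.
images = rng.random((n_lights, 4, 4, 3))      # toy stand-in for captured basis images
relit = np.tensordot(weights, images, axes=1)
```

The non-negativity constraint matters: an unconstrained least-squares fit could ask for negative light contributions, which no physical light source can provide.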
Self-supervised Outdoor Scene Relighting
Outdoor scene relighting is a challenging problem that requires good
understanding of the scene geometry, illumination and albedo. Current
techniques are completely supervised, requiring high quality synthetic
renderings to train a solution. Such renderings are synthesized using priors
learned from limited data. In contrast, we propose a self-supervised approach
for relighting. Our approach is trained only on corpora of images collected
from the internet without any user-supervision. This virtually endless source
of training data allows training a general relighting solution. Our approach
first decomposes an image into its albedo, geometry and illumination. A novel
relighting is then produced by modifying the illumination parameters. Our
solution captures shadows using a dedicated shadow prediction map and does not
rely on accurate geometry estimation. We evaluate our technique subjectively
and objectively using a new dataset with ground-truth relighting. Results show
the ability of our technique to produce photo-realistic and physically
plausible results that generalize to unseen scenes.
Comment: Published in ECCV '20, http://gvv.mpi-inf.mpg.de/projects/SelfRelight
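The decompose-then-relight idea above can be sketched with a Lambertian image model, I = albedo * shading(normals, light), where relighting amounts to swapping the illumination term. The sketch below assumes a second-order spherical-harmonics lighting model, which is a common choice for this kind of decomposition; the paper's actual illumination model and networks are not reproduced here.

```python
import numpy as np

def sh_basis(n):
    """Second-order (9-term) spherical-harmonics basis, up to constant factors."""
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([np.ones_like(x), y, z, x,
                     x * y, y * z, 3 * z**2 - 1, x * z, x**2 - y**2], axis=-1)

def relight(albedo, normals, sh_coeffs):
    """Re-render with new SH lighting: per-pixel irradiance times albedo."""
    shading = sh_basis(normals) @ sh_coeffs          # (H, W) irradiance
    return albedo * np.clip(shading, 0, None)[..., None]

# Toy decomposition: a flat, camera-facing surface with constant albedo.
normals = np.zeros((4, 4, 3)); normals[..., 2] = 1.0
albedo = np.full((4, 4, 3), 0.5)
new_light = np.zeros(9); new_light[0] = 1.0           # ambient-only lighting
out = relight(albedo, normals, new_light)
```

With ambient-only coefficients the shading term is constant, so the output simply reproduces the albedo scaled by that constant; a directional SH component would modulate it by surface orientation instead.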
Relightable Neural Human Assets from Multi-view Gradient Illuminations
Human modeling and relighting are two fundamental problems in computer vision
and graphics, where high-quality datasets can largely facilitate related
research. However, most existing human datasets only provide multi-view human
images captured under the same illumination. Although valuable for modeling
tasks, they are not readily used in relighting problems. To promote research in
both fields, in this paper, we present UltraStage, a new 3D human dataset that
contains more than 2,000 high-quality human assets captured under both
multi-view and multi-illumination settings. Specifically, for each example, we
provide 32 surrounding views illuminated with one white light and two gradient
illuminations. In addition to regular multi-view images, gradient illuminations
help recover detailed surface normal and spatially-varying material maps,
enabling various relighting applications. Inspired by recent advances in neural
representation, we further interpret each example into a neural human asset
which allows novel view synthesis under arbitrary lighting conditions. We show
our neural human assets can achieve extremely high capture performance and are
capable of representing fine details such as facial wrinkles and cloth folds.
We also validate UltraStage in single image relighting tasks, training neural
networks with virtual relighted data from neural assets and demonstrating
realistic rendering improvements over prior art. UltraStage will be publicly
available to the community to stimulate significant future developments in
various human modeling and rendering tasks. The dataset is available at
https://miaoing.github.io/RNHA.
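The claim that gradient illuminations help recover detailed surface normals can be illustrated with the classic spherical-gradient relation: for a Lambertian surface, the ratio of a per-axis gradient-lit image to the fully lit image encodes that normal component, n_i ≈ 2 · G_i / F − 1. This is a sketch of the general technique, not the dataset's exact capture protocol or solver.

```python
import numpy as np

def normals_from_gradients(gx, gy, gz, full, eps=1e-6):
    """Recover per-pixel normals from per-axis gradient images and a full-on image."""
    n = np.stack([2 * gx / (full + eps) - 1,
                  2 * gy / (full + eps) - 1,
                  2 * gz / (full + eps) - 1], axis=-1)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)

# Synthetic check: fabricate gradient images for a known normal and recover it.
full = np.ones((2, 2))
true_n = np.array([0.6, 0.0, 0.8])
gx = full * (true_n[0] + 1) / 2
gy = full * (true_n[1] + 1) / 2
gz = full * (true_n[2] + 1) / 2
n = normals_from_gradients(gx, gy, gz, full)
```

Because the gradient conditions vary smoothly over the sphere of incident directions, this recovery works per pixel without any correspondence search, which is what makes it attractive for high-detail normal maps.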
OutCast: Outdoor Single-image Relighting with Cast Shadows
We propose a relighting method for outdoor images. Our method mainly focuses
on predicting cast shadows in arbitrary novel lighting directions from a single
image while also accounting for shading and global effects such as the sunlight
color and clouds. Previous solutions for this problem rely on reconstructing
occluder geometry, e.g. using multi-view stereo, which requires many images of
the scene. Instead, in this work we make use of a noisy, off-the-shelf
single-image depth estimate as our source of geometry. Whilst this can be a
good guide for some lighting effects, the resulting depth map quality is
insufficient for directly ray-tracing the shadows. Addressing this, we propose
a learned image space ray-marching layer that converts the approximate depth
map into a deep 3D representation that is fused into occlusion queries using a
learned traversal. Our proposed method achieves, for the first time,
state-of-the-art relighting results, with only a single image as input. For
supplementary material visit our project page at:
https://dgriffiths.uk/outcast.
Comment: Eurographics 2022 - Accepted
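The baseline that the learned ray-marching layer improves on can be sketched as a naive fixed-step march over a heightfield derived from the depth map: from each pixel, step toward the light and flag the pixel as shadowed if the heightfield ever rises above the ray. This hypothetical sketch shows only that classical baseline, not the paper's learned traversal over a deep 3D representation.

```python
import numpy as np

def hard_shadow(height, light_dir, step=1.0, n_steps=32):
    """Naive image-space shadow ray-march over a heightfield.

    light_dir is the (dx, dy, dz) direction *toward* the light in pixel units.
    """
    h, w = height.shape
    shadow = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            px, py, pz = float(x), float(y), height[y, x]
            for _ in range(n_steps):
                px += light_dir[0] * step
                py += light_dir[1] * step
                pz += light_dir[2] * step
                ix, iy = int(round(px)), int(round(py))
                if not (0 <= ix < w and 0 <= iy < h):
                    break                      # ray left the image: lit
                if height[iy, ix] > pz + 1e-6:
                    shadow[y, x] = True        # heightfield occludes the ray
                    break
    return shadow

# Toy scene: a tall wall at column x = 5, light low on the +x side.
height = np.zeros((8, 8))
height[:, 5] = 10.0
shadow = hard_shadow(height, light_dir=(1.0, 0.0, 0.5))
# Pixels to the left of the wall are occluded; pixels to its right are lit.
```

Directly applying this to a noisy monocular depth map produces unstable shadow edges, which motivates replacing the hand-crafted traversal with a learned one as the abstract describes.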
Computational Cameras: Approaches, Benefits and Limits
A computational camera uses a combination of optics and software to produce images that cannot be taken with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras have been demonstrated - some designed to achieve new imaging functionalities and others to reduce the complexity of traditional imaging. In this article, we describe how computational cameras have evolved and present a taxonomy for the technical approaches they use. We explore the benefits and limits of computational imaging, and describe how it is related to the adjacent and overlapping fields of digital imaging, computational photography and computational image sensors.