What Is Around The Camera?
How much does a single image reveal about the environment it was taken in? In
this paper, we investigate how much of that information can be retrieved from a
foreground object, combined with the background (i.e. the visible part of the
environment). Assuming it is not perfectly diffuse, the foreground object acts
as a complexly shaped and far-from-perfect mirror. An additional challenge is
that its appearance confounds the light coming from the environment with the
unknown materials it is made of. We propose a learning-based approach to
predict the environment from multiple reflectance maps that are computed from
approximate surface normals. The proposed method allows us to jointly model the
statistics of environments and material properties. We train our system from
synthesized training data, but demonstrate its applicability to real-world
data. Interestingly, our analysis shows that the information obtained from
objects made of multiple materials is often complementary and leads to
better performance.
Comment: Accepted to ICCV. Project:
http://homes.esat.kuleuven.be/~sgeorgou/multinatillum
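The reflectance-map input described above can be made concrete with a minimal, non-learned sketch: assuming approximately known per-pixel surface normals and an orthographic view, each pixel's observed colour is binned by its normal direction. The function name, grid resolution, and orthographic projection are illustrative assumptions, not details from the paper.

```python
import numpy as np

def reflectance_map(image, normals, res=64):
    """Build a simple reflectance map: the average observed colour per
    surface-normal direction. Only camera-facing normals (nz > 0)
    contribute. image: (H, W, 3) float RGB; normals: (H, W, 3) unit."""
    rmap = np.zeros((res, res, 3))
    count = np.zeros((res, res, 1))
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    mask = nz > 0
    # project (nx, ny) in [-1, 1] onto the map grid
    u = ((nx[mask] + 1) / 2 * (res - 1)).astype(int)
    v = ((ny[mask] + 1) / 2 * (res - 1)).astype(int)
    np.add.at(rmap, (v, u), image[mask])   # accumulate colours per cell
    np.add.at(count, (v, u), 1)            # and how many pixels landed there
    return rmap / np.maximum(count, 1)
```

With one such map per material, a learned model (as in the paper) can be trained to invert the combined maps back to the surrounding environment.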
Illumination Invariant Outdoor Perception
This thesis proposes a multi-modal sensor approach to achieving illumination invariance in images taken in outdoor environments. The approach is automatic, requiring neither user input for initialisation nor atmospheric radiative transfer models. While pixel colour and intensity are common features in high-level vision algorithms, their performance is severely limited by the uncontrolled lighting and complex geometric structure of outdoor scenes. The appearance of a material depends on the incident illumination, which varies with spatial and temporal factors, causing identical materials to appear different depending on their location. Illumination invariant representations of the scene can improve the performance of high-level vision algorithms by allowing pixels to be discriminated on the basis of the underlying material characteristics.

The proposed approach to obtaining illumination invariance fuses image and geometric data. An approximation of the outdoor illumination is used to derive per-pixel scaling factors, which has the effect of relighting the entire scene with a single illuminant of common colour and intensity for all pixels. The approach is extended to radiometric normalisation and to the multi-image scenario, so that the resulting dataset is both spatially and temporally illumination invariant.

The approach is evaluated on several datasets, showing that spatial and temporal invariance can be achieved without loss of spectral dimensionality. The system requires very few tuning parameters, so expert knowledge is not needed to operate it. This has implications for robotics and remote sensing applications, where perception systems play an integral role in developing a rich understanding of the scene.
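The per-pixel scaling idea can be sketched as follows, under the strong assumption that a per-pixel illumination estimate is already available (the thesis derives it from fused image and geometric data). All names here are hypothetical; this is a simplification, not the thesis pipeline.

```python
import numpy as np

def relight_to_common_illuminant(image, illum, reference=None):
    """Relight every pixel to a single common illuminant via per-pixel
    scaling factors.
    image:     (H, W, 3) linear RGB observations.
    illum:     (H, W, 3) per-pixel incident illumination estimate.
    reference: the common illuminant to relight to; defaults to the
               scene-mean illuminant."""
    if reference is None:
        reference = illum.reshape(-1, 3).mean(axis=0)
    scale = reference / np.clip(illum, 1e-6, None)  # per-pixel factors
    return image * scale
```

Under a simple diffuse model where observation = reflectance × illumination, identical materials then map to identical values regardless of how they were lit, which is the spatial invariance the abstract describes.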
OutCast: Outdoor Single-image Relighting with Cast Shadows
We propose a relighting method for outdoor images. Our method mainly focuses
on predicting cast shadows in arbitrary novel lighting directions from a single
image while also accounting for shading and global effects such as the sunlight
color and clouds. Previous solutions for this problem rely on reconstructing
occluder geometry, e.g. using multi-view stereo, which requires many images of
the scene. Instead, in this work we use the noisy output of an off-the-shelf
single-image depth estimator as our source of geometry. Whilst this can be a
good guide for some lighting effects, the resulting depth map quality is
insufficient for directly ray-tracing the shadows. Addressing this, we propose
a learned image space ray-marching layer that converts the approximate depth
map into a deep 3D representation that is fused into occlusion queries using a
learned traversal. Our proposed method achieves, for the first time,
state-of-the-art relighting results, with only a single image as input. For
supplementary material visit our project page at:
https://dgriffiths.uk/outcast
Comment: Accepted to Eurographics 2022
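To see why raw monocular depth is insufficient for direct shadow ray-tracing, a plain, non-learned heightfield ray-march illustrates the occlusion query that the paper's learned traversal replaces. Treating the depth map as a height above ground is an assumption of this sketch; any noise spike along the marched ray produces a spurious shadow, which is exactly the failure mode the learned layer addresses.

```python
import numpy as np

def cast_shadows(depth, light_dir, steps=64, bias=1e-3):
    """Naive image-space shadow test on a heightfield.
    For each pixel, march toward the 2D light direction and mark the
    pixel shadowed if any sample along the ray rises above the ray's
    height.
    depth:     (H, W) height above ground.
    light_dir: (dx, dy, dz) with dz > 0, pointing toward the light."""
    H, W = depth.shape
    dx, dy, dz = light_dir
    shadow = np.zeros((H, W), dtype=bool)
    ys, xs = np.mgrid[0:H, 0:W]
    for s in range(1, steps + 1):
        # sample position after s steps toward the light (clamped to image)
        sx = np.clip((xs + dx * s).astype(int), 0, W - 1)
        sy = np.clip((ys + dy * s).astype(int), 0, H - 1)
        ray_h = depth + dz * s  # height of the light ray after s steps
        shadow |= depth[sy, sx] > ray_h + bias
    return shadow
```

A tall column in the heightfield shadows the pixels on its far side from the light, while pixels on the near side and the column top remain lit.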