Terrain analysis using radar shape-from-shading
This paper develops a maximum a posteriori (MAP) probability estimation framework for shape-from-shading (SFS) from synthetic aperture radar (SAR) images. The aim is to use this method to reconstruct surface topography from a single radar image of relatively complex terrain. Our MAP framework makes explicit how the recovery of local surface orientation depends on the whereabouts of terrain edge features and the available radar reflectance information. To apply the resulting process to real-world radar data, we require probabilistic models for the appearance of terrain features and the relationship between the orientation of surface normals and the radar reflectance. We show that the SAR data can be modeled using a Rayleigh-Bessel distribution and use this distribution to develop a maximum likelihood algorithm for detecting and labeling terrain edge features. Moreover, we show how robust statistics can be used to estimate the characteristic parameters of this distribution. We also develop an empirical model for the SAR reflectance function. Using the reflectance model, we perform Lambertian correction so that a conventional SFS algorithm can be applied to the radar data. The initial surface normal direction is constrained to point in the direction of the nearest ridge or ravine feature. Each surface normal must fall within a conical envelope whose axis is in the direction of the radar illuminant. The extent of the envelope depends on the corrected radar reflectance and the variance of the radar signal statistics. We explore various ways of smoothing the field of surface normals using robust statistics. Finally, we show how to reconstruct the terrain surface from the smoothed field of surface normal vectors. The proposed algorithm is applied to various SAR data sets containing relatively complex terrain structure.
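The conical constraint described above can be sketched in NumPy: under the Lambertian model n·L = R, the corrected reflectance fixes the angle between a surface normal and the illuminant, so each normal is projected onto a cone about the illuminant direction. The function name and the fixed half-angle are illustrative assumptions (the abstract also widens the envelope using the radar signal variance, which is omitted here).

```python
import numpy as np

def project_to_cone(n, light, reflectance):
    """Project a candidate unit normal onto the cone of half-angle
    arccos(reflectance) about the illuminant direction (Lambertian model).
    A toy sketch of the conical-envelope constraint, not the paper's code."""
    light = light / np.linalg.norm(light)
    theta = np.arccos(np.clip(reflectance, -1.0, 1.0))  # cone half-angle
    n = n / np.linalg.norm(n)
    # Split n into components parallel and perpendicular to the illuminant.
    par = np.dot(n, light) * light
    perp = n - par
    perp_norm = np.linalg.norm(perp)
    if perp_norm < 1e-12:
        # n is aligned with the light: pick any perpendicular direction.
        perp = np.cross(light, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(perp) < 1e-6:
            perp = np.cross(light, np.array([0.0, 1.0, 0.0]))
        perp /= np.linalg.norm(perp)
    else:
        perp /= perp_norm
    # Keep the candidate's azimuth, but force the angle to the light to theta.
    return np.cos(theta) * light + np.sin(theta) * perp
```

The projected normal keeps the azimuth of the candidate (which the abstract initialises towards the nearest ridge or ravine) while satisfying the reflectance constraint exactly.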
Deep Reflectance Maps
Undoing the image formation process and therefore decomposing appearance into
its intrinsic properties is a challenging task due to the under-constrained
nature of this inverse problem. While significant progress has been made on
inferring shape, materials and illumination from images only, progress in an
unconstrained setting is still limited. We propose a convolutional neural
architecture to estimate reflectance maps of specular materials in natural
lighting conditions. We achieve this in an end-to-end learning formulation that
directly predicts a reflectance map from the image itself. We show how to
improve estimates by incorporating additional supervision in an indirect scheme
that first predicts surface orientation and afterwards predicts the reflectance
map by a learning-based sparse data interpolation.
In order to analyze performance on this difficult task, we propose a new
challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg)
using both synthetic and real images. Furthermore, we show the application of
our method to a range of image-based editing tasks on real images.
Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
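The indirect scheme above (predict surface orientations first, then interpolate sparse reflectance samples) can be illustrated with a toy scatter-and-average step: each pixel's predicted normal indexes a cell of an orientation grid, and the pixel's intensity is averaged into that cell. The parameterisation, array shapes, and function name are assumptions; the paper uses a learning-based sparse-data interpolation rather than this nearest-cell averaging.

```python
import numpy as np

def build_reflectance_map(normals, colors, res=32):
    """Scatter observed pixel intensities into an orientation-indexed grid.
    normals: (N, 3) unit normals; colors: (N,) intensities (illustrative shapes).
    Returns a res x res reflectance map over the front-facing hemisphere,
    parameterised by the normal's (x, y) components, plus a coverage mask."""
    rmap = np.zeros((res, res))
    count = np.zeros((res, res))
    # Map n_x, n_y in [-1, 1] to integer grid indices.
    ix = np.clip(((normals[:, 0] + 1) * 0.5 * res).astype(int), 0, res - 1)
    iy = np.clip(((normals[:, 1] + 1) * 0.5 * res).astype(int), 0, res - 1)
    np.add.at(rmap, (iy, ix), colors)   # unbuffered accumulation per cell
    np.add.at(count, (iy, ix), 1)
    filled = count > 0
    rmap[filled] /= count[filled]       # average multiple observations
    return rmap, filled
```

Cells never hit by a predicted normal stay empty; filling them in is exactly the sparse-interpolation problem the paper learns to solve.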
DeLight-Net: Decomposing Reflectance Maps into Specular Materials and Natural Illumination
In this paper we are extracting surface reflectance and natural environmental
illumination from a reflectance map, i.e. from a single 2D image of a sphere of
one material under one illumination. This is a notoriously difficult problem,
yet key to various re-rendering applications. With the recent advances in
estimating reflectance maps from 2D images their further decomposition has
become increasingly relevant.
To this end, we propose a Convolutional Neural Network (CNN) architecture to
reconstruct both material parameters (i.e. Phong) and illumination (i.e.
high-resolution spherical illumination maps), that is solely trained on
synthetic data. We demonstrate the decomposition of both synthetic and real
photographs of reflectance maps, in High Dynamic Range (HDR) and, for the
first time, in Low Dynamic Range (LDR). Results are compared to
previous approaches quantitatively as well as qualitatively in terms of
re-renderings where illumination, material, view or shape are changed.
Comment: Stamatios Georgoulis and Konstantinos Rematas contributed equally to this work.
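As a reference point for the material model named in the abstract, a minimal Phong evaluation for a single directional light might look as follows. The paper regresses Phong parameters with a CNN rather than evaluating a known model; the function signature and single-light setting here are purely illustrative.

```python
import numpy as np

def phong_reflectance(n, light, view, kd, ks, alpha):
    """Evaluate a basic Phong model for one unit normal, light, and view
    direction: kd * max(n.l, 0) + ks * max(r.v, 0)^alpha.
    A toy sketch; the paper recovers kd, ks, alpha from a reflectance map."""
    n = n / np.linalg.norm(n)
    diff = max(np.dot(n, light), 0.0)          # Lambertian (diffuse) term
    # Mirror reflection of the light direction about the normal.
    r = 2.0 * np.dot(n, light) * n - light
    spec = max(np.dot(r, view), 0.0) ** alpha  # specular lobe
    return kd * diff + ks * spec
```

Summing this kernel over the pixels of a high-resolution environment map gives the forward renderer against which a recovered (material, illumination) pair can be checked.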
Analysis and approximation of some Shape-from-Shading models for non-Lambertian surfaces
The reconstruction of a 3D object or a scene is a classical inverse problem
in Computer Vision. In the case of a single image this is called the
Shape-from-Shading (SfS) problem and it is known to be ill-posed even in a
simplified version like the vertical light source case. A huge number of works
deals with the orthographic SfS problem based on the Lambertian reflectance
model, the most common and simplest model which leads to an eikonal type
equation when the light source is on the vertical axis. In this paper we want
to study non-Lambertian models since they are more realistic and suitable
whenever one has to deal with different kinds of surfaces, rough or specular. We
will present a unified mathematical formulation of some popular orthographic
non-Lambertian models, considering vertical and oblique light directions as
well as different viewer positions. These models lead to more complex
stationary nonlinear partial differential equations of Hamilton-Jacobi type
which can be regarded as the generalization of the classical eikonal equation
corresponding to the Lambertian case. However, all the equations corresponding
to the models considered here (Oren-Nayar and Phong) have a similar structure,
so we can look for weak solutions for this class of equations in the viscosity solution
framework. Via this unified approach, we are able to develop a semi-Lagrangian
approximation scheme for the Oren-Nayar and the Phong model and to prove a
general convergence result. Numerical simulations on synthetic and real images
will illustrate the effectiveness of this approach and the main features of the
scheme, also comparing the results with those previously reported in the literature.
Comment: Accepted version, Journal of Mathematical Imaging and Vision, 57 pages.
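The semi-Lagrangian idea can be sketched for the simplest member of this family, the eikonal equation |∇u| = f, by iterating a shortest-path-style update u(x) = min over grid directions a of { u(x + a) + |a| f(x) }. This toy fixed-point sweep, with a point boundary condition at the grid centre and the eight grid directions as the discrete control set, is a heavily simplified stand-in for the paper's scheme (which handles the full Oren-Nayar and Phong Hamiltonians); all names and defaults are assumptions.

```python
import numpy as np

def eikonal_sl(f, h=1.0, iters=200):
    """Fixed-point sweep for |grad u| = f on an n x n grid, u = 0 at the centre.
    Uses the eight grid directions as a discrete control set (a chamfer-style
    semi-Lagrangian discretisation); a sketch, not the paper's scheme."""
    n = f.shape[0]
    u = np.full_like(f, 1e9, dtype=float)
    c = n // 2
    u[c, c] = 0.0  # point "boundary" condition at the grid centre
    # Each control: a grid step and its Euclidean length.
    steps = [(di, dj, h * np.hypot(di, dj))
             for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i == c and j == c:
                    continue
                best = u[i, j]
                for di, dj, d in steps:
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < n:
                        best = min(best, u[ii, jj] + d * f[i, j])
                u[i, j] = best  # Gauss-Seidel update: reuse fresh values
    return u
```

With f ≡ 1 the fixed point is the 8-connected chamfer distance from the centre, the discrete analogue of the distance-function solution of the eikonal equation.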
Joint Material and Illumination Estimation from Photo Sets in the Wild
Faithful manipulation of shape, material, and illumination in 2D Internet
images would greatly benefit from a reliable factorization of appearance into
material (i.e., diffuse and specular) and illumination (i.e., environment
maps). On the one hand, current methods that produce very high-fidelity
results typically require controlled settings, expensive devices, or
significant manual effort. On the other hand, methods that are automatic and
work on 'in the wild' Internet images often extract only low-frequency
lighting or diffuse materials. In this work, we propose to make use of a set of
photographs in order to jointly estimate the non-diffuse materials and sharp
lighting in an uncontrolled setting. Our key observation is that seeing
multiple instances of the same material under different illumination (i.e.,
environment), and different materials under the same illumination provides
valuable constraints that can be exploited to yield a high-quality solution
(i.e., specular materials and environment illumination) for all the observed
materials and environments. Similar constraints also arise when observing
multiple materials in a single environment, or a single material across
multiple environments. The core of this approach is an optimization procedure
that uses two neural networks that are trained on synthetic images to predict
good gradients in parametric space given observation of reflected light. We
evaluate our method on a range of synthetic and real examples to generate
high-quality estimates, qualitatively compare our results against
state-of-the-art alternatives via a user study, and demonstrate
photo-consistent image manipulation that is otherwise very challenging to
achieve.
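The multiple-observation constraint at the heart of this abstract can be made concrete with a toy rank-1 factorisation: if each observed intensity were simply a material albedo times an environment brightness, then alternating least squares over the shared factors recovers both up to a single global scale. This scalar model and its function name are illustrative assumptions; the paper optimises full specular materials and environment maps using gradients predicted by two synthetically trained networks.

```python
import numpy as np

def factor_observations(obs, iters=20):
    """Alternating least squares for a toy bilinear model
    obs[m, e] ~ mat[m] * env[e]: M materials observed in E environments.
    Shared factors across rows/columns resolve each other; a sketch only."""
    M, E = obs.shape
    mat = np.ones(M)
    env = np.ones(E)
    for _ in range(iters):
        # Fix materials, solve for environments (scalar least squares).
        env = obs.T @ mat / (mat @ mat)
        # Fix environments, solve for materials.
        mat = obs @ env / (env @ env)
    # Resolve the global scale ambiguity by fixing mat[0] = 1
    # (assumes the first material has nonzero albedo).
    s = mat[0]
    return mat / s, env * s
```

The same ambiguity appears in the full problem: scaling all materials up and all environment maps down leaves every rendered observation unchanged, so some normalisation must be fixed by convention.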