Photometric Depth Super-Resolution
This study explores the use of photometric techniques (shape-from-shading and
uncalibrated photometric stereo) for upsampling the low-resolution depth map
from an RGB-D sensor to the higher resolution of the companion RGB image. A
single-shot variational approach is first put forward, which is effective as
long as the target's reflectance is piecewise-constant. It is then shown that
this dependency upon a specific reflectance model can be relaxed by focusing on
a specific class of objects (e.g., faces) and delegating reflectance estimation
to a deep neural network. A multi-shot strategy based on randomly varying
lighting conditions is finally discussed. It requires no training or prior
on the reflectance, yet this comes at the price of a dedicated acquisition
setup. Both quantitative and qualitative evaluations illustrate the
effectiveness of the proposed methods on synthetic and real-world scenarios.
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence
(T-PAMI), 2019. First three authors contributed equally.
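The photometric principle behind such depth super-resolution can be sketched compactly: a candidate depth map is differentiated into normals, re-rendered under a Lambertian model, and compared against the observed image. The minimal numpy sketch below illustrates only that data term; the directional light and per-pixel albedo are illustrative placeholders, not the paper's calibrated quantities, and the actual method adds regularization and handles the uncalibrated case.

```python
import numpy as np

def normals_from_depth(z):
    """Per-pixel unit normals from a depth map via finite differences."""
    zx = np.gradient(z, axis=1)  # dz/dx
    zy = np.gradient(z, axis=0)  # dz/dy
    n = np.stack([-zx, -zy, np.ones_like(z)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def photometric_residual(z, image, albedo, light):
    """Lambertian re-rendering error: sum over pixels of
    (albedo * max(0, n . light) - image)^2. This is the kind of data
    term a variational shape-from-shading energy penalizes; `albedo`
    and `light` are hypothetical inputs for illustration."""
    shading = np.clip(normals_from_depth(z) @ light, 0.0, None)
    return np.sum((albedo * shading - image) ** 2)
```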
Self-Supervised Intrinsic Image Decomposition
Intrinsic decomposition from a single image is a highly challenging task, due
to its inherent ambiguity and the scarcity of training data. In contrast to
traditional fully supervised learning approaches, in this paper we propose
learning intrinsic image decomposition by explaining the input image. Our
model, the Rendered Intrinsics Network (RIN), joins together an image
decomposition pipeline, which predicts reflectance, shape, and lighting
conditions given a single image, with a recombination function, a learned
shading model used to recompose the original input based on the intrinsic image
predictions. Our network can then use unsupervised reconstruction error as an
additional signal to improve its intermediate representations. This allows
large-scale unlabeled data to be useful during training, and also enables
transferring learned knowledge to images of unseen object categories, lighting
conditions, and shapes. Extensive experiments demonstrate that our method
performs well on both intrinsic image decomposition and knowledge transfer.
Comment: NIPS 2017 camera-ready version, project page:
http://rin.csail.mit.edu
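The self-supervised signal RIN exploits is easy to state in code. Assuming the common multiplicative intrinsic-image model (image = reflectance x shading; RIN's actual shading model is learned rather than fixed), the reconstruction loss is simply:

```python
import torch.nn.functional as F

def reconstruction_loss(image, reflectance, shading):
    """Unsupervised reconstruction error: the predicted intrinsics must
    re-render the input image. No ground-truth decomposition is needed,
    which is what lets unlabeled data contribute to training."""
    return F.mse_loss(reflectance * shading, image)
```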
SfSNet: Learning Shape, Reflectance and Illuminance of Faces in the Wild
We present SfSNet, an end-to-end learning framework for producing an accurate
decomposition of an unconstrained human face image into shape, reflectance and
illuminance. SfSNet is designed to reflect a physical Lambertian rendering
model. SfSNet learns from a mixture of labeled synthetic and unlabeled real
world images. This allows the network to capture low-frequency variations from
synthetic images and high-frequency details from real images through the photometric
reconstruction loss. SfSNet consists of a new decomposition architecture with
residual blocks that learns a complete separation of albedo and normal. This is
used along with the original image to predict lighting. SfSNet produces
significantly better quantitative and qualitative results than state-of-the-art
methods for inverse rendering and independent normal and illumination
estimation.
Comment: Accepted to CVPR 2018 (Spotlight).
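The physical rendering model the abstract refers to is the standard low-frequency Lambertian one: shading is a linear function of a nine-dimensional spherical-harmonics (SH) lighting vector evaluated at each surface normal. A sketch follows, with unnormalized basis functions; SfSNet's exact rendering layer may differ.

```python
import numpy as np

def sh_basis(n):
    """First 9 real spherical-harmonic basis functions of unit normals
    n (..., 3). Normalization constants are omitted for brevity."""
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([np.ones_like(x), x, y, z,
                     x * y, x * z, y * z,
                     x ** 2 - y ** 2, 3 * z ** 2 - 1], axis=-1)

def render_lambertian(albedo, normals, sh_coeffs):
    """Image = albedo * (SH basis of normal . 9 lighting coefficients),
    the kind of physical Lambertian layer the network is built around."""
    return albedo * (sh_basis(normals) @ sh_coeffs)
```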
Terrain analysis using radar shape-from-shading
This paper develops a maximum a posteriori (MAP) probability estimation framework for shape-from-shading (SFS) from synthetic aperture radar (SAR) images. The aim is to use this method to reconstruct surface topography from a single radar image of relatively complex terrain. Our MAP framework makes explicit how the recovery of local surface orientation depends on the whereabouts of terrain edge features and the available radar reflectance information. To apply the resulting process to real-world radar data, we require probabilistic models for the appearance of terrain features and the relationship between the orientation of surface normals and the radar reflectance. We show that the SAR data can be modeled using a Rayleigh-Bessel distribution and use this distribution to develop a maximum likelihood algorithm for detecting and labeling terrain edge features. Moreover, we show how robust statistics can be used to estimate the characteristic parameters of this distribution. We also develop an empirical model for the SAR reflectance function. Using the reflectance model, we perform Lambertian correction so that a conventional SFS algorithm can be applied to the radar data. The initial surface normal direction is constrained to point in the direction of the nearest ridge or ravine feature. Each surface normal must fall within a conical envelope whose axis is in the direction of the radar illuminant. The extent of the envelope depends on the corrected radar reflectance and the variance of the radar signal statistics. We explore various ways of smoothing the field of surface normals using robust statistics. Finally, we show how to reconstruct the terrain surface from the smoothed field of surface normal vectors. The proposed algorithm is applied to various SAR data sets containing relatively complex terrain structure.
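The conical-envelope constraint lends itself to a compact geometric sketch: a normal outside the cone is rotated back onto its boundary. In the paper the half-angle is derived from the corrected radar reflectance and the signal variance; here it is a free parameter.

```python
import numpy as np

def project_to_cone(n, axis, half_angle):
    """Clamp a unit normal into a cone of the given half-angle about the
    illuminant axis. `half_angle` stands in for the value the paper
    derives from reflectance and signal statistics."""
    axis = axis / np.linalg.norm(axis)
    n = n / np.linalg.norm(n)
    theta = np.arccos(np.clip(n @ axis, -1.0, 1.0))
    if theta <= half_angle:
        return n
    # rotate n toward the axis until it lies on the cone boundary
    perp = n - (n @ axis) * axis
    perp /= np.linalg.norm(perp)
    return np.cos(half_angle) * axis + np.sin(half_angle) * perp
```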
Learning to Reconstruct Texture-less Deformable Surfaces from a Single View
Recent years have seen the development of mature solutions for reconstructing
deformable surfaces from a single image, provided that they are relatively
well-textured. By contrast, recovering the 3D shape of texture-less surfaces
remains an open problem, and essentially relates to Shape-from-Shading. In this
paper, we introduce a data-driven approach to this problem. We introduce a
general framework that can predict diverse 3D representations, such as meshes,
normals, and depth maps. Our experiments show that meshes are ill-suited to
handle texture-less 3D reconstruction in our context. Furthermore, we
demonstrate that our approach generalizes well to unseen objects, and that it
yields higher-quality reconstructions than a state-of-the-art SfS technique,
particularly in terms of normal estimates. Our reconstructions accurately model
the fine details of the surfaces, such as the creases of a T-shirt worn by a
person.
Comment: Accepted to 3DV 2018.
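The abstract does not name the metric behind "higher-quality ... normal estimates"; mean angular error is the usual choice for comparing normal maps, and takes only a few lines to compute:

```python
import numpy as np

def mean_angular_error(pred, gt):
    """Mean per-pixel angle (degrees) between predicted and ground-truth
    unit normals; a standard score for normal-map quality."""
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())
```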
3D Face Reconstruction by Learning from Synthetic Data
Fast and robust three-dimensional reconstruction of facial geometric
structure from a single image is a challenging task with numerous applications.
Here, we introduce a learning-based approach for reconstructing a
three-dimensional face from a single image. Recent face recovery methods rely
on accurate localization of key characteristic points. In contrast, the
proposed approach is based on a Convolutional-Neural-Network (CNN) which
extracts the face geometry directly from its image. Although such deep
architectures outperform other models in complex computer vision problems,
training them properly requires a large dataset of annotated examples. In the
case of three-dimensional faces, no large-scale datasets currently exist, and
acquiring such data is a tedious task. As an alternative, we
propose to generate random, yet nearly photo-realistic, facial images for which
the geometric form is known. The suggested model successfully recovers facial
shapes from real images, even for faces with extreme expressions and under
various lighting conditions.
Comment: The first two authors contributed equally to this work.
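Since every synthetic face is rendered from known geometry, supervision comes for free. Below is a toy PyTorch stand-in for the training step; the paper's architecture and geometry encoding are not specified in the abstract, and depth maps are only one plausible choice.

```python
import torch
from torch import nn

# deliberately tiny stand-in for the paper's CNN: RGB image in,
# per-pixel depth (one plausible geometry encoding) out
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def train_step(synthetic_image, synthetic_depth):
    """One supervised step on a rendered face whose geometry is known
    by construction; this is what makes synthetic data useful."""
    opt.zero_grad()
    loss = nn.functional.l1_loss(net(synthetic_image), synthetic_depth)
    loss.backward()
    opt.step()
    return loss.item()
```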
Deep Reflectance Maps
Undoing the image formation process and therefore decomposing appearance into
its intrinsic properties is a challenging task due to the under-constrained
nature of this inverse problem. While significant progress has been made on
inferring shape, materials and illumination from images only, progress in an
unconstrained setting is still limited. We propose a convolutional neural
architecture to estimate reflectance maps of specular materials in natural
lighting conditions. We achieve this in an end-to-end learning formulation that
directly predicts a reflectance map from the image itself. We show how to
improve estimates by introducing additional supervision in an indirect scheme
that first predicts surface orientation and afterwards predicts the reflectance
map by a learning-based sparse data interpolation.
In order to analyze performance on this difficult task, we propose a new
challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg)
using both synthetic and real images. Furthermore, we show the application of
our method to a range of image-based editing tasks on real images.
Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
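A reflectance map stores appearance purely as a function of surface orientation: the image of a sphere of the material under the scene's illumination. Reading per-pixel appearance back out of one is a simple lookup (front-facing normals only, orthographic camera assumed):

```python
import numpy as np

def lookup_reflectance_map(rmap, normals):
    """Index a reflectance map (an H x W image of a front-facing sphere)
    by the (x, y) components of unit normals; only normals with z > 0
    map to visible sphere pixels."""
    h, w = rmap.shape[:2]
    u = ((normals[..., 0] + 1) / 2 * (w - 1)).astype(int)
    v = ((1 - (normals[..., 1] + 1) / 2) * (h - 1)).astype(int)
    return rmap[v, u]
```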
DeLight-Net: Decomposing Reflectance Maps into Specular Materials and Natural Illumination
In this paper we extract surface reflectance and natural environmental
illumination from a reflectance map, i.e. from a single 2D image of a sphere of
one material under one illumination. This is a notoriously difficult problem,
yet key to various re-rendering applications. With the recent advances in
estimating reflectance maps from 2D images, their further decomposition has
become increasingly relevant.
To this end, we propose a Convolutional Neural Network (CNN) architecture to
reconstruct both material parameters (i.e. Phong) as well as illumination (i.e.
high-resolution spherical illumination maps), that is solely trained on
synthetic data. We demonstrate the decomposition of both synthetic and real
photographs of reflectance maps, in High Dynamic Range (HDR) and, for the
first time, in Low Dynamic Range (LDR). Results are compared to
previous approaches quantitatively as well as qualitatively in terms of
re-renderings where illumination, material, view or shape are changed.
Comment: Stamatios Georgoulis and Konstantinos Rematas contributed equally to
this work.
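The forward model being inverted here is Phong shading under environmental illumination. Discretizing the environment map into a set of directional lights (an assumption made for this sketch), the radiance seen for one surface orientation is:

```python
import numpy as np

def phong_radiance(n, v, lights, intensities, kd, ks, alpha):
    """Phong shading of one unit orientation n seen from view direction v
    under a set of directional lights: diffuse kd*(n.l) plus specular
    ks*(r.v)^alpha, with r the mirror reflection of l about n."""
    out = 0.0
    for l, e in zip(lights, intensities):
        diff = max(n @ l, 0.0)
        r = 2.0 * (n @ l) * n - l  # mirror reflection of l about n
        out += e * (kd * diff + ks * max(r @ v, 0.0) ** alpha)
    return out
```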
Joint Material and Illumination Estimation from Photo Sets in the Wild
Faithful manipulation of shape, material, and illumination in 2D Internet
images would greatly benefit from a reliable factorization of appearance into
material (i.e., diffuse and specular) and illumination (i.e., environment
maps). On the one hand, current methods that produce very high fidelity
results typically require controlled settings, expensive devices, or
significant manual effort. On the other hand, methods that are automatic and
work on 'in the wild' Internet images often extract only low-frequency
lighting or diffuse materials. In this work, we propose to make use of a set of
photographs in order to jointly estimate the non-diffuse materials and sharp
lighting in an uncontrolled setting. Our key observation is that seeing
multiple instances of the same material under different illumination (i.e.,
environment), and different materials under the same illumination, provides
valuable constraints that can be exploited to yield a high-quality solution
(i.e., specular materials and environment illumination) for all the observed
materials and environments. Similar constraints also arise when observing
multiple materials in a single environment, or a single material across
multiple environments. The core of this approach is an optimization procedure
that uses two neural networks that are trained on synthetic images to predict
good gradients in parametric space given observation of reflected light. We
evaluate our method on a range of synthetic and real examples to generate
high-quality estimates, qualitatively compare our results against
state-of-the-art alternatives via a user study, and demonstrate
photo-consistent image manipulation that is otherwise very challenging to
achieve.
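The shape of the optimization core can be sketched abstractly: instead of analytic derivatives of a renderer, two synthetic-trained networks map the current estimates and the observed reflected light to update directions. The callables below are hypothetical stand-ins for those networks, not the paper's interfaces.

```python
def optimize(material, envmap, observations,
             material_grad_net, envmap_grad_net, steps=200, lr=1e-2):
    """Alternating first-order updates driven by learned gradient
    predictors. `material_grad_net` and `envmap_grad_net` are
    hypothetical stand-ins for the two synthetic-trained networks."""
    for _ in range(steps):
        material = material - lr * material_grad_net(material, envmap, observations)
        envmap = envmap - lr * envmap_grad_net(material, envmap, observations)
    return material, envmap
```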