DeepShadow: Neural Shape from Shadow
This paper presents DeepShadow, a one-shot method for recovering the depth
map and surface normals from photometric stereo shadow maps. Previous works
that try to recover the surface normals from photometric stereo images treat
cast shadows as a disturbance. We show that self and cast shadows not only do
not disturb 3D reconstruction, but can serve on their own as a strong learning
signal for recovering the depth map and surface normals. We demonstrate that 3D
reconstruction from shadows can even outperform shape-from-shading in certain
cases. To the best of our knowledge, our method is the first to reconstruct 3D
shape-from-shadows using neural networks. The method does not require any
pre-training or expensive labeled data, and is optimized at inference time.
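Since the claim is that shadows alone can supervise depth, it helps to see the forward model that makes this possible: given a candidate height map and a light direction, the cast-shadow map can be rendered by marching each pixel's ray toward the light, and the mismatch with observed shadow maps can then drive optimization. Below is a minimal NumPy sketch of such a shadow renderer under simple assumptions (distant light, heights in pixel units); names and conventions are illustrative, not the paper's code.

```python
import numpy as np

def render_shadow_map(height, light_dir, max_steps=256):
    """Binary cast-shadow map for a height field `height` (H, W), lit by a
    distant light along `light_dir` = (dx, dy, dz) with dz > 0 toward the light.
    A pixel is shadowed if its ray toward the light passes below the surface.
    Heights are assumed to be in pixel units; samples are clipped at borders."""
    h, w = height.shape
    dx, dy, dz = np.asarray(light_dir, dtype=float) / np.linalg.norm(light_dir)
    ys, xs = np.mgrid[0:h, 0:w]
    shadow = np.zeros((h, w), dtype=bool)
    for t in range(1, max_steps):
        # Point at parameter t along each pixel's ray toward the light.
        sx = np.clip(np.round(xs + t * dx).astype(int), 0, w - 1)
        sy = np.clip(np.round(ys + t * dy).astype(int), 0, h - 1)
        ray_height = height + t * dz
        shadow |= height[sy, sx] > ray_height
    return shadow
```

A differentiable variant would replace the hard comparison with a soft one (e.g., a sigmoid of the height difference) so that gradients can flow back to the height map during inference-time optimization.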
Photometric Depth Super-Resolution
This study explores the use of photometric techniques (shape-from-shading and
uncalibrated photometric stereo) for upsampling the low-resolution depth map
from an RGB-D sensor to the higher resolution of the companion RGB image. A
single-shot variational approach is first put forward, which is effective as
long as the target's reflectance is piecewise-constant. It is then shown that
this dependency upon a specific reflectance model can be relaxed by focusing on
a specific class of objects (e.g., faces) and delegating reflectance estimation
to a deep neural network. Finally, a multi-shot strategy based on randomly
varying lighting conditions is discussed. It requires no training or prior
on the reflectance, yet this comes at the price of a dedicated acquisition
setup. Both quantitative and qualitative evaluations illustrate the
effectiveness of the proposed methods on synthetic and real-world scenarios.
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence
(T-PAMI), 2019. The first three authors contributed equally.
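For intuition, single-shot photometric depth super-resolution of this kind is typically posed as a variational problem coupling a shading-based photometric term on the high-resolution depth with a data term tying its downsampled version to the sensor measurement. A generic sketch of such an energy (not the paper's exact functional, assuming Lambertian shading with lighting \ell and a downsampling operator K):

```latex
\min_{z,\,\rho}\;
\int_{\Omega}\bigl(\rho\,\langle n(z),\ell\rangle - I_{\mathrm{HR}}\bigr)^{2}\,\mathrm{d}x
\;+\;\mu \int_{\Omega_{\mathrm{LR}}}\bigl(Kz - z_{\mathrm{LR}}\bigr)^{2}\,\mathrm{d}x
```

where n(z) is the normal field induced by the depth z. The piecewise-constant reflectance assumption enters as a prior on \rho; the learned variant replaces that hand-crafted reflectance term with a network prediction.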
Depth Super-Resolution Meets Uncalibrated Photometric Stereo
A novel depth super-resolution approach for RGB-D sensors is presented. It
disambiguates depth super-resolution through high-resolution photometric clues
and, symmetrically, it disambiguates uncalibrated photometric stereo through
low-resolution depth cues. To this end, an RGB-D sequence is acquired from the
same viewing angle, while illuminating the scene from various uncalibrated
directions. This sequence is handled by a variational framework which fits
high-resolution shape and reflectance, as well as lighting, to both the
low-resolution depth measurements and the high-resolution RGB ones. The key
novelty consists in a new PDE-based photometric stereo regularizer which
implicitly ensures surface regularity. This makes it possible to carry out depth
super-resolution in a purely data-driven manner, without the need for any
ad-hoc prior or material calibration. Real-world experiments are carried out
using an out-of-the-box RGB-D sensor and a hand-held LED light source.
Comment: International Conference on Computer Vision (ICCV) Workshop, 2017.
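One concrete ingredient of such alternating variational schemes is the lighting update: with the current shape and reflectance fixed, each image's uncalibrated directional lighting reduces to a small linear least-squares problem. The helper below is a hedged sketch under a Lambertian, distant-light assumption; it is illustrative, not the paper's solver.

```python
import numpy as np

def update_lighting(normals, albedo, image):
    """Least-squares lighting step for one grayscale image in an alternating
    uncalibrated photometric stereo scheme: given normals (P, 3) and albedo
    (P,), solve min_l || albedo * (normals @ l) - image ||^2 for the light l."""
    A = albedo[:, None] * normals          # (P, 3) shading matrix
    l, *_ = np.linalg.lstsq(A, image, rcond=None)
    return l
```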
Photometric stereo for strong specular highlights
Photometric stereo (PS) is a fundamental technique in computer vision known
to produce 3-D shape with high accuracy. A PS setup acquires several input
images of a static scene from a fixed camera position under varying
illumination. The vast majority of studies of this 3-D reconstruction method
assume an orthographic camera model and, in addition, mostly adopt the
Lambertian reflectance model to describe how light scatters at surfaces.
Consequently, obtaining reliable PS results for real-world objects remains a
challenging task. We address 3-D reconstruction
by PS using a more realistic set of assumptions combining for the first time
the complete Blinn-Phong reflectance model and perspective projection. To this
end, we compare two methods of incorporating the perspective
projection into our model. Experiments are performed on both synthetic and real
world images. Note that our real-world experiments do not benefit from
laboratory conditions. The results show the high potential of our method even
for complex real-world applications such as medical endoscopy images, which may
contain strong specular highlights.
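To make the reflectance model concrete: Blinn-Phong augments the Lambertian diffuse term with a specular lobe around the half-vector between the light and view directions. A minimal NumPy evaluation is sketched below (names are illustrative; the ambient term is omitted for brevity). Under perspective projection the view vector varies per pixel rather than being a constant (0, 0, 1), which is exactly where the two projection models in the abstract differ.

```python
import numpy as np

def blinn_phong_intensity(n, l, v, k_d, k_s, shininess):
    """Blinn-Phong shading for unit vectors n (normal), l (light), v (view),
    each of shape (..., 3):
    I = k_d * max(<n, l>, 0) + k_s * max(<n, h>, 0)**shininess,
    with the half-vector h = (l + v) / |l + v|."""
    h = l + v
    h = h / np.linalg.norm(h, axis=-1, keepdims=True)
    diffuse = np.clip(np.sum(n * l, axis=-1), 0.0, None)
    specular = np.clip(np.sum(n * h, axis=-1), 0.0, None) ** shininess
    return k_d * diffuse + k_s * specular
```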
Single-image RGB Photometric Stereo With Spatially-varying Albedo
We present a single-shot system to recover surface geometry of objects with
spatially-varying albedos, from images captured under a calibrated RGB
photometric stereo setup, with three light directions multiplexed across
different color channels in the observed RGB image. Since the problem is
ill-posed point-wise, we assume that the albedo map can be modeled as
piecewise constant with a small number of distinct albedo values. We show
that under ideal conditions, the shape of a non-degenerate local constant
albedo surface patch can theoretically be recovered exactly. Moreover, we
present a practical and efficient algorithm that uses this model to robustly
recover shape from real images. Our method first reasons about shape locally in
a dense set of patches in the observed image, producing shape distributions for
every patch. These local distributions are then combined to produce a single
consistent surface normal map. We demonstrate the efficacy of the approach
through experiments on both synthetic renderings as well as real captured
images.
Comment: 3DV 2016. Project page at http://www.ttic.edu/chakrabarti/rgbps
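To see where the point-wise ill-posedness lies: each pixel provides only three measurements, one per color channel, while the unknowns include both the normal and the per-channel albedo. If the albedo were known, each pixel would be exactly determined. A hedged per-pixel sketch under that assumption (the paper instead estimates a piecewise-constant albedo map jointly with shape):

```python
import numpy as np

def normal_from_rgb_pixel(rgb, light_dirs, albedo):
    """Normal from a single color-multiplexed photometric stereo measurement:
    rgb[c] = albedo[c] * <light_dirs[c], n> for c in {R, G, B}. With known
    per-channel albedo this is one 3x3 linear solve per pixel."""
    L = np.asarray(light_dirs, dtype=float)   # (3, 3), one direction per row
    b = np.asarray(rgb, dtype=float) / np.asarray(albedo, dtype=float)
    n = np.linalg.solve(L, b)
    return n / np.linalg.norm(n)
```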
Variational Uncalibrated Photometric Stereo under General Lighting
Photometric stereo (PS) techniques remain largely constrained to an ideal
laboratory setup in which lighting can be modeled and calibrated. To
eliminate such restrictions, we propose an efficient principled variational
approach to uncalibrated PS under general illumination. To this end, the
Lambertian reflectance model is approximated through a spherical harmonic
expansion, which preserves the spatial invariance of the lighting. The joint
recovery of shape, reflectance and illumination is then formulated as a single
variational problem, in which shape estimation is carried out directly in
terms of the underlying perspective depth map, thus implicitly ensuring
integrability and bypassing the need for a subsequent normal integration. To
tackle the resulting nonconvex problem numerically, we undertake a two-phase
procedure to initialize a balloon-like perspective depth map, followed by a
"lagged" block coordinate descent scheme. The experiments validate efficiency
and robustness of this approach. Across a variety of evaluations, we are able
to reduce the mean angular error consistently by a factor of 2-3 compared to
the state of the art.
Comment: Haefner and Ye contributed equally.
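The spherical harmonic approximation mentioned above makes general lighting tractable because Lambertian shading becomes linear in a small set of coefficients: four per channel at first order, nine at second. A minimal sketch of the resulting image formation model, with names chosen for illustration:

```python
import numpy as np

def sh_render(normals, albedo, sh_coeffs):
    """First-order spherical-harmonics Lambertian shading: per pixel,
    I = albedo * <sh_coeffs, [1, nx, ny, nz]> (basis given up to constant
    factors). normals: (P, 3), albedo: (P,), sh_coeffs: (4,)."""
    basis = np.concatenate([np.ones((normals.shape[0], 1)), normals], axis=1)
    return albedo * (basis @ sh_coeffs)
```

Because the image is linear in the coefficients, the lighting update inside the block coordinate descent is again a per-channel linear least-squares problem.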
Linear Differential Constraints for Photo-polarimetric Height Estimation
In this paper we present a differential approach to photo-polarimetric shape
estimation. We propose several alternative differential constraints based on
polarisation and photometric shading information and show how to express them
in a unified partial differential system. Our method uses the image ratios
technique to combine shading and polarisation information in order to directly
reconstruct surface height, without first computing surface normal vectors.
Moreover, we are able to remove the non-linearities so that the problem reduces
to solving a linear differential problem. We also introduce a new method for
estimating a polarisation image from multichannel data and, finally, we show it
is possible to estimate the illumination directions in a two source setup,
extending the method into an uncalibrated scenario. From a numerical point of
view, we use a least-squares formulation of the discrete version of the
problem. To the best of our knowledge, this is the first work to consider a
unified differential approach to solve photo-polarimetric shape estimation
directly for height. Numerical results on synthetic and real-world data confirm
the effectiveness of our proposed method.
Comment: To appear at International Conference on Computer Vision (ICCV),
Venice, Italy, October 22-29, 2017.
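The image-ratio step that removes the non-linearities can be stated compactly for the shading part: with Lambertian shading I_k = \rho <n, l_k>, dividing two images cancels the unknown albedo, and cross-multiplying gives (I_2 l_1 - I_1 l_2) . n = 0, which is linear in the normal and, once n is written in terms of the height gradient, linear in the height. A small hedged sketch of the per-pixel coefficient vectors (shading-only, for illustration):

```python
import numpy as np

def ratio_constraint_coeffs(I1, I2, l1, l2):
    """Per-pixel coefficients w such that w . n = 0 under Lambertian shading,
    derived from the albedo-free ratio I1/I2 = <n, l1>/<n, l2>.
    I1, I2: intensities (P,); l1, l2: light directions (3,). Returns (P, 3)."""
    l1 = np.asarray(l1, dtype=float)
    l2 = np.asarray(l2, dtype=float)
    return I2[:, None] * l1[None, :] - I1[:, None] * l2[None, :]
```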