Linear Differential Constraints for Photo-polarimetric Height Estimation
In this paper we present a differential approach to photo-polarimetric shape
estimation. We propose several alternative differential constraints based on
polarisation and photometric shading information and show how to express them
in a unified partial differential system. Our method uses the image ratios
technique to combine shading and polarisation information in order to directly
reconstruct surface height, without first computing surface normal vectors.
Moreover, we are able to remove the non-linearities so that the problem reduces
to solving a linear differential problem. We also introduce a new method for
estimating a polarisation image from multichannel data and, finally, we show it
is possible to estimate the illumination directions in a two source setup,
extending the method into an uncalibrated scenario. From a numerical point of
view, we use a least-squares formulation of the discrete version of the
problem. To the best of our knowledge, this is the first work to consider a
unified differential approach to solve photo-polarimetric shape estimation
directly for height. Numerical results on synthetic and real-world data confirm
the effectiveness of our proposed method.

Comment: To appear at International Conference on Computer Vision (ICCV),
Venice, Italy, October 22-29, 2017
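The abstract's central numerical idea, solving a linear differential problem for height directly via least squares, can be sketched as below. This is a generic gradient-constraint solver, not the paper's photo-polarimetric formulation: the function name and the forward-difference discretization are illustrative assumptions, and the paper's actual constraints come from polarisation and shading ratios rather than a given gradient field.

```python
import numpy as np

def integrate_height(p, q):
    """Least-squares height from target gradient fields (p, q).

    Hypothetical sketch: stack forward-difference constraints
    Dx z = p and Dy z = q into one linear system and solve it,
    mirroring the idea of solving a linear differential problem
    directly for height instead of integrating normals.
    """
    h, w = p.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:   # dz/dx ~ z[i, j+1] - z[i, j]
                rows += [r, r]; cols += [idx[i, j + 1], idx[i, j]]
                vals += [1.0, -1.0]; rhs.append(p[i, j]); r += 1
            if i + 1 < h:   # dz/dy ~ z[i+1, j] - z[i, j]
                rows += [r, r]; cols += [idx[i + 1, j], idx[i, j]]
                vals += [1.0, -1.0]; rhs.append(q[i, j]); r += 1
    A = np.zeros((r, n))
    A[rows, cols] = vals
    z, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    # Height is only defined up to an additive constant; fix it to zero mean.
    return z.reshape(h, w) - z.mean()
```

For real images the dense matrix above would be replaced by a sparse one, but the structure of the least-squares formulation is the same.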
Single-image RGB Photometric Stereo With Spatially-varying Albedo
We present a single-shot system to recover surface geometry of objects with
spatially-varying albedos, from images captured under a calibrated RGB
photometric stereo setup---with three light directions multiplexed across
different color channels in the observed RGB image. Since the problem is
ill-posed point-wise, we assume that the albedo map can be modeled as
piece-wise constant with a restricted number of distinct albedo values. We show
that under ideal conditions, the shape of a non-degenerate local constant
albedo surface patch can theoretically be recovered exactly. Moreover, we
present a practical and efficient algorithm that uses this model to robustly
recover shape from real images. Our method first reasons about shape locally in
a dense set of patches in the observed image, producing shape distributions for
every patch. These local distributions are then combined to produce a single
consistent surface normal map. We demonstrate the efficacy of the approach
through experiments on both synthetic renderings as well as real captured
images.

Comment: 3DV 2016. Project page at http://www.ttic.edu/chakrabarti/rgbps
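The per-pixel core of color-multiplexed photometric stereo can be sketched as a single 3x3 solve. This is a minimal illustration under the assumptions stated in the comments (known albedo, calibrated lights, Lambertian pixel); the paper's contribution is precisely the patch-based reasoning that removes the need to know the albedo point-wise, which is not shown here.

```python
import numpy as np

def normal_from_rgb(i_rgb, L, albedo):
    """One-pixel sketch of calibrated RGB photometric stereo.

    Assumes each color channel c observes the surface under one
    multiplexed light direction l_c:  i_c = albedo * dot(l_c, n).
    L is the calibrated 3x3 light matrix (rows are the l_c);
    with a known albedo, the normal follows from one linear solve.
    """
    n = np.linalg.solve(L, i_rgb / albedo)
    return n / np.linalg.norm(n)
```

The per-pixel problem is ill-posed exactly because `albedo` is unknown here, which is why the paper assumes a piece-wise constant albedo map over patches.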
A Novel Framework for Highlight Reflectance Transformation Imaging
We propose a novel pipeline and related software tools for processing the multi-light image collections (MLICs) acquired in different application contexts, to obtain shape and appearance information of captured surfaces and to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. In particular, we support perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization that corrects vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. In addition to supporting easy processing and encoding of pixel data, the tools implement a variety of visualizations as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications.

Terms: "European Union (EU)" & "Horizon 2020" / Action: H2020-EU.3.6.3. - Reflective societies - cultural heritage and European identity / Acronym: Scan4Reco / Grant number: 665091; DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
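The "reflectance-model-fitting" step in RTI pipelines is classically a per-pixel least-squares fit of a Polynomial Texture Map (PTM): a biquadratic in the light direction's tangent-plane components. The sketch below shows that standard fit; it is not the paper's specific fitter, and the function names are illustrative.

```python
import numpy as np

def fit_ptm(lu, lv, intensities):
    """Per-pixel PTM fit, the classic RTI reflectance model.

    lu, lv: x/y components of the light direction for each image
            in the multi-light collection (one entry per image).
    intensities: observed pixel values under each light.
    Returns the 6 biquadratic coefficients from a least-squares fit:
        I ~ a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    """
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM under a new light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return float(coeffs @ basis)
```

A compact relightable representation then stores just the six coefficients per pixel, and `relight` evaluates them interactively for any user-chosen light.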
Recovering facial shape using a statistical model of surface normal direction
In this paper, we show how a statistical model of facial shape can be embedded within a shape-from-shading algorithm. We describe how facial shape can be captured using a statistical model of variations in surface normal direction. To construct this model, we make use of the azimuthal equidistant projection to map the distribution of surface normals from the polar representation on a unit sphere to Cartesian points on a local tangent plane. The distribution of surface normal directions is captured using the covariance matrix for the projected point positions. The eigenvectors of the covariance matrix define the modes of shape-variation in the fields of transformed surface normals. We show how this model can be trained using surface normal data acquired from range images and how to fit the model to intensity images of faces using constraints on the surface normal direction provided by Lambert's law. We demonstrate that the combination of a global statistical constraint and local irradiance constraint yields an efficient and accurate approach to facial shape recovery and is capable of recovering fine local surface details. We assess the accuracy of the technique on a variety of images with ground truth, as well as on real-world images.
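The model-building step described above (azimuthal equidistant projection of normals, then eigen-analysis of the projected points' covariance) can be sketched as follows. For simplicity this projects about the fixed pole (0, 0, 1), whereas the paper projects about a local mean direction; the function names are illustrative.

```python
import numpy as np

def aep_project(normals):
    """Azimuthal equidistant projection of unit normals about the
    pole (0, 0, 1): each normal maps to a tangent-plane point whose
    distance from the origin equals its angular distance from the
    pole, so angular structure on the sphere is preserved locally.
    """
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))  # angle from pole
    phi = np.arctan2(normals[:, 1], normals[:, 0])        # azimuth
    return np.stack([theta * np.cos(phi), theta * np.sin(phi)], axis=1)

def shape_modes(points):
    """Modes of normal-direction variation: eigenvectors of the
    covariance of projected points, largest eigenvalue first."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]
```

In the full method these modes are computed jointly over whole fields of normals from training range images, and fitting projects image-derived normals onto the leading modes.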
Variational Uncalibrated Photometric Stereo under General Lighting
Photometric stereo (PS) techniques nowadays remain constrained to an ideal
laboratory setup where modeling and calibration of lighting are tractable. To
eliminate such restrictions, we propose an efficient principled variational
approach to uncalibrated PS under general illumination. To this end, the
Lambertian reflectance model is approximated through a spherical harmonic
expansion, which preserves the spatial invariance of the lighting. The joint
recovery of shape, reflectance and illumination is then formulated as a single
variational problem. There the shape estimation is carried out directly in
terms of the underlying perspective depth map, thus implicitly ensuring
integrability and bypassing the need for a subsequent normal integration. To
tackle the resulting nonconvex problem numerically, we undertake a two-phase
procedure to initialize a balloon-like perspective depth map, followed by a
"lagged" block coordinate descent scheme. The experiments validate efficiency
and robustness of this approach. Across a variety of evaluations, we are able
to reduce the mean angular error consistently by a factor of 2-3 compared to
the state-of-the-art.

Comment: Haefner and Ye contributed equally
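The first-order spherical harmonic approximation of Lambertian reflectance mentioned above makes shading an affine function of the surface normal. The sketch below shows that forward model, plus the mean angular error metric quoted in the evaluation; the coefficient values in the test are hypothetical, since in the paper the lighting vector is jointly estimated, not given.

```python
import numpy as np

def sh1_shading(normal, albedo, l):
    """First-order spherical-harmonic Lambertian shading:
    irradiance is affine in the unit normal, I = albedo * (l0 + l[1:] . n).
    l: 4 SH lighting coefficients (here assumed known; the paper
    recovers them jointly with depth and reflectance)."""
    return albedo * (l[0] + l[1:] @ normal)

def mean_angular_error(n_est, n_true):
    """Mean angular error (degrees) between two unit-normal fields,
    the standard accuracy metric in photometric stereo evaluations."""
    cosang = np.clip(np.sum(n_est * n_true, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosang)).mean()
```

Because the shading model is affine in the normal, writing the normal in terms of the perspective depth map keeps the variational problem expressible directly in depth, which is what makes the integrability-free formulation possible.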
Joint Material and Illumination Estimation from Photo Sets in the Wild
Faithful manipulation of shape, material, and illumination in 2D Internet
images would greatly benefit from a reliable factorization of appearance into
material (i.e., diffuse and specular) and illumination (i.e., environment
maps). On the one hand, current methods that produce very high-fidelity
results typically require controlled settings, expensive devices, or
significant manual effort. On the other hand, methods that are automatic and
work on 'in the wild' Internet images, often extract only low-frequency
lighting or diffuse materials. In this work, we propose to make use of a set of
photographs in order to jointly estimate the non-diffuse materials and sharp
lighting in an uncontrolled setting. Our key observation is that seeing
multiple instances of the same material under different illumination (i.e.,
environments), and different materials under the same illumination, provides
valuable constraints that can be exploited to yield a high-quality solution
(i.e., specular materials and environment illumination) for all the observed
materials and environments. The same constraints arise whether we observe
multiple materials in a single environment or a single material across
multiple environments. The core of this approach is an optimization procedure
that uses two neural networks that are trained on synthetic images to predict
good gradients in parametric space given observation of reflected light. We
evaluate our method on a range of synthetic and real examples to generate
high-quality estimates, qualitatively compare our results against
state-of-the-art alternatives via a user study, and demonstrate
photo-consistent image manipulation that is otherwise very challenging to
achieve.
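The cross-constraint structure described above (shared materials across environments, shared environments across materials) can be illustrated with a drastically simplified toy factorization. Here each observation is modeled as a scalar product of a material value and a light value, solved by alternating least squares; this replaces the paper's actual machinery (BRDF parameters, environment maps, and neural networks predicting gradients) and is only meant to show why observing the same factors in multiple combinations pins down the solution.

```python
import numpy as np

def factor_appearance(obs, iters=200):
    """Toy rank-1 appearance factorization.

    obs[m, e]: observed brightness of material m in environment e,
    modeled as material[m] * light[e] (scalars; a hypothetical stand-in
    for specular BRDFs and environment maps). Alternating least-squares
    updates; the global scale ambiguity is resolved by light[0] = 1.
    """
    M, E = obs.shape
    mat, light = np.ones(M), np.ones(E)
    for _ in range(iters):
        mat = (obs @ light) / (light @ light)      # fix lights, fit materials
        light = (obs.T @ mat) / (mat @ mat)        # fix materials, fit lights
        light /= light[0]                          # remove scale ambiguity
    return mat, light
```

With only one material or one environment the factorization would be ambiguous up to scale per observation; multiple combinations of the two factors are exactly what makes the joint estimate well constrained.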