Linear Differential Constraints for Photo-polarimetric Height Estimation
In this paper we present a differential approach to photo-polarimetric shape
estimation. We propose several alternative differential constraints based on
polarisation and photometric shading information and show how to express them
in a unified partial differential system. Our method uses the image ratios
technique to combine shading and polarisation information in order to directly
reconstruct surface height, without first computing surface normal vectors.
Moreover, we are able to remove the non-linearities so that the problem reduces
to solving a linear differential problem. We also introduce a new method for
estimating a polarisation image from multichannel data and, finally, we show it
is possible to estimate the illumination directions in a two source setup,
extending the method into an uncalibrated scenario. From a numerical point of
view, we use a least-squares formulation of the discrete version of the
problem. To the best of our knowledge, this is the first work to consider a
unified differential approach to solve photo-polarimetric shape estimation
directly for height. Numerical results on synthetic and real-world data confirm
the effectiveness of our proposed method.
Comment: To appear at the International Conference on Computer Vision (ICCV), Venice, Italy, October 22-29, 201
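As a rough illustration of the discrete least-squares step (a sketch, not the paper's actual photo-polarimetric constraint set), the following integrates a height map from a target gradient field by solving a sparse linear system; in the paper, the unified differential constraints would supply the equations, which are stood in for here by plain finite differences on a given (p, q):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def integrate_height(p, q):
    """Least-squares height from a target gradient field (p, q).

    Stand-in for the paper's linear differential system: the
    photo-polarimetric constraints would supply (p, q); here we only
    show the discrete least-squares solve for the height z.
    """
    h, w = p.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    # forward differences in x: z[y, x+1] - z[y, x] = p[y, x]
    for y in range(h):
        for x in range(w - 1):
            rows += [eq, eq]; cols += [idx[y, x + 1], idx[y, x]]
            vals += [1.0, -1.0]; rhs.append(p[y, x]); eq += 1
    # forward differences in y: z[y+1, x] - z[y, x] = q[y, x]
    for y in range(h - 1):
        for x in range(w):
            rows += [eq, eq]; cols += [idx[y + 1, x], idx[y, x]]
            vals += [1.0, -1.0]; rhs.append(q[y, x]); eq += 1
    # pin one pixel to fix the constant of integration
    rows.append(eq); cols.append(0); vals.append(1.0); rhs.append(0.0); eq += 1

    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, n))
    z = spla.lsqr(A, np.array(rhs))[0]
    return z.reshape(h, w)
```

Because the system is solved in a least-squares sense, inconsistent gradient fields (as arise from noisy constraints) are still handled gracefully.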
Joint Material and Illumination Estimation from Photo Sets in the Wild
Faithful manipulation of shape, material, and illumination in 2D Internet
images would greatly benefit from a reliable factorization of appearance into
material (i.e., diffuse and specular) and illumination (i.e., environment
maps). On the one hand, current methods that produce very high-fidelity results typically require controlled settings, expensive devices, or significant manual effort. On the other hand, methods that are automatic and work on 'in the wild' Internet images often extract only low-frequency
lighting or diffuse materials. In this work, we propose to make use of a set of
photographs in order to jointly estimate the non-diffuse materials and sharp
lighting in an uncontrolled setting. Our key observation is that seeing
multiple instances of the same material under different illumination (i.e.,
environment), and different materials under the same illumination, provides valuable constraints that can be exploited to yield a high-quality solution
(i.e., specular materials and environment illumination) for all the observed
materials and environments. Similar constraints also arise when observing
multiple materials in a single environment, or a single material across
multiple environments. The core of this approach is an optimization procedure
that uses two neural networks that are trained on synthetic images to predict
good gradients in parameter space given observations of reflected light. We
evaluate our method on a range of synthetic and real examples to generate
high-quality estimates, qualitatively compare our results against
state-of-the-art alternatives via a user study, and demonstrate
photo-consistent image manipulation that is otherwise very challenging to
achieve.
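The learned-gradient optimization can be caricatured as follows; `render` and `grad_net` are hypothetical stand-ins (here a linear forward model and its exact analytic gradient) for the paper's appearance renderer and trained networks:

```python
import numpy as np

# Toy stand-in for the paper's setup: `render` is a simple forward model
# (linear in the parameters) and `grad_net` plays the role of the trained
# networks, mapping the observation residual to a descent direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # "observation conditions"
true_params = np.array([1.0, -2.0, 0.5])  # material/illumination parameters
observations = X @ true_params            # "reflected light" measurements

def render(params):
    return X @ params

def grad_net(residual):
    # Exact gradient of 0.5 * ||residual||^2 w.r.t. the parameters for a
    # linear model; a trained network would approximate such a mapping.
    return -X.T @ residual

params = np.zeros(3)
for _ in range(200):
    residual = observations - render(params)
    params = params - 0.01 * grad_net(residual)
```

The point of the learned version is that for a non-linear, non-differentiable or noisy renderer, a network trained on synthetic data can supply usable descent directions where analytic gradients are unavailable.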
Photometric stereo for strong specular highlights
Photometric stereo (PS) is a fundamental technique in computer vision known
to produce 3-D shape with high accuracy. The PS setting uses several input images of a static scene taken from a fixed camera position under varying illumination. The vast majority of studies in this
3-D reconstruction method assume orthographic projection for the camera model.
In addition, they mainly consider the Lambertian reflectance model as the way
that light scatters at surfaces. Consequently, producing reliable PS results for real-world objects remains a challenging task. We address 3-D reconstruction
by PS using a more realistic set of assumptions combining for the first time
the complete Blinn-Phong reflectance model and perspective projection. To this
end, we will compare two different methods of incorporating the perspective
projection into our model. Experiments are performed on both synthetic and real
world images. Note that our real-world experiments do not benefit from
laboratory conditions. The results show the high potential of our method even
for complex real-world applications such as medical endoscopy images, which may include strong specular highlights.
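For context, the classical Lambertian, orthographic PS baseline that the paper generalizes can be sketched as a per-pixel least-squares solve (a textbook sketch, not the authors' Blinn-Phong/perspective method):

```python
import numpy as np

def lambertian_ps(images, lights):
    """Baseline Lambertian photometric stereo (orthographic camera).

    images: (k, h, w) intensities under k distant light sources
    lights: (k, 3) unit light directions
    Returns per-pixel albedo and unit normals. The paper replaces this
    Lambertian assumption with the full Blinn-Phong model and perspective
    projection; this is only the classical starting point.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                        # (k, n)
    # Solve lights @ G = I in the least-squares sense; G = albedo * normal
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, n)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return albedo.reshape(h, w), normals.reshape(3, h, w)
```

With at least three non-coplanar lights the system is determined; specular highlights violate the Lambertian model and corrupt this estimate, which is exactly the failure mode the abstract targets.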
NeISF: Neural Incident Stokes Field for Geometry and Material Estimation
Multi-view inverse rendering is the problem of estimating the scene
parameters such as shapes, materials, or illuminations from a sequence of
images captured from different viewpoints. Many approaches, however, assume a single light bounce and thus fail to recover challenging scenarios like
inter-reflections. On the other hand, simply extending those methods to
consider multi-bounced light requires more assumptions to alleviate the
ambiguity. To address this problem, we propose Neural Incident Stokes Fields
(NeISF), a multi-view inverse rendering framework that reduces ambiguities
using polarization cues. The primary motivation for using polarization cues is that polarization accumulates over multiple light bounces, providing rich information about geometry and material. Based on this knowledge, the proposed incident
Stokes field efficiently models the accumulated polarization effect with the
aid of an original physically-based differentiable polarimetric renderer.
Lastly, experimental results show that our method outperforms existing works in both synthetic and real scenarios.
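A minimal sketch of the polarization quantities involved: the Stokes-vector formalism with an ideal linear-polarizer Mueller matrix, plus the degree and angle of linear polarization (standard textbook definitions, not the paper's differentiable renderer):

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians).

    Acts on Stokes vectors (s0, s1, s2, s3); standard formula, used here
    only to illustrate how polarization state transforms per interaction.
    """
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1.0,   c,     s,   0.0],
        [c,   c * c, c * s, 0.0],
        [s,   c * s, s * s, 0.0],
        [0.0, 0.0,   0.0,   0.0],
    ])

def dolp_aolp(stokes):
    """Degree and angle of linear polarization of a Stokes vector."""
    s0, s1, s2, _ = stokes
    dolp = np.hypot(s1, s2) / s0
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp
```

Each bounce multiplies the Stokes vector by another Mueller matrix, which is why the accumulated polarization state carries information about the whole light path rather than just the last surface.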
Polarization imaging reflectometry in the wild
We present a novel approach for on-site acquisition of surface reflectance for planar, spatially varying, isotropic materials in uncontrolled outdoor environments. Our method exploits the naturally occurring linear polarization of incident illumination: by rotating a linear polarizing filter in front of a camera to three different orientations, we measure the linear polarization reflected off the sample and combine this information with multiview analysis and inverse rendering in order to recover per-pixel, high-resolution reflectance maps. We exploit polarization both for diffuse/specular separation and for surface normal estimation by combining polarization measurements from at least two near-orthogonal views close to the Brewster angle of incidence. We then use our estimates of surface normals and albedos in an inverse rendering framework to recover specular roughness. To the best of our knowledge, our method is the first to successfully extract a complete set of reflectance parameters with passive capture in completely uncontrolled outdoor environments.
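The three-orientation measurement admits a closed-form fit of the polarization sinusoid; this textbook sketch (not the authors' full pipeline) recovers I_max, I_min and the phase that such methods use for diffuse/specular separation and normal estimation:

```python
import numpy as np

def sinusoid_from_three(i0, i45, i90):
    """Recover the polarization sinusoid from filter images at 0, 45, 90 deg.

    Model: I(theta) = (Imax+Imin)/2 + (Imax-Imin)/2 * cos(2*theta - 2*phi).
    Three orientations determine the linear Stokes components, and hence
    Imax, Imin and the phase phi. Inputs may be scalars or image arrays.
    """
    s0 = i0 + i90          # total intensity
    s1 = i0 - i90          # linear Stokes component along 0/90
    s2 = 2.0 * i45 - s0    # linear Stokes component along 45/135
    amp = np.hypot(s1, s2)
    i_max = 0.5 * (s0 + amp)
    i_min = 0.5 * (s0 - amp)
    phi = 0.5 * np.arctan2(s2, s1)
    return i_max, i_min, phi
```

Under the usual dielectric assumptions, I_min approximates the unpolarized (diffuse-dominated) part while I_max - I_min captures the polarized (specular-dominated) part, and phi constrains the surface normal's azimuth.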
On-site surface reflectometry
The rapid development of Augmented Reality (AR) and Virtual Reality (VR)
applications over the past years has created the need to quickly and accurately scan
the real world to populate immersive, realistic virtual environments for the end
user to enjoy. While geometry processing has already gone a long way towards that
goal, with self-contained solutions commercially available for on-site acquisition of
large scale 3D models, capturing the appearance of the materials that compose
those models remains an open problem in general uncontrolled environments.
The appearance of a material is indeed a complex function of its geometry and intrinsic physical properties, and it furthermore depends on the illumination conditions
in which it is observed, thus traditionally limiting the scope of reflectometry
to highly controlled lighting conditions in a laboratory setup. With the rapid development
of digital photography, especially on mobile devices, a new trend in the
appearance modelling community has emerged that investigates novel acquisition
methods and algorithms to relax the hard constraints imposed by laboratory-like
setups, for easy use by digital artists. While arguably not as accurate, such self-contained methods, as we demonstrate, enable quick and easy on-site reflectometry and can produce compelling, photo-realistic imagery.
In particular, this dissertation investigates novel methods for on-site acquisition
of surface reflectance based on off-the-shelf, commodity hardware. We successfully
demonstrate how a mobile device can be utilised to capture high quality
reflectance maps of spatially-varying planar surfaces in general indoor lighting
conditions. We further present a novel methodology for the acquisition of highly
detailed reflectance maps of permanent on-site, outdoor surfaces by exploiting
polarisation from reflection under natural illumination.
We demonstrate the versatility of the presented approaches by scanning various
surfaces from the real world and show good qualitative and quantitative agreement
with existing methods for appearance acquisition employing controlled or
semi-controlled illumination setups.
Embedded polarizing filters to separate diffuse and specular reflection
Polarizing filters provide a powerful way to separate diffuse and specular
reflection; however, traditional methods rely on several captures and require
proper alignment of the filters. Recently, camera manufacturers have proposed
to embed polarizing micro-filters in front of the sensor, creating a mosaic of
pixels with different polarizations. In this paper, we investigate the
advantages of such camera designs. In particular, we consider different design
patterns for the filter arrays and propose an algorithm to demosaic an image
generated by such cameras. This essentially allows us to separate the diffuse
and specular components using a single image. The performance of our algorithm
is compared with a color-based method using synthetic and real data. Finally,
we demonstrate how we can recover the normals of a scene using the diffuse
images estimated by our method.
Comment: ACCV 201
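A minimal sketch of reading out such a sensor, assuming the common 0/45/90/135-degree 2x2 micro-filter layout (one of several possible design patterns the paper considers; a real demosaicing algorithm recovers full resolution, whereas this only bins 2x2 super-pixels):

```python
import numpy as np

def stokes_from_mosaic(raw):
    """Per-super-pixel linear Stokes parameters from a 2x2 polarizer mosaic.

    Assumed layout of polarizer angles over each 2x2 block:
        [[ 90, 45],
         [135,  0]]
    """
    i90, i45 = raw[0::2, 0::2], raw[0::2, 1::2]
    i135, i0 = raw[1::2, 0::2], raw[1::2, 1::2]
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0/90 linear component
    s2 = i45 - i135                      # 45/135 linear component
    return s0, s1, s2

def diffuse_specular(s0, s1, s2):
    """Split intensity into an unpolarized (diffuse-like) part and a
    polarized (specular-like) part; a simplification of the separation
    problem the paper addresses with a single polarization image."""
    pol = np.hypot(s1, s2)
    return s0 - pol, pol
```

The appeal of the embedded-filter design is visible even in this crude version: one raw frame yields all four orientations, so no filter rotation or multi-shot alignment is needed.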