Single-image RGB Photometric Stereo With Spatially-varying Albedo
We present a single-shot system to recover surface geometry of objects with
spatially-varying albedos, from images captured under a calibrated RGB
photometric stereo setup---with three light directions multiplexed across
different color channels in the observed RGB image. Since the problem is
ill-posed point-wise, we assume that the albedo map can be modeled as
piece-wise constant with a restricted number of distinct albedo values. We show
that under ideal conditions, the shape of a non-degenerate local constant
albedo surface patch can theoretically be recovered exactly. Moreover, we
present a practical and efficient algorithm that uses this model to robustly
recover shape from real images. Our method first reasons about shape locally in
a dense set of patches in the observed image, producing shape distributions for
every patch. These local distributions are then combined to produce a single
consistent surface normal map. We demonstrate the efficacy of the approach
through experiments on both synthetic renderings and real captured images.
Comment: 3DV 2016. Project page at http://www.ttic.edu/chakrabarti/rgbps
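In the ideal calibrated Lambertian case with a single scalar (grey) albedo, the per-pixel solve behind RGB photometric stereo is a 3x3 linear system. The sketch below uses hypothetical light directions; as the abstract notes, with a spatially-varying per-channel albedo this point-wise solve becomes ill-posed, which is what motivates the paper's patch-based reasoning.

```python
import numpy as np

# Calibrated RGB photometric stereo, ideal Lambertian case. Light
# directions are hypothetical: one per color channel.
L = np.array([[0.0, 0.0, 1.0],               # light seen by the red channel
              [0.5, 0.0, np.sqrt(0.75)],     # green
              [0.0, 0.5, np.sqrt(0.75)]])    # blue

def solve_pixel(rgb):
    """Recover normal and scalar albedo from one RGB measurement,
    assuming a single grey albedo: I_c = rho * (l_c . n)."""
    b = np.linalg.solve(L, rgb)              # b = rho * n
    rho = np.linalg.norm(b)
    return b / rho, rho

# Synthetic check: render a pixel with rho = 0.7, then recover it.
n_true = np.array([0.3, -0.2, 0.933])
n_true /= np.linalg.norm(n_true)
rgb = 0.7 * L @ n_true
n_hat, rho_hat = solve_pixel(rgb)
print(np.allclose(n_hat, n_true), round(rho_hat, 3))   # True 0.7
```

With three independent albedo values per pixel (one per channel), the three measurements no longer pin down the normal, hence the paper's piece-wise constant albedo model.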
NeuralMPS: Non-Lambertian Multispectral Photometric Stereo via Spectral Reflectance Decomposition
Multispectral photometric stereo (MPS) aims at recovering the surface normal
of a scene from a single-shot multispectral image captured under multispectral
illuminations. Existing MPS methods adopt the Lambertian reflectance model to
make the problem tractable, but it greatly limits their application to
real-world surfaces. In this paper, we propose a deep neural network named
NeuralMPS to solve the MPS problem under general non-Lambertian spectral
reflectances. Specifically, we present a spectral reflectance
decomposition (SRD) model to disentangle the spectral reflectance into geometric
components and spectral components. With this decomposition, we show that the
MPS problem for surfaces with a uniform material is equivalent to the
conventional photometric stereo (CPS) with unknown light intensities. In this
way, NeuralMPS reduces the difficulty of the non-Lambertian MPS problem by
leveraging the well-studied non-Lambertian CPS methods. Experiments on both
synthetic and real-world scenes demonstrate the effectiveness of our method.
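The stated equivalence for a uniform material is the classic semi-calibrated photometric stereo problem: known light directions, unknown per-channel intensities. A hedged numpy sketch (synthetic Lambertian data, shadows ignored, not the paper's network) shows that a linear null-space solve recovers both normals and intensities:

```python
import numpy as np

rng = np.random.default_rng(0)
k, p = 6, 40                                   # channels/lights, pixels
L = rng.normal(size=(k, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)  # known light directions
N_true = rng.normal(size=(3, p))
N_true[2] = np.abs(N_true[2])                  # normals face the camera
N_true /= np.linalg.norm(N_true, axis=0, keepdims=True)
e_true = rng.uniform(0.5, 2.0, k)              # unknown effective intensities
I = np.diag(e_true) @ L @ N_true               # Lambertian, shadows ignored

# The row space of I equals the row space of the normals, so N = A @ Vt3
# for some unknown invertible 3x3 matrix A.
Vt3 = np.linalg.svd(I, full_matrices=False)[2][:3]
W = I @ Vt3.T                                  # W[j] = e_j * (L[j] @ A)

def skew(w):                                   # cross-product matrix
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

rows = []
for j in range(k):                             # W[j] x (L[j] @ A) = 0
    K = np.zeros((3, 9))                       # maps vec(A) to (L[j] @ A)
    for i in range(3):
        for m in range(3):
            K[i, 3 * m + i] = L[j, m]
    rows.append(skew(W[j]) @ K)
A = np.linalg.svd(np.vstack(rows))[2][-1].reshape(3, 3)   # null space of C

e = np.einsum('kj,kj->k', W, L @ A) / np.sum((L @ A) ** 2, axis=1)
if e.mean() < 0:                               # resolve the global sign
    A, e = -A, -e
B = A @ Vt3
N = B / np.linalg.norm(B, axis=0, keepdims=True)
print(np.allclose(N, N_true, atol=1e-6))       # normals recovered
```

The global scale of the intensities is unrecoverable (it trades off against albedo), but normal directions are fixed, which is exactly the reduction the abstract exploits.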
Colour Helmholtz Stereopsis for Reconstruction of Complex Dynamic Scenes
Helmholtz Stereopsis (HS) is a powerful technique for reconstruction of scenes with arbitrary reflectance properties. However, previous formulations have been limited to static objects due to the requirement to sequentially capture reciprocal image pairs (i.e. two images with the camera and light source positions mutually interchanged). In this paper, we propose colour HS, a novel variant of the technique based on wavelength multiplexing. To address the new set of challenges introduced by multispectral data acquisition, the proposed novel pipeline for colour HS uniquely combines a tailored photometric calibration for multiple camera/light source pairs, a novel procedure for surface chromaticity calibration and the state-of-the-art Bayesian HS suitable for reconstruction from a minimal number of reciprocal pairs. Experimental results including quantitative and qualitative evaluation demonstrate that the method is suitable for flexible (single-shot) reconstruction of static scenes and reconstruction of dynamic scenes with complex surface reflectance properties.
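The BRDF-independence of HS comes from Helmholtz reciprocity: for a reciprocal pair, the constraint (i1*v1/d1^2 - i2*v2/d2^2) . n = 0 holds for any reciprocal BRDF, so it can be used to score normal hypotheses. A numeric check with hypothetical geometry (a Lambertian BRDF stands in for brevity, but the identity does not depend on it):

```python
import numpy as np

# One surface point and one reciprocal camera/light pair.
p  = np.array([0.0, 0.0, 0.0])                 # surface point
n  = np.array([0.1, 0.2, 0.97]); n /= np.linalg.norm(n)
o1 = np.array([-0.4, 0.0, 1.0])                # camera in shot 1, light in shot 2
o2 = np.array([ 0.4, 0.1, 1.0])                # light in shot 1, camera in shot 2

v1, v2 = o1 - p, o2 - p
d1, d2 = np.linalg.norm(v1), np.linalg.norm(v2)
v1, v2 = v1 / d1, v2 / d2
rho = 0.6                                      # any reciprocal BRDF works

i1 = (rho / np.pi) * n.dot(v2) / d2**2         # image 1: lit from o2
i2 = (rho / np.pi) * n.dot(v1) / d1**2         # image 2: lit from o1

w = i1 * v1 / d1**2 - i2 * v2 / d2**2
print(abs(w.dot(n)) < 1e-12)                   # True: constraint holds
```

Because the BRDF value cancels, the same test works on glossy or anisotropic surfaces, which is why HS handles "arbitrary reflectance properties".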
Multi-spectral Material Classification in Landscape Scenes Using Commodity Hardware
We investigate the advantages of a stereo, multi-spectral
acquisition system for material classification in ground-level landscape
images. Our novel system allows us to acquire high-resolution,
multi-spectral stereo pairs using commodity photographic equipment. Given
additional spectral information we obtain better classification of
vegetation classes than the standard RGB case. We test the system in two
modes: splitting the visible spectrum into six bands; and extending the
recorded spectrum to near infra-red. Our six-band design is more practical
than standard multi-spectral techniques and foliage classification
using acquired images compares favourably to simply using a standard
camera.
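The benefit of extra bands can be illustrated with a minimal per-pixel classifier. Everything below is hypothetical (invented band signatures and class names, a nearest-centroid rule standing in for the paper's actual classifier): materials whose RGB projections coincide can still separate in six-band space.

```python
import numpy as np

rng = np.random.default_rng(1)
centroids = {                         # hypothetical mean 6-band signatures
    "grass": np.array([0.05, 0.08, 0.12, 0.10, 0.07, 0.45]),
    "bark":  np.array([0.20, 0.18, 0.16, 0.15, 0.14, 0.22]),
    "sky":   np.array([0.60, 0.65, 0.70, 0.72, 0.75, 0.30]),
}
names = list(centroids)
C = np.stack([centroids[k] for k in names])

def classify(pixels):
    """Label each 6-band pixel by its nearest class centroid."""
    d = np.linalg.norm(pixels[:, None, :] - C[None, :, :], axis=2)
    return [names[j] for j in d.argmin(axis=1)]

sample = centroids["grass"] + rng.normal(0.0, 0.01, 6)
print(classify(sample[None, :]))      # ['grass']
```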
In vivo measurement of skin microrelief using photometric stereo in the presence of interreflections
This paper proposes and describes an implementation of a novel photometric stereo based technique for in vivo assessment of three-dimensional (3D) skin topography in the presence of interreflections. The proposed method illuminates skin with red, green, and blue colored lights and uses the resulting variation in surface gradients to mitigate the effects of interreflections. Experiments were carried out on Caucasian, Asian and African American subjects to demonstrate the accuracy of our method and to validate the measurements produced by our system. Our method produced significant improvement in 3D surface reconstruction for all Caucasian, Asian and African American skin types. The results also illustrate the differences in recovered skin topography due to the non-diffuse bidirectional reflectance distribution function (BRDF) for each color illumination used, which also concur with the existing multispectral BRDF data available for skin.
Differentiable Display Photometric Stereo
Photometric stereo leverages variations in illumination conditions to
reconstruct per-pixel surface normals. The concept of display photometric
stereo, which employs a conventional monitor as an illumination source, has the
potential to overcome limitations often encountered in bulky and
difficult-to-use conventional setups. In this paper, we introduce
Differentiable Display Photometric Stereo (DDPS), a method designed to achieve
high-fidelity normal reconstruction using an off-the-shelf monitor and camera.
DDPS addresses a critical yet often neglected challenge in photometric stereo:
the optimization of display patterns for enhanced normal reconstruction. We
present a differentiable framework that couples basis-illumination image
formation with a photometric-stereo reconstruction method. This facilitates the
learning of display patterns that lead to high-quality normal reconstruction
through automatic differentiation. Addressing the synthetic-real domain gap
inherent in end-to-end optimization, we propose the use of a real-world
photometric-stereo training dataset composed of 3D-printed objects. Moreover,
to mitigate the ill-posedness of photometric stereo, we exploit the linearly
polarized light emitted from the monitor to optically separate diffuse and
specular reflections in the captured images. We demonstrate that DDPS allows
for learning display patterns optimized for a target configuration and is
robust to initialization. We assess DDPS on 3D-printed objects with
ground-truth normals and diverse real-world objects, validating that DDPS
enables effective photometric-stereo reconstruction.
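A toy version of pattern optimization: DDPS differentiates through a full image-formation and reconstruction pipeline, but under a Lambertian model the idea reduces to choosing patterns whose effective light matrix is well-conditioned. The sketch below is a convex stand-in with invented numbers: the monitor is a hypothetical 4x4 grid of point sources with directions D, each pattern is a row of weights in P, and gradient descent fits the effective light matrix P @ D to a well-conditioned target (pattern non-negativity is ignored for brevity).

```python
import numpy as np

ax, ay = np.meshgrid(np.linspace(-0.6, 0.6, 4), np.linspace(-0.6, 0.6, 4))
D = np.stack([ax.ravel(), ay.ravel(), np.ones(16)], axis=1)
D /= np.linalg.norm(D, axis=1, keepdims=True)   # 16 monitor-pixel directions

target = np.eye(3)                  # a well-conditioned effective light matrix
P = np.zeros((3, 16))               # three display patterns, one per row
for _ in range(300):
    resid = P @ D - target
    P -= 0.05 * 2.0 * resid @ D.T   # gradient of ||P @ D - target||_F^2
print(np.linalg.norm(P @ D - target))   # effectively zero
```

In the paper the loss is instead a reconstruction error measured after a full differentiable photometric-stereo solve, so the learned patterns adapt to the target configuration rather than to a fixed target matrix.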
Refractive shape from light field distortion
Acquiring transparent, refractive objects is challenging as these kinds of objects can only be observed by analyzing the distortion of reference background patterns. We present a new, single-image approach to reconstructing thin transparent surfaces, such as thin solids or surfaces of fluids. Our method is based on observing the distortion of light field background illumination. Light field probes have the potential to encode up to four dimensions in varying colors and intensities: spatial and angular variation on the probe surface; commonly employed reference patterns are only two-dimensional by coding either position or angle on the probe. We show that the additional information can be used to reconstruct refractive surface normals and a sparse set of control points from a single photograph.
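Why a decoded ray pair gives the normal: in a single-refraction approximation (e.g. a fluid surface), Snell's law in vector form implies eta*i - t is parallel to the surface normal, so once the probe reveals both the incident ray i and the refracted ray t, the normal follows directly. A numeric check with hypothetical numbers and an assumed known refractive index:

```python
import numpy as np

def refract(i, n, eta):
    """Snell's law in vector form (eta = n1/n2); i, n unit, i.n < 0."""
    c = -i.dot(n)
    s2 = eta**2 * (1.0 - c**2)
    return eta * i + (eta * c - np.sqrt(1.0 - s2)) * n

eta = 1.0 / 1.5                          # air into a glass/water-like medium
n_true = np.array([0.2, -0.1, 0.97]); n_true /= np.linalg.norm(n_true)
i = np.array([0.1, 0.3, -0.95]); i /= np.linalg.norm(i)   # downward ray

t = refract(i, n_true, eta)              # ray decoded from the probe
n_hat = eta * i - t                      # parallel to the normal
n_hat /= np.linalg.norm(n_hat)
print(np.allclose(n_hat, n_true))        # True
```

Thin solids refract twice (entry and exit), which is why the paper reconstructs surfaces under a thin-surface assumption rather than arbitrary solids.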
Physics vs. Learned Priors: Rethinking Camera and Algorithm Design for Task-Specific Imaging
Cameras were originally designed using physics-based heuristics to capture
aesthetic images. In recent years, there has been a transformation in camera
design from being purely physics-driven to increasingly data-driven and
task-specific. In this paper, we present a framework to understand the building
blocks of this nascent field of end-to-end design of camera hardware and
algorithms. As part of this framework, we show how methods that exploit both
physics and data have become prevalent in imaging and computer vision,
underscoring a key trend that will continue to dominate the future of
task-specific camera design. Finally, we share current barriers to progress in
end-to-end design, and hypothesize how these barriers can be overcome.