Ear-to-ear Capture of Facial Intrinsics
We present a practical approach to capturing ear-to-ear face models
comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular
albedo). Our approach is a hybrid of geometric and photometric methods and
requires no geometric calibration. Photometric measurements made in a
lightstage are used to estimate view-dependent, high-resolution normal maps. We
overcome the problem of having a single photometric viewpoint by capturing in
multiple poses. We use uncalibrated multiview stereo to estimate a coarse base
mesh to which the photometric views are registered. We propose a novel approach
to robustly stitching surface normal and intrinsic texture data into a
seamless, complete and highly detailed face model. The resulting relightable
models provide photorealistic renderings in any view.
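The lightstage step above rests on classic calibrated Lambertian photometric stereo. As a rough sketch of that building block (the function name and array layout are mine; the paper's actual pipeline is view-dependent and considerably more involved):

```python
import numpy as np

def lambertian_photometric_stereo(intensities, light_dirs):
    """Least-squares Lambertian photometric stereo.

    intensities: (K, P) -- K images, P pixels.
    light_dirs:  (K, 3) -- unit lighting directions (calibrated).
    Solves I = L @ (albedo * n) per pixel in the least-squares sense,
    then splits the result into a unit normal and a scalar albedo.
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, P)
    albedo = np.linalg.norm(g, axis=0)                            # (P,)
    normals = g / np.maximum(albedo, 1e-12)                       # (3, P)
    return normals, albedo
```

With three or more non-coplanar lights the system is overdetermined, and shadowed or specular samples can be suppressed with robust per-pixel weighting.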
Polarimetric Multi-View Inverse Rendering
A polarization camera has great potential for 3D reconstruction since the
angle of polarization (AoP) of reflected light is related to an object's
surface normal. In this paper, we propose a novel 3D reconstruction method
called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that
effectively exploits geometric, photometric, and polarimetric cues extracted
from input multi-view color polarization images. We first estimate camera poses
and an initial 3D model by geometric reconstruction with a standard
structure-from-motion and multi-view stereo pipeline. We then refine the
initial model by optimizing photometric and polarimetric rendering errors using
multi-view RGB and AoP images, where we propose a novel polarimetric rendering
cost function that enables us to effectively constrain each estimated surface
vertex's normal while considering four possible ambiguous azimuth angles
revealed from the AoP measurement. Experimental results using both synthetic
and real data demonstrate that our Polarimetric MVIR can reconstruct a detailed
3D shape without assuming a specific polarized reflection depending on the
material. Comment: Paper accepted in ECCV 2020.
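The four ambiguous azimuth angles arise because the AoP is defined modulo pi and because diffuse and specular polarization differ by a pi/2 phase. A minimal sketch of the candidates and a simplified min-over-candidates penalty (both function names are hypothetical stand-ins; the paper's rendering cost is richer):

```python
import numpy as np

def azimuth_candidates(aop):
    """Four candidate surface azimuths implied by an angle-of-polarization
    measurement (radians): the pi ambiguity combined with the pi/2 shift
    between diffuse and specular polarization."""
    offsets = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    return (np.asarray(aop, float)[..., None] + offsets) % (2 * np.pi)

def polarimetric_cost(normal_azimuth, aop):
    """Penalty of a hypothesized normal azimuth against the closest of the
    four candidates (a simplified stand-in for the paper's cost)."""
    diff = np.asarray(normal_azimuth, float)[..., None] - azimuth_candidates(aop)
    diff = np.abs((diff + np.pi) % (2 * np.pi) - np.pi)  # wrap to [0, pi]
    return np.min(diff, axis=-1) ** 2
```

Taking the minimum over the four candidates lets the optimizer constrain each vertex normal without committing to a diffuse or specular interpretation in advance.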
Polarimetric Multi-View Inverse Rendering
A polarization camera has great potential for 3D reconstruction since the
angle of polarization (AoP) and the degree of polarization (DoP) of reflected
light are related to an object's surface normal. In this paper, we propose a
novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering
(Polarimetric MVIR) that effectively exploits geometric, photometric, and
polarimetric cues extracted from input multi-view color-polarization images. We
first estimate camera poses and an initial 3D model by geometric reconstruction
with a standard structure-from-motion and multi-view stereo pipeline. We then
refine the initial model by optimizing photometric rendering errors and
polarimetric errors using multi-view RGB, AoP, and DoP images, where we propose
a novel polarimetric cost function that enables an effective constraint on the
estimated surface normal of each vertex, while considering four possible
ambiguous azimuth angles revealed from the AoP measurement. The weight for the
polarimetric cost is effectively determined based on the DoP measurement, which
is regarded as the reliability of polarimetric information. Experimental
results using both synthetic and real data demonstrate that our Polarimetric
MVIR can reconstruct a detailed 3D shape without assuming a specific surface
material and lighting condition. Comment: Paper accepted in IEEE Transactions
on Pattern Analysis and Machine Intelligence (2022). arXiv admin note:
substantial text overlap with arXiv:2007.0883
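The DoP-based weighting can be sketched as follows, assuming a simple linear weight (the paper derives its own reliability weight from the DoP measurement, so this is an illustration, not the authors' formula):

```python
import numpy as np

def weighted_polarimetric_cost(normal_azimuth, aop, dop):
    """Polarimetric penalty scaled by the degree of polarization, treated
    here as a per-pixel reliability. The linear weighting is an assumption.
    Low-DoP pixels (weak, unreliable polarization signal) contribute little,
    so the photometric term dominates there."""
    offsets = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    cand = (np.asarray(aop, float)[..., None] + offsets) % (2 * np.pi)
    diff = np.asarray(normal_azimuth, float)[..., None] - cand
    diff = np.abs((diff + np.pi) % (2 * np.pi) - np.pi)  # wrap to [0, pi]
    return np.asarray(dop, float) * np.min(diff, axis=-1) ** 2
```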
Investigating 3D Reconstruction of Non-Collaborative Surfaces through Photogrammetry and Photometric Stereo
Abstract. 3D digital reconstruction techniques are extensively used for quality control purposes. Among them, photogrammetry and photometric stereo methods have long been used with success in several application fields. However, generating highly detailed and reliable micro-measurements of non-collaborative surfaces is still an open issue. In these cases, photogrammetry can provide accurate low-frequency 3D information, whereas it struggles to extract reliable high-frequency details. Conversely, photometric stereo can recover a very detailed surface topography, although global surface deformation is often present. In this paper, we present the preliminary results of an ongoing project aiming to combine photogrammetry and photometric stereo in a synergetic fusion of the two techniques. In particular, we introduce the main concept design behind an image acquisition system we developed to capture images from different positions and under different lighting conditions, as required by the photogrammetry and photometric stereo techniques. We show the benefit of such a combination through experimental tests, which demonstrate that the proposed method recovers the surface topography at the same high resolution achievable with photometric stereo while preserving the photogrammetric accuracy. Furthermore, we exploit light directionality and multiple light sources to improve the quality of dense image matching on poorly textured surfaces.
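One simple way to realize such a fusion is a frequency split: keep the low frequencies of the photogrammetric depth (accurate global shape) and the high frequencies of the photometric-stereo depth (fine detail). A minimal NumPy sketch, where the box-blur crossover and the `radius` parameter are assumptions of this illustration rather than the authors' method:

```python
import numpy as np

def _box_blur(img, radius):
    """Separable box blur with edge replication -- a minimal low-pass
    stand-in for a proper Gaussian filter."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_depth_maps(photogrammetric_depth, ps_depth, radius=4):
    """Low frequencies from photogrammetry + high frequencies from
    photometric stereo; radius sets the crossover scale in pixels."""
    low = _box_blur(photogrammetric_depth, radius)
    high = ps_depth - _box_blur(ps_depth, radius)
    return low + high
```

The crossover scale is the key design choice: it should sit below the spatial frequency at which photometric stereo's global deformation becomes significant but above the scale of the detail photogrammetry misses.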
Photo-Realistic Facial Details Synthesis from Single Image
We present a single-image 3D face synthesis technique that can handle
challenging facial expressions while recovering fine geometric details. Our
technique employs expression analysis for proxy face geometry generation and
combines supervised and unsupervised learning for facial detail synthesis. On
proxy generation, we conduct emotion prediction to determine a new
expression-informed proxy. On detail synthesis, we present a Deep Facial Detail
Net (DFDN) based on Conditional Generative Adversarial Net (CGAN) that employs
both geometry and appearance loss functions. For geometry, we capture 366
high-quality 3D scans from 122 different subjects under 3 facial expressions.
For appearance, we use additional 20K in-the-wild face images and apply
image-based rendering to accommodate lighting variations. Comprehensive
experiments demonstrate that our framework can produce high-quality 3D faces
with realistic details under challenging facial expressions
Polarized 3D: High-Quality Depth Sensing with Polarization Cues
Coarse depth maps can be enhanced by using the shape information from polarization cues. We propose a framework to combine surface normals from polarization (hereafter polarization normals) with an aligned depth map. Polarization normals have not been used for depth enhancement before, because they suffer from physics-based artifacts such as azimuthal ambiguity, refractive distortion, and fronto-parallel signal degradation. We propose a framework to overcome these key challenges, allowing the benefits of polarization to be used to enhance depth maps. Our results demonstrate improvement with respect to state-of-the-art 3D reconstruction techniques. Funding: Charles Stark Draper Laboratory (Doctoral Fellowship); Singapore Ministry of Education (Academic Research Foundation MOE2013-T2-1-159); Singapore National Research Foundation (Singapore University of Technology and Design).
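For context, the zenith component of a polarization normal is commonly recovered from the degree of polarization under the standard Fresnel model for diffuse reflection. A sketch of that component only (the refractive index `n = 1.5` is an assumed default, and the azimuthal ambiguity and other artifacts the paper addresses remain):

```python
import numpy as np

def diffuse_dop(theta, n=1.5):
    """Fresnel-based degree of polarization of diffuse reflection at
    zenith angle theta (radians) for refractive index n."""
    s2 = np.sin(theta) ** 2
    num = (n - 1.0 / n) ** 2 * s2
    den = (2 + 2 * n ** 2 - (n + 1.0 / n) ** 2 * s2
           + 4 * np.cos(theta) * np.sqrt(n ** 2 - s2))
    return num / den

def zenith_from_dop(rho, n=1.5):
    """Invert the model numerically via a dense lookup table; the relation
    is monotone for diffuse reflection, so the inversion is well-posed."""
    thetas = np.linspace(0.0, np.pi / 2 - 1e-3, 10000)
    table = diffuse_dop(thetas, n)
    rho = np.atleast_1d(np.asarray(rho, float))
    idx = np.argmin(np.abs(table - rho[..., None]), axis=-1)
    return thetas[idx]
```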
Photometric Depth Super-Resolution
This study explores the use of photometric techniques (shape-from-shading and
uncalibrated photometric stereo) for upsampling the low-resolution depth map
from an RGB-D sensor to the higher resolution of the companion RGB image. A
single-shot variational approach is first put forward, which is effective as
long as the target's reflectance is piecewise-constant. It is then shown that
this dependency upon a specific reflectance model can be relaxed by focusing on
a specific class of objects (e.g., faces), and delegate reflectance estimation
to a deep neural network. A multi-shot strategy based on randomly varying
lighting conditions is eventually discussed. It requires no training or prior
on the reflectance, yet this comes at the price of a dedicated acquisition
setup. Both quantitative and qualitative evaluations illustrate the
effectiveness of the proposed methods on synthetic and real-world scenarios.Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence
(T-PAMI), 2019. First three authors contribute equall
An improved photometric stereo through distance estimation and light vector optimization from diffused maxima region
© 2013 Elsevier B.V. All rights reserved. Although photometric stereo offers an attractive technique for acquiring 3D data using low-cost equipment, inherent limitations in the methodology have limited its practical application, particularly in measurement or metrology tasks. Here we address this issue. Traditional photometric stereo assumes that the lighting direction at every pixel is the same, which is not usually the case in real applications, especially where the size of the object being observed is comparable to the working distance. Such imperfections of the illumination may make the subsequent reconstruction procedures used to obtain the 3D shape of the scene prone to low-frequency geometric distortion and systematic error (bias). Also, the 3D reconstruction of the object results in a geometric shape with an unknown scale. To overcome these problems, a novel method of estimating the distance of the object from the camera is developed, which employs only the photometric stereo images themselves, without using any additional imaging modality. The method first identifies the Lambertian diffuse-maxima region to calculate the object's distance from the camera, from which the corrected per-pixel light vector can be derived and the absolute dimensions of the object subsequently estimated. We also propose a new calibration process to allow a dynamic (as an object moves in the field of view) calculation of light vectors for each pixel with little additional computational cost. Experiments performed on synthetic as well as real data demonstrate that the proposed approach offers improved performance, achieving a reduction in the estimated surface-normal error of up to 45% as well as in the mean height error of the reconstructed surface of up to 6 mm. In addition, compared to traditional photometric stereo, the proposed method reduces the mean angular and height errors so that they are low, constant, and independent of the position of the object within the normal working range.
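Once the object distance is known, the per-pixel correction the method relies on amounts to computing a distinct lighting direction for every surface point rather than one shared direction. A minimal sketch (the function name and coordinate conventions are mine):

```python
import numpy as np

def per_pixel_light_dirs(source_pos, points):
    """Per-pixel unit lighting directions for a near point source.

    source_pos: (3,) lamp position in camera coordinates.
    points: (..., 3) 3D surface points, known once the object distance
    has been estimated (e.g. from the diffuse-maxima region).
    Traditional photometric stereo would use one shared direction for
    all pixels; here each point gets its own corrected vector.
    """
    v = np.asarray(source_pos, float) - np.asarray(points, float)
    return v / np.linalg.norm(v, axis=-1, keepdims=True)
```

Feeding these corrected directions into the per-pixel normal solve removes the low-frequency bias that a single global light direction introduces when the object size is comparable to the working distance.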