    New constraints on data-closeness and needle map consistency for shape-from-shading

    This paper makes two contributions to the problem of needle-map recovery using shape-from-shading. First, we provide a geometric update procedure which allows the image irradiance equation to be satisfied as a hard constraint. This not only improves the data-closeness of the recovered needle-map, but also removes the need for extensive parameter tuning. Second, we exploit the improved controllability of the new shape-from-shading process to investigate several types of needle-map consistency constraint. The first set of constraints is based on needle-map smoothness. The second avenue of investigation uses curvature information to impose topographic constraints. Third, we explore ways in which the needle-map can be recovered so as to be consistent with the image gradient field. In each case we explore a variety of robust error measures and consistency weighting schemes that can be used to impose the desired constraints on the recovered needle-map. We provide an experimental assessment of the new shape-from-shading framework on both real-world images and synthetic images with known ground-truth surface normals. The main conclusion drawn from our analysis is that the data-closeness constraint improves the efficiency of shape-from-shading, and that both the topographic and gradient consistency constraints improve the fidelity of the recovered needle-map.
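
    The hard data-closeness constraint can be made concrete under a Lambertian model: after each smoothing update, the normal is rotated back onto the cone of directions whose angle with the light source matches the observed intensity, so the image irradiance equation holds exactly. Below is a minimal per-pixel sketch of such a projection, assuming a unit light vector s, intensities normalized to [0, 1], and Lambertian reflectance; the function and variable names are ours, not the paper's.

    ```python
    import numpy as np

    def project_to_irradiance_cone(n, s, intensity):
        """Rotate normal n onto the cone of directions making angle
        arccos(intensity) with the light direction s (Lambertian model),
        so the image irradiance equation n . s = I holds exactly after
        any smoothing step."""
        s = s / np.linalg.norm(s)
        n = n / np.linalg.norm(n)
        target = np.arccos(np.clip(intensity, 0.0, 1.0))  # required angle to s
        # Split n into components parallel and perpendicular to s.
        n_par = np.dot(n, s) * s
        n_perp = n - n_par
        perp_norm = np.linalg.norm(n_perp)
        if perp_norm < 1e-12:
            # n is (anti)parallel to s: pick an arbitrary perpendicular azimuth.
            i = np.argmin(np.abs(s))
            n_perp = np.eye(3)[i] - s[i] * s
            n_perp /= np.linalg.norm(n_perp)
        else:
            n_perp /= perp_norm
        # Rebuild the normal at the target angle, preserving its azimuth about s.
        return np.cos(target) * s + np.sin(target) * n_perp
    ```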

    SfSNet: Learning Shape, Reflectance and Illuminance of Faces in the Wild

    We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained human face image into shape, reflectance and illuminance. SfSNet is designed to reflect a physical Lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real-world images, which allows the network to capture low-frequency variations from synthetic images and high-frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal; these are used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and for independent normal and illumination estimation. Comment: Accepted to CVPR 2018 (Spotlight)
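
    Physical Lambertian rendering models of this kind are commonly implemented with a second-order spherical-harmonics (SH) lighting approximation: the image is reconstructed as albedo times the shading obtained by projecting per-pixel normals onto nine SH basis functions, and a photometric loss compares the reconstruction with the input. A minimal NumPy sketch, assuming grayscale albedo and a 9-coefficient light vector (names are ours, not the paper's):

    ```python
    import numpy as np

    def sh_basis(normals):
        """Second-order spherical-harmonics basis (9 terms) evaluated at
        per-pixel unit normals of shape (H, W, 3). Constants follow the
        standard real-SH Lambertian approximation."""
        x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
        return np.stack([
            np.full_like(x, 0.2821),                # Y_00
            0.4886 * y, 0.4886 * z, 0.4886 * x,     # first order
            1.0925 * x * y, 1.0925 * y * z,         # second order
            0.3154 * (3 * z**2 - 1),
            1.0925 * x * z, 0.5462 * (x**2 - y**2),
        ], axis=-1)                                 # (H, W, 9)

    def render_lambertian(albedo, normals, light):
        """albedo: (H, W), normals: (H, W, 3), light: (9,) SH coefficients."""
        shading = sh_basis(normals) @ light         # (H, W)
        return albedo * shading

    def photometric_loss(image, albedo, normals, light):
        """L1 reconstruction loss driving learning from unlabeled images."""
        return np.abs(image - render_lambertian(albedo, normals, light)).mean()
    ```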

    Photometric Depth Super-Resolution

    This study explores the use of photometric techniques (shape-from-shading and uncalibrated photometric stereo) for upsampling the low-resolution depth map from an RGB-D sensor to the higher resolution of the companion RGB image. A single-shot variational approach is first put forward, which is effective as long as the target's reflectance is piecewise-constant. It is then shown that this dependency upon a specific reflectance model can be relaxed by focusing on a specific class of objects (e.g., faces) and delegating reflectance estimation to a deep neural network. A multi-shot strategy based on randomly varying lighting conditions is eventually discussed. It requires no training or prior on the reflectance, but this comes at the price of a dedicated acquisition setup. Both quantitative and qualitative evaluations illustrate the effectiveness of the proposed methods on synthetic and real-world scenarios. Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019. First three authors contribute equally
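
    For intuition, a single-shot variational formulation of this kind can be written as a joint energy over the super-resolved depth z and the albedo ρ. The functional below is our hedged reconstruction, not necessarily the paper's exact one; the shading function s(∇z), downsampling operator K, low-resolution measurement z₀ and weights μ, λ are assumed symbols:

    ```latex
    \min_{z,\,\rho}\;
    \int_\Omega \bigl(\rho\, s(\nabla z) - I\bigr)^2 \,\mathrm{d}x
    \;+\; \mu \int_\Omega \bigl(Kz - z_0\bigr)^2 \,\mathrm{d}x
    \;+\; \lambda\, \operatorname{TV}(\rho)
    ```

    The total-variation term encodes the piecewise-constant reflectance assumption; replacing ρ by a network prediction is what relaxes that assumption in the second, class-specific variant.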

    A mathematical and algorithmic study of the Lambertian SFS problem for orthographic and pinhole cameras

    This report proposes a mathematical and algorithmic study of the Lambertian SFS problem for orthographic and pinhole cameras. Our approach is based upon the notion of viscosity solutions of Hamilton-Jacobi equations. This approach provides a mathematical framework in which we can prove the well-posedness of the problem (proof of the existence of a solution and characterization of all solutions). This mathematical approach also allows us to prove the correctness of our methods. In particular, we describe a simple monotone stability condition for the studied decentered schemes and prove the convergence of their solutions toward the viscosity solution of the associated Hamilton-Jacobi-Bellman equation. We also show that this theory naturally applies to SFS problems. Our work extends previous work in the SFS area in three directions. First, it models the camera both as orthographic and as perspective (pinhole), whereas most authors assume an orthographic projection (see [refs] for a panorama of the SFS problem up to 1989 and [refs] for more recent surveys); thus we extend the applicability of shape-from-shading methods to more realistic acquisition models. In particular, it extends the work of [refs]. Also, by introducing a «generic» Hamiltonian, we work in a general framework allowing us to deal with both models, thereby simplifying the formalization of the problem. Second, it gives some novel mathematical formulations of this problem, yielding new partial differential equations; results about the existence and uniqueness of their solutions are also obtained. Third, it allows us to come up with two new generic algorithms for computing numerical approximations of the "continuous" solution (of the «generic SFS problem»), as well as a proof of their convergence toward that solution. Moreover, our two generic algorithms are able to deal with discontinuous images as well as images containing black shadows. One of the algorithms we propose in this report also appears to be the most effective iterative algorithm in the SFS literature. From a more general viewpoint, our numerical results follow from a new method for solving Hamilton-Jacobi-Bellman equations. We propose two decentered finite-difference schemes, detail the proofs of their stability and consistency, and prove the convergence of their associated algorithms.
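
    To give a flavor of such monotone decentered schemes, here is a minimal NumPy sketch for the simplest instance: the orthographic Lambertian SFS problem with frontal lighting, which reduces to the eikonal equation |∇u| = √(1/I² − 1). The Jacobi-style iteration of the upwind local solver below (in the spirit of Rouy-Tourin / fast-sweeping schemes) is our illustration, not the report's algorithm:

    ```python
    import numpy as np

    def sfs_eikonal(I, h=1.0, n_iter=5000):
        """Monotone upwind (decentered) solver for |grad u| = f with
        f = sqrt(1/I**2 - 1), the orthographic frontal-light SFS case.
        Jacobi iteration of the local upwind solver: values only
        decrease, and the iterates converge toward the viscosity
        solution with u = 0 imposed on the image border."""
        f = np.sqrt(np.maximum(1.0 / np.clip(I, 1e-3, 1.0) ** 2 - 1.0, 1e-8))
        u = np.full(I.shape, 1e10)
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0  # Dirichlet boundary
        for _ in range(n_iter):
            up = np.pad(u, 1, mode="edge")
            a = np.minimum(up[:-2, 1:-1], up[2:, 1:-1])  # upwind neighbour, x-axis
            b = np.minimum(up[1:-1, :-2], up[1:-1, 2:])  # upwind neighbour, y-axis
            lo, hi = np.minimum(a, b), np.maximum(a, b)
            one_sided = lo + f * h                       # gradient from one axis only
            disc = np.maximum(2.0 * (f * h) ** 2 - (a - b) ** 2, 0.0)
            two_sided = 0.5 * (a + b + np.sqrt(disc))    # quadratic two-axis solve
            u = np.minimum(u, np.where(hi - lo >= f * h, one_sided, two_sided))
        return u
    ```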

    CNN-based Real-time Dense Face Reconstruction with Inverse-rendered Photo-realistic Face Images

    With the power of convolutional neural networks (CNNs), CNN-based face reconstruction has recently shown promising performance in recovering detailed face shape from 2D face images. The success of CNN-based methods relies on a large amount of labeled data. The state of the art synthesizes such data using a coarse morphable face model, which however has difficulty generating detailed photo-realistic images of faces (with wrinkles). This paper presents a novel face data generation method. Specifically, we render a large number of photo-realistic face images with different attributes based on inverse rendering. Furthermore, we construct a fine-detailed face image dataset by transferring different scales of details from one image to another. We also construct a large number of video-type adjacent frame pairs by simulating the distribution of real video data. With these carefully constructed datasets, we propose a coarse-to-fine learning framework consisting of three convolutional networks. The networks are trained for real-time detailed 3D face reconstruction from monocular video as well as from a single image. Extensive experimental results demonstrate that our framework can produce high-quality reconstructions with much less computation time than the state of the art. Moreover, our method is robust to pose, expression and lighting thanks to the diversity of the data. Comment: Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence, 201
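
    To illustrate the idea of transferring different scales of detail between face images, here is a minimal sketch based on a Gaussian base/detail decomposition; the paper's actual transfer is built on inverse rendering, so this is an illustrative stand-in only, with names of our choosing:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def transfer_detail(source, target, sigma=3.0):
        """Move fine-scale detail (e.g. wrinkles) from `source` onto
        `target` by swapping the high-frequency residual of a Gaussian
        base/detail decomposition. `sigma` controls the scale of the
        detail layer that is transferred; images are floats in [0, 1]."""
        src_base = gaussian_filter(source, sigma)
        tgt_base = gaussian_filter(target, sigma)
        detail = source - src_base        # high-frequency layer of the source
        return np.clip(tgt_base + detail, 0.0, 1.0)
    ```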

    3D Face Reconstruction from Light Field Images: A Model-free Approach

    Reconstructing 3D facial geometry from a single RGB image has recently attracted wide research interest. However, it is still an ill-posed problem, and most methods rely on prior models, which undermines the accuracy of the recovered 3D faces. In this paper, we exploit the epipolar plane images (EPIs) obtained from light field cameras and learn CNN models that recover horizontal and vertical 3D facial curves from the respective horizontal and vertical EPIs. Our 3D face reconstruction network (FaceLFnet) comprises a densely connected architecture that learns accurate 3D facial curves from low-resolution EPIs. To train the proposed FaceLFnets from scratch, we synthesize photo-realistic light field images from 3D facial scans. The curve-by-curve 3D face estimation approach allows the networks to learn from only 14K images of 80 identities, which still comprise over 11 million EPIs/curves. The estimated facial curves are merged into a single point cloud, to which a surface is fitted to obtain the final 3D face. Our method is model-free, requires only a few training samples to learn FaceLFnet, and can reconstruct 3D faces with high accuracy from single light field images under varying poses, expressions and lighting conditions. Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces reconstruction errors by over 20% compared to the recent state of the art.
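
    For concreteness, EPIs are simply 2D slices of the 4D light field: fixing one angular and one spatial coordinate and stacking the remaining two yields an image whose line slopes encode depth along that row or column. A minimal sketch, assuming the light field is stored as an array L[u, v, y, x] of sub-aperture views (our layout, not the paper's):

    ```python
    import numpy as np

    def horizontal_epi(L, v, y):
        """Horizontal epipolar-plane image from a 4D light field stored
        as L[u, v, y, x] (angular row/column u, v; spatial row/column
        y, x). Fixing the angular row v and spatial row y gives the
        (u, x) slice for that image row."""
        return L[:, v, y, :]            # shape (U, X)

    def vertical_epi(L, u, x):
        """Vertical EPI: fix the angular column u and spatial column x."""
        return L[u, :, :, x]            # shape (V, Y)
    ```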