804 research outputs found

    Photometric Depth Super-Resolution

    This study explores the use of photometric techniques (shape-from-shading and uncalibrated photometric stereo) for upsampling the low-resolution depth map from an RGB-D sensor to the higher resolution of the companion RGB image. A single-shot variational approach is first put forward, which is effective as long as the target's reflectance is piecewise-constant. It is then shown that this dependency upon a specific reflectance model can be relaxed by focusing on a specific class of objects (e.g., faces) and delegating reflectance estimation to a deep neural network. A multi-shot strategy based on randomly varying lighting conditions is eventually discussed: it requires no training or prior on the reflectance, but this comes at the price of a dedicated acquisition setup. Both quantitative and qualitative evaluations illustrate the effectiveness of the proposed methods on synthetic and real-world scenarios. Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019. First three authors contribute equally.
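A common non-photometric baseline for the same task (not one of the methods proposed in this paper) is joint bilateral upsampling, which lifts the low-resolution depth map to the RGB resolution while using the color image as an edge guide. A minimal sketch, with illustrative parameter names and wrap-around boundary handling:

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, rgb_hi, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Upsample a low-res depth map to the resolution of a guidance RGB image.

    Each output depth is a weighted average of nearby depths, with weights
    combining a spatial Gaussian and a range Gaussian on guidance-color
    differences, so depth edges snap to color edges.
    """
    H, W = rgb_hi.shape[:2]
    h, w = depth_lo.shape
    # Nearest-neighbour upsample of depth to the guidance resolution.
    yi = np.arange(H) * h // H
    xi = np.arange(W) * w // W
    depth = depth_lo[np.ix_(yi, xi)].astype(float)
    out = np.zeros((H, W))
    weight = np.zeros((H, W))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_d = np.roll(depth, (dy, dx), axis=(0, 1))
            shifted_c = np.roll(rgb_hi, (dy, dx), axis=(0, 1))
            ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
            wr = np.exp(-np.sum((rgb_hi - shifted_c) ** 2, axis=-1)
                        / (2 * sigma_r ** 2))
            out += ws * wr * shifted_d
            weight += ws * wr
    return out / weight
```

Unlike the photometric approaches above, this baseline can only transfer edges from the color image; it cannot hallucinate fine geometric detail from shading.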

    Height from Photometric Ratio with Model-based Light Source Selection

    In this paper, we present a photometric stereo algorithm for estimating surface height. We follow recent work that uses photometric ratios to obtain a linear formulation relating surface gradients and image intensity. Using smoothed finite difference approximations for the surface gradient, we are able to express surface height recovery as a linear least squares problem that is large but sparse. In order to make the method practically useful, we combine it with a model-based approach that excludes observations which deviate from the assumptions made by the image formation model. Despite its simplicity, we show that our algorithm provides high-quality surface height estimates even for objects with highly non-Lambertian appearance. We evaluate the method on both synthetic images with ground truth and challenging real images that contain strong specular reflections and cast shadows.
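The sparse linear least-squares structure of height-from-gradient recovery can be illustrated with a minimal integration sketch. This uses plain forward differences rather than the paper's smoothed approximations and photometric ratios, and all names are illustrative:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def height_from_gradients(p, q):
    """Integrate a gradient field (p = dz/dx, q = dz/dy) into a height map z.

    Each forward difference contributes one sparse linear equation in the
    unknown per-pixel heights; a single anchor fixes the free constant.
    """
    h, w = p.shape
    n = h * w
    idx = lambda i, j: i * w + j          # row-major pixel index
    rows, cols, vals, b = [], [], [], []
    r = 0
    # Horizontal constraints: z[i, j+1] - z[i, j] = p[i, j]
    for i in range(h):
        for j in range(w - 1):
            rows += [r, r]; cols += [idx(i, j + 1), idx(i, j)]; vals += [1.0, -1.0]
            b.append(p[i, j]); r += 1
    # Vertical constraints: z[i+1, j] - z[i, j] = q[i, j]
    for i in range(h - 1):
        for j in range(w):
            rows += [r, r]; cols += [idx(i + 1, j), idx(i, j)]; vals += [1.0, -1.0]
            b.append(q[i, j]); r += 1
    # Anchor the height's free constant: z[0, 0] = 0
    rows.append(r); cols.append(idx(0, 0)); vals.append(1.0); b.append(0.0); r += 1
    A = coo_matrix((vals, (rows, cols)), shape=(r, n)).tocsr()
    z = lsqr(A, np.asarray(b))[0]
    return z.reshape(h, w)
```

The system has roughly two equations per pixel but only a handful of non-zeros per row, which is what makes large instances tractable with sparse solvers.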

    Ear-to-ear Capture of Facial Intrinsics

    We present a practical approach to capturing ear-to-ear face models comprising both 3D meshes and intrinsic textures (i.e. diffuse and specular albedo). Our approach is a hybrid of geometric and photometric methods and requires no geometric calibration. Photometric measurements made in a lightstage are used to estimate view-dependent high-resolution normal maps. We overcome the problem of having a single photometric viewpoint by capturing in multiple poses. We use uncalibrated multiview stereo to estimate a coarse base mesh to which the photometric views are registered. We propose a novel approach to robustly stitch surface normal and intrinsic texture data into a seamless, complete, and highly detailed face model. The resulting relightable models provide photorealistic renderings in any view.

    Practical SVBRDF Acquisition of 3D Objects with Unstructured Flash Photography

    Capturing spatially-varying bidirectional reflectance distribution functions (SVBRDFs) of 3D objects with just a single, hand-held camera (such as an off-the-shelf smartphone or a DSLR camera) is a difficult, open problem. Previous works are either limited to planar geometry, or rely on previously scanned 3D geometry, thus limiting their practicality. There are several technical challenges that need to be overcome: First, the built-in flash of a camera is almost colocated with the lens, and at a fixed position; this severely hampers sampling procedures in the light-view space. Moreover, the near-field flash lights the object partially and unevenly. In terms of geometry, existing multiview stereo techniques assume diffuse reflectance only, which leads to overly smoothed 3D reconstructions, as we show in this paper. We present a simple yet powerful framework that removes the need for expensive, dedicated hardware, enabling practical acquisition of SVBRDF information from real-world, 3D objects with a single, off-the-shelf camera with a built-in flash. In addition, by removing the diffuse reflection assumption and leveraging instead such SVBRDF information, our method outputs high-quality 3D geometry reconstructions, including more accurate high-frequency details than state-of-the-art multiview stereo techniques. We formulate the joint reconstruction of SVBRDFs, shading normals, and 3D geometry as a multi-stage, iterative inverse-rendering reconstruction pipeline. Our method is also directly applicable to any existing multiview 3D reconstruction technique. We present results of captured objects with complex geometry and reflectance; we also validate our method numerically against other existing approaches that rely on dedicated hardware, additional sources of information, or both.
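The colocated-flash constraint mentioned above can be made concrete with a toy shading model (not the paper's SVBRDF model; names and the Lambertian-plus-Blinn-Phong choice are illustrative). When the light coincides with the camera, the half vector collapses onto the view direction, which is exactly why the light-view sampling is so restricted:

```python
import numpy as np

def flash_shading(point, normal, cam_pos, kd, ks, alpha):
    """Shade a surface point lit by a flash colocated with the camera.

    Toy Lambertian + normalized Blinn-Phong model with inverse-square
    falloff; because light dir == view dir, the half vector equals the
    view direction, so only one direction per pixel is ever sampled.
    """
    l = cam_pos - point
    r2 = np.dot(l, l)                 # near-field inverse-square falloff
    l = l / np.sqrt(r2)               # light direction == view direction
    h = l                             # half vector degenerates to view dir
    ndl = max(np.dot(normal, l), 0.0)
    ndh = max(np.dot(normal, h), 0.0)
    spec = ks * (alpha + 2.0) / (2.0 * np.pi) * ndh ** alpha
    return (kd / np.pi + spec) * ndl / r2
```

Doubling the flash distance quarters the intensity, illustrating the uneven, partial near-field lighting the abstract points out.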

    3D Reconstruction using Active Illumination

    In this thesis we present a pipeline for 3D model acquisition. Generating 3D models of real-world objects is an important task in computer vision with many applications, such as in 3D design, archaeology, entertainment, and virtual or augmented reality. The contribution of this thesis is threefold: we propose a calibration procedure for the cameras, we describe an approach for capturing and processing photometric normals using gradient illuminations in the hardware set-up, and finally we present a multi-view photometric stereo 3D reconstruction method. In order to obtain accurate results using multi-view and photometric stereo reconstruction, the cameras are calibrated geometrically and photometrically. For acquiring data, a light stage is used. This is a hardware set-up that allows control of the illumination during acquisition. The procedure used to generate appropriate illuminations and to process the acquired data to obtain accurate photometric normals is described. The core of the pipeline is a multi-view photometric stereo reconstruction method. In this method, we first generate a sparse reconstruction using the acquired images and computed normals. In the second step, the information from the normal maps is used to obtain a dense reconstruction of an object's surface. Finally, the reconstructed surface is filtered to remove artifacts introduced by the dense reconstruction step.
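The normal-estimation step at the heart of such a pipeline can be sketched for the classical calibrated Lambertian case (a stand-in for the thesis's gradient-illumination procedure): with known directional lights, each pixel's intensities yield albedo and normal through a small linear least-squares solve.

```python
import numpy as np

def photometric_stereo_normals(images, lights):
    """Estimate per-pixel normals and albedo from Lambertian images.

    images: (k, h, w) intensities under k known directional lights
    lights: (k, 3) unit light directions
    Solves I = L @ g per pixel, where g = albedo * normal.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                       # (k, h*w)
    G = np.linalg.lstsq(lights, I, rcond=None)[0]   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

With more than three lights the per-pixel system is overdetermined, which is what makes the subsequent sparse and dense reconstruction steps robust to measurement noise.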

    Color image-based shape reconstruction of multi-color objects under general illumination conditions

    Humans have the ability to infer the surface reflectance properties and three-dimensional shape of objects from two-dimensional photographs under both simple and complex illumination fields. Unfortunately, the reported algorithms in the area of shape reconstruction require a number of simplifying assumptions that result in poor performance in uncontrolled imaging environments. Of all these simplifications, the assumptions of constant surface reflectance, globally consistent illumination, and multiple surface views are the most likely to be contradicted in typical environments. In this dissertation, three automatic algorithms for recovering surface shape under non-constant reflectance from a single color image are presented. In addition, a novel method for the identification and removal of shadows from simple scenes is discussed.
    In existing shape reconstruction algorithms for surfaces of constant reflectance, constraints based on the assumed smoothness of the objects are not explicitly used. Through explicit incorporation of surface smoothness properties, the algorithms presented in this work are able to overcome the limitations of the previously reported algorithms and accurately estimate shape in the presence of varying reflectance. The three techniques developed for recovering the shape of multi-color surfaces differ in the method through which they exploit the surface smoothness property. They are summarized below:
    • Surface Recovery using Pre-Segmentation - this algorithm pre-segments the image into distinct color regions and employs smoothness constraints at the color-change boundaries to constrain and recover surface shape. This technique is computationally efficient and works well for images with distinct color regions, but does not perform well in the presence of high-frequency color textures that are difficult to segment.
    • Surface Recovery via Normal Propagation - this approach utilizes local gradient information to propagate a smooth surface solution from points of known orientation. While solution propagation eliminates the need for color-based image segmentation, the quality of the recovered surface can be degraded by high degrees of image noise due to the reliance on local information.
    • Surface Recovery by Global Variational Optimization - this algorithm utilizes a normal gradient smoothness constraint in a non-linear optimization strategy to iteratively solve for the globally optimal object surface. Because of its global nature, this approach is much less sensitive to noise than normal propagation, but requires significantly more computational resources.
    Results acquired through application of the above algorithms to various synthetic and real image data sets are presented for qualitative evaluation. A quantitative analysis of the algorithms is also discussed for quadratic shapes. The robustness of the three approaches to factors such as segmentation error and random image noise is also explored.
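The variational idea behind the third algorithm can be illustrated by a much-simplified sketch: minimize a data term plus a normal-smoothness term via fixed-point iteration. This is a stand-in for the dissertation's non-linear optimization; the Jacobi-style update, the periodic boundary handling, and all names are illustrative:

```python
import numpy as np

def smooth_normal_field(n_obs, lam=1.0, iters=200):
    """Denoise a unit-normal field by minimizing
        E(n) = sum |n - n_obs|^2 + lam * sum |grad n|^2
    with a Jacobi-style fixed-point iteration, reprojecting onto
    unit length after each step (periodic boundaries via np.roll).
    """
    n = n_obs.copy()
    for _ in range(iters):
        # Sum of the four spatial neighbours of each pixel's normal.
        nbr = (np.roll(n, 1, axis=0) + np.roll(n, -1, axis=0)
               + np.roll(n, 1, axis=1) + np.roll(n, -1, axis=1))
        # Jacobi update for (1 + 4*lam) n_i = n_obs_i + lam * sum_j n_j
        n = (n_obs + lam * nbr) / (1.0 + 4.0 * lam)
        n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n
```

Because every pixel is coupled to the whole field through the smoothness term, noise is averaged out globally, at the cost of the repeated sweeps over the image that the abstract attributes to this approach.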