49 research outputs found

    Photometric stereo and appearance capture

    Ph.D. (Doctor of Philosophy)

    Photometric Reconstruction from Images: New Scenarios and Approaches for Uncontrolled Input Data

    The changes in surface shading caused by varying illumination are an important cue for discerning fine details and recognizing the shape of textureless objects. Humans perform this task subconsciously, but it is challenging for a computer because several unknown variables are intermixed in the light distribution that actually reaches the eye or camera. In this work, we study algorithms and techniques to automatically recover surface orientation and reflectance properties from multiple images of a scene. Photometric reconstruction techniques have been investigated for decades but are still restricted to industrial applications and research laboratories. Making these techniques work on more general, uncontrolled input without specialized capture setups is the necessary next step, but this problem is not yet solved. We explore the current limits of photometric shape recovery in terms of input data and propose ways to overcome some of its restrictions.

    Many approaches, especially for non-Lambertian surfaces, require the illumination and the radiometric response function of the camera to be known, and the accuracy such algorithms can achieve depends heavily on the quality of an a priori calibration of these parameters. We propose two techniques to estimate the position of a point light source, experimentally compare their performance with the commonly employed method, and draw conclusions about which one to use in practice. We also discuss how well an absolute radiometric calibration can be performed on uncontrolled consumer images and show the application of a simple radiometric model to re-create night-time impressions from color images.

    A focus of this thesis is on Internet images, which are an increasingly important source of data for computer vision and graphics applications. For reconstruction in this setting, we present novel approaches that recover surface orientation from Internet webcam images, exploring two different strategies to overcome the challenges posed by this kind of input data. One technique exploits orientation consistency and matches appearance profiles on the target with a partial reconstruction of the scene; this avoids an explicit light calibration and works for any reflectance that is observed on the partial reference geometry. The other technique employs an outdoor lighting model and reflectance properties represented as parametric basis materials. It yields a richer scene representation consisting of shape and reflectance, which is very useful for the simulation of new impressions or for editing operations, e.g., relighting. The proposed approach is the first to achieve such a reconstruction on webcam data. Both techniques are accompanied by evaluations on synthetic and real-world data showing qualitative and quantitative results.

    We also present a reconstruction approach for data that is more controlled in terms of the target scene. It relies on a reference object to relax a constraint common to many photometric stereo approaches: the fixed-camera assumption. The proposed technique allows the camera and light source to vary freely in each image; it again avoids a light calibration step and can be applied to non-Lambertian surfaces.

    In summary, this thesis contributes to both the calibration and the reconstruction aspects of photometric techniques. We overcome challenges in both controlled and uncontrolled settings, with a focus on the latter, and all proposed approaches are shown to operate on non-Lambertian objects as well.
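
    As context for the photometric techniques this abstract surveys, the sketch below shows the classic calibrated, Lambertian photometric stereo solve that such work generalizes: given images under known light directions, per-pixel normals and albedo follow from a linear least-squares fit. This is a minimal reference sketch of the textbook baseline, not the thesis's uncalibrated or non-Lambertian algorithms; the function name and array shapes are illustrative assumptions.

    import numpy as np

    def lambertian_photometric_stereo(I, L):
        """Classic calibrated photometric stereo (the Lambertian baseline).

        I : (k, n) stacked pixel intensities from k images of n pixels.
        L : (k, 3) known, unit-length light directions, one per image.
        Returns (n, 3) unit normals and an (n,) albedo map, assuming
        the Lambertian model I = L @ (albedo * normal).
        """
        # Least-squares solve for the scaled normal b = albedo * n per pixel.
        B, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, n)
        albedo = np.linalg.norm(B, axis=0)             # per-pixel albedo
        normals = (B / np.maximum(albedo, 1e-12)).T    # (n, 3) unit normals
        return normals, albedo

    With a calibrated camera response and known light positions, this baseline is linear; the calibration questions and uncontrolled-input scenarios the thesis addresses are precisely what break these assumptions.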

    Photometric stereo with applications in material classification

    Ph.D. (Doctor of Philosophy)

    High-resolution imaging for e-heritage

    Ph.D. (Doctor of Philosophy)

    Towards Casual Appearance Capture by Reflectance Symmetry

    Ph.D. (Doctor of Philosophy)

    Resolving Ambiguities in Monocular 3D Reconstruction of Deformable Surfaces

    In this thesis, we focus on the problem of recovering the 3D shape of deformable surfaces from a single camera. This problem is known to be ill-posed: for a given 2D input image, there exist many 3D shapes that yield visually identical projections. We present three methods that make headway towards resolving these ambiguities, and we believe our work represents a significant step towards making surface reconstruction methods practical.

    First, we propose a surface reconstruction method that overcomes the limitations of state-of-the-art template-based and non-rigid structure-from-motion methods. We neither track points over many frames, nor require a sophisticated deformation model, nor depend on a reference image. Instead, we establish correspondences between pairs of frames in which the shape is different and unknown, and estimate homographies between corresponding local planar patches in both images. These yield approximate 3D reconstructions of the points within each patch up to a scale factor. Since the patches overlap, we can enforce consistency over the whole surface. Finally, a local deformation model is used to fit a triangulated mesh to the 3D point cloud, which makes the reconstruction robust to both noise and outliers in the image data.

    Second, we propose a novel approach to recovering the 3D shape of a deformable surface from monocular input by taking advantage of shading information in more generic contexts than conventional Shape-from-Shading (SfS) methods, including surfaces that may be fully or partially textured and lit by arbitrarily many light sources. To this end, given a lighting model, we learn the relationship between a shading pattern and the corresponding local surface shape. At run time, we first use this knowledge to recover the shape of surface patches and then enforce spatial consistency between the patches to produce a global 3D shape. Instead of treating texture as noise, as many SfS approaches do, we exploit it as an additional source of information. We validate our approach quantitatively and qualitatively using both synthetic and real data.

    Third, we introduce a constrained latent variable model that inherently accounts for geometric constraints, such as inextensibility, defined on the mesh model. To this end, we learn a non-linear mapping from the latent space to the output space, which corresponds to the vertex positions of a mesh model, such that the generated outputs comply with equality and inequality constraints expressed in terms of the problem variables. Since its output is encouraged to satisfy such constraints inherently, our model removes the need for computationally expensive methods that enforce these constraints at run time. In addition, our approach is completely generic and could be used in many other contexts, such as image classification, to impose separation of the classes, or articulated tracking, to constrain the space of possible poses.
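
    The per-patch step of the first method — estimating a homography between corresponding local planar patches and decomposing it into orientation and relative pose up to scale — can be sketched with OpenCV as below. This is only the isolated patch computation under assumed names; selecting the physically valid decomposition and enforcing consistency across overlapping patches, which the abstract describes as the key to the method, are omitted.

    import numpy as np
    import cv2

    def patch_candidates(pts_a, pts_b, K):
        """From matched points on one local planar patch seen in two
        frames, recover candidate patch orientations and relative poses,
        valid only up to the unknown plane depth (i.e., up to scale).

        pts_a, pts_b : (m, 2) float32 pixel coordinates of the matches.
        K            : (3, 3) camera intrinsic matrix.
        """
        H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
        # decomposeHomographyMat returns up to four (R, t/d, n) solutions;
        # cheirality tests and cross-patch consistency must prune them.
        n_sols, Rs, Ts, Ns = cv2.decomposeHomographyMat(H, K)
        return [(Rs[i], Ts[i], Ns[i]) for i in range(n_sols)]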

    Computational Imaging for Shape Understanding

    Geometry is an essential property of real-world scenes, and understanding the shape of an object is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that uses the co-design of imaging hardware and computational software to expand the capabilities of traditional cameras.

    To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. Specifically, we use multiple RGB-D cameras to fuse the varying poses and register the frontal face in a unified coordinate system; deep color features and geodesic distance features are then used to perform face recognition.

    For underwater imaging applications, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction, leveraging the angular sampling of the light field for robust depth estimation and developing a fast ray marching algorithm to improve efficiency.

    To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction. We formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals within an optimization framework.

    To recover 3D shape under unknown and uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by the environment lighting, as well as to provide photometric cues. To mitigate the effect of uncontrolled environment light in the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates.

    Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
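
    The polarimetric methods in the last two paragraphs build on standard per-pixel polarization cues. As a minimal sketch of that common preprocessing step (assuming a conventional four-angle linear-polarizer capture, not the dissertation's specific pipeline), the degree and angle of linear polarization follow from the linear Stokes parameters:

    import numpy as np

    def polarization_cues(i0, i45, i90, i135):
        """Degree (DoLP) and angle (AoLP) of linear polarization from
        four captures behind a linear polarizer at 0/45/90/135 degrees.
        These are the standard per-pixel cues that polarimetric shape
        methods relate to surface normals.
        """
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (averaged)
        s1 = i0 - i90                        # horizontal/vertical balance
        s2 = i45 - i135                      # diagonal balance
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
        aolp = 0.5 * np.arctan2(s2, s1)      # radians
        return dolp, aolp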