6 research outputs found

    Polarimetric Multi-View Inverse Rendering

    A polarization camera has great potential for 3D reconstruction since the angle of polarization (AoP) of reflected light is related to an object's surface normal. In this paper, we propose a novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that effectively exploits geometric, photometric, and polarimetric cues extracted from input multi-view color polarization images. We first estimate camera poses and an initial 3D model by geometric reconstruction with a standard structure-from-motion and multi-view stereo pipeline. We then refine the initial model by optimizing photometric and polarimetric rendering errors using multi-view RGB and AoP images, where we propose a novel polarimetric rendering cost function that enables us to effectively constrain each estimated surface vertex's normal while considering the four possible ambiguous azimuth angles revealed by the AoP measurement. Experimental results using both synthetic and real data demonstrate that our Polarimetric MVIR can reconstruct a detailed 3D shape without assuming a specific polarized reflection model that depends on the material. Comment: Paper accepted in ECCV 2020.
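    For reference, the core geometric fact used here is that an AoP measurement constrains the azimuth of the surface normal only up to a four-fold ambiguity: the AoP itself is defined modulo pi, and diffuse versus specular polarization shifts it by a further pi/2. The following Python sketch illustrates one way a per-vertex cost with this property could be written; the function name, the camera-coordinate azimuth convention, and the squared-angle penalty are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        def polarimetric_cost(normal, aop):
            # Minimal sketch (not the paper's exact cost): penalize the mismatch
            # between a vertex normal's projected azimuth and the nearest of the
            # four candidate azimuths implied by a measured AoP.
            # normal: (3,) vertex normal in camera coordinates; aop: AoP in radians.
            azimuth = np.arctan2(normal[1], normal[0])
            # The pi-ambiguity of the AoP plus the pi/2 diffuse/specular shift
            # yields four candidate azimuth angles.
            candidates = aop + np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
            diff = np.angle(np.exp(1j * (azimuth - candidates)))  # wrap to (-pi, pi]
            return np.min(diff ** 2)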

    Polarimetric Multi-View Inverse Rendering

    A polarization camera has great potential for 3D reconstruction since the angle of polarization (AoP) and the degree of polarization (DoP) of reflected light are related to an object's surface normal. In this paper, we propose a novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that effectively exploits geometric, photometric, and polarimetric cues extracted from input multi-view color-polarization images. We first estimate camera poses and an initial 3D model by geometric reconstruction with a standard structure-from-motion and multi-view stereo pipeline. We then refine the initial model by optimizing photometric rendering errors and polarimetric errors using multi-view RGB, AoP, and DoP images, where we propose a novel polarimetric cost function that enables an effective constraint on the estimated surface normal of each vertex while considering the four possible ambiguous azimuth angles revealed by the AoP measurement. The weight for the polarimetric cost is determined from the DoP measurement, which is regarded as the reliability of the polarimetric information. Experimental results using both synthetic and real data demonstrate that our Polarimetric MVIR can reconstruct a detailed 3D shape without assuming a specific surface material or lighting condition. Comment: Paper accepted in IEEE Transactions on Pattern Analysis and Machine Intelligence (2022). arXiv admin note: substantial text overlap with arXiv:2007.0883
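    For context, the AoP and DoP images referred to above are typically obtained from the four intensities recorded by a division-of-focal-plane polarization camera via the linear Stokes parameters; the decoding below is the standard formulation rather than anything specific to this paper, and the reliability weighting shown in the final comment is only an assumed form.

        import numpy as np

        def decode_polarization(i0, i45, i90, i135):
            # Standard Stokes-vector decoding for 0/45/90/135-degree polarizer images.
            s0 = 0.5 * (i0 + i45 + i90 + i135)                    # total intensity
            s1 = i0 - i90
            s2 = i45 - i135
            aop = 0.5 * np.arctan2(s2, s1)                        # angle of polarization (mod pi)
            dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)   # degree of polarization
            return aop, dop

        # Assumed (illustrative) reliability weighting of the polarimetric cost,
        # e.g. weight = dop / (dop + tau) for a small constant tau; the paper's
        # exact weighting scheme may differ.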

    Uncalibrated, Two Source Photo-Polarimetric Stereo

    Available online: 6 May 2021. In this paper we present methods for estimating shape from polarisation and shading information, i.e. photo-polarimetric shape estimation, under varying but unknown illumination, i.e. in an uncalibrated scenario. We propose several alternative photo-polarimetric constraints that depend upon the partial derivatives of the surface and show how to express them in a unified system of partial differential equations, of which previous work is a special case. By careful combination and manipulation of the constraints, we show how to eliminate non-linearities such that a discrete version of the problem can be solved using linear least squares. We derive a minimal, combinatorial approach for two-source illumination estimation, which we use with RANSAC for robust light direction and intensity estimation. We also introduce a new method for estimating a polarisation image from multichannel data and provide methods for estimating albedo and refractive index. We evaluate the lighting, shape, albedo, and refractive index estimation methods on both synthetic and real-world data, showing improvements over the existing state of the art.
    Authors: Tozza, Silvia; Zhu, Dizhong; Smith, William; Ramamoorthi, Ravi; Hancock, Edwin
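    As a rough illustration of the polarisation-image estimation step, the transmitted radiance observed through a linear polariser at angle theta follows the sinusoid I(theta) = i_un (1 + rho cos(2 theta - 2 phi)), which becomes linear in a reparameterised set of unknowns and can therefore be fitted by linear least squares. The single-channel sketch below (function name and interface are assumptions; the paper's multichannel estimator combines colour channels more carefully) recovers the unpolarised intensity, degree of polarisation, and phase per pixel.

        import numpy as np

        def fit_polarisation_image(intensities, angles):
            # intensities: (K, H, W) images taken at K polariser angles; angles: (K,) in radians.
            # Model: I(theta) = a + b*cos(2*theta) + c*sin(2*theta), linear in (a, b, c).
            K, H, W = intensities.shape
            A = np.stack([np.ones(K), np.cos(2 * angles), np.sin(2 * angles)], axis=1)
            X = intensities.reshape(K, -1)
            coeffs, *_ = np.linalg.lstsq(A, X, rcond=None)          # coeffs has shape (3, H*W)
            a, b, c = coeffs
            i_un = a.reshape(H, W)                                   # unpolarised intensity
            rho = (np.sqrt(b**2 + c**2) / np.maximum(a, 1e-8)).reshape(H, W)  # degree of polarisation
            phi = (0.5 * np.arctan2(c, b)).reshape(H, W)             # phase angle
            return i_un, rho, phi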

    Computational Imaging for Shape Understanding

    Geometry is an essential property of real-world scenes. Understanding the shape of an object is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that uses the co-design of imaging hardware and computational software to expand the capacity of traditional cameras. To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. Specifically, we use multiple RGB-D cameras to fuse the varying poses and register the frontal face in a unified coordinate system. Deep color features and geodesic distance features are then used to perform face recognition. To handle underwater imaging applications, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction. We leverage the angular sampling of the light field for robust depth estimation, and we develop a fast ray-marching algorithm to improve the efficiency of the method. To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction. We formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals using an optimization framework. To recover 3D shape under unknown and uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by the environment lighting, as well as to provide photometric cues. To mitigate the effect of uncontrolled environment light in the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
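    For reference, the polarimetric Helmholtz stereopsis mentioned above builds on the classical Helmholtz reciprocity constraint (Zickler et al.), in which a reciprocal image pair, taken with the camera and point light source at positions o_l and o_r swapped, constrains the surface normal n at a point p independently of the BRDF. Stated here only as background, in its usual form:

        % Classical Helmholtz stereopsis constraint for a reciprocal image pair:
        % i_l, i_r are the intensities observed with the camera at o_l and o_r,
        % respectively, and v_l, v_r are unit directions from p toward o_l, o_r.
        \left( i_l \frac{\mathbf{v}_l}{\lVert \mathbf{o}_l - \mathbf{p} \rVert^2}
             - i_r \frac{\mathbf{v}_r}{\lVert \mathbf{o}_r - \mathbf{p} \rVert^2} \right) \cdot \mathbf{n} = 0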

    3D shape reconstruction using a polarisation reflectance model in conjunction with shading and stereo

    Reconstructing the 3D geometry of objects from images is a fundamental problem in computer vision. This thesis focuses on shape from polarisation, where the goal is to reconstruct a dense depth map from a sequence of polarisation images. Firstly, we propose a linear differential constraints approach to depth estimation from polarisation images. We demonstrate that colour images can deliver more robust polarimetric measurements compared to monochrome images. We then explore different constraints by taking the polarisation images under two different lighting conditions with a fixed view and show that a dense depth map, an albedo map, and the refractive index can be recovered. Secondly, we propose a nonlinear, end-to-end method to reconstruct depth. We re-parameterise a polarisation reflectance model with respect to the depth map and predict an optimal depth map by minimising an energy cost function between the prediction from the reflectance model and the observed data using nonlinear least squares. Thirdly, we propose to augment the polarisation camera with an additional RGB camera in a second view. We construct a higher-order graphical model by utilising an initial rough depth map estimated from the stereo views. The graphical model corrects the surface normal ambiguity that arises from the polarisation reflectance model. We then build a linear system that combines the corrected surface normals, polarimetric information, and the rough depth map to produce an accurate and dense depth map. Lastly, we derive a mixed polarisation model that describes specular and diffuse polarisation as well as mixtures of the two. This model is more physically accurate and allows us to decompose specular and diffuse reflectance from multi-view images.
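    As background for the reflectance models discussed above, shape-from-polarisation methods in this line of work commonly rely on the Fresnel-based diffuse polarisation model, in which the degree of polarisation depends only on the zenith angle of the surface normal and the refractive index, while the phase angle equals the azimuth angle modulo pi (shifted by pi/2 for specular reflection). The standard diffuse form (e.g. Atkinson and Hancock), given here for reference rather than as the thesis's mixed model, is:

        % Degree of diffuse polarisation as a function of zenith angle theta
        % and refractive index n.
        \rho_d(\theta, n) =
          \frac{(n - 1/n)^2 \sin^2\theta}
               {2 + 2n^2 - (n + 1/n)^2 \sin^2\theta + 4\cos\theta\sqrt{n^2 - \sin^2\theta}}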