299 research outputs found

    Linear Differential Constraints for Photo-polarimetric Height Estimation

    Full text link
    In this paper we present a differential approach to photo-polarimetric shape estimation. We propose several alternative differential constraints based on polarisation and photometric shading information and show how to express them in a unified partial differential system. Our method uses the image ratios technique to combine shading and polarisation information in order to directly reconstruct surface height, without first computing surface normal vectors. Moreover, we are able to remove the non-linearities so that the problem reduces to solving a linear differential problem. We also introduce a new method for estimating a polarisation image from multichannel data and, finally, we show it is possible to estimate the illumination directions in a two-source setup, extending the method into an uncalibrated scenario. From a numerical point of view, we use a least-squares formulation of the discrete version of the problem. To the best of our knowledge, this is the first work to consider a unified differential approach to solve photo-polarimetric shape estimation directly for height. Numerical results on synthetic and real-world data confirm the effectiveness of our proposed method. Comment: To appear at International Conference on Computer Vision (ICCV), Venice, Italy, October 22-29, 2017
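
    As a rough illustration of the least-squares formulation mentioned above (an assumption-laden simplification, not the authors' code), the sketch below stacks per-pixel linear constraints of the form a·z_x + b·z_y = c on the height z, discretised with forward differences, into one sparse system solved with LSQR; the coefficient images a, b, c stand in for the photo-polarimetric constraints derived in the paper.

```python
# Hypothetical sketch: least-squares height from per-pixel linear differential
# constraints a*z_x + b*z_y = c, with z_x, z_y taken as forward differences.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def solve_height(a, b, c):
    """Return a height map z (up to an additive constant) from coefficient images a, b, c."""
    h, w = a.shape
    idx = np.arange(h * w).reshape(h, w)      # column index of each unknown z[y, x]
    A = lil_matrix((h * w, h * w))
    rhs = np.zeros(h * w)
    for y in range(h - 1):                    # last row/column skipped (forward differences)
        for x in range(w - 1):
            r = idx[y, x]
            A[r, idx[y, x + 1]] += a[y, x]    # z_x ~ z[y, x+1] - z[y, x]
            A[r, idx[y + 1, x]] += b[y, x]    # z_y ~ z[y+1, x] - z[y, x]
            A[r, idx[y, x]] -= a[y, x] + b[y, x]
            rhs[r] = c[y, x]
    z = lsqr(A.tocsr(), rhs)[0]               # minimum-norm least-squares solution
    return z.reshape(h, w)
```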

    Polarimetric Multi-View Inverse Rendering

    Full text link
    A polarization camera has great potential for 3D reconstruction since the angle of polarization (AoP) of reflected light is related to an object's surface normal. In this paper, we propose a novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that effectively exploits geometric, photometric, and polarimetric cues extracted from input multi-view color polarization images. We first estimate camera poses and an initial 3D model by geometric reconstruction with a standard structure-from-motion and multi-view stereo pipeline. We then refine the initial model by optimizing photometric and polarimetric rendering errors using multi-view RGB and AoP images, where we propose a novel polarimetric rendering cost function that enables us to effectively constrain each estimated surface vertex's normal while considering four possible ambiguous azimuth angles revealed from the AoP measurement. Experimental results using both synthetic and real data demonstrate that our Polarimetric MVIR can reconstruct a detailed 3D shape without assuming a specific polarized reflection depending on the material. Comment: Paper accepted in ECCV 2020
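
    A minimal sketch of the idea behind the polarimetric rendering cost described above (illustrative only, not the paper's implementation): each AoP measurement phi admits four candidate surface azimuths (phi and phi + pi for specular reflection, phi ± pi/2 for diffuse reflection), and the cost keeps the smallest squared angular residual so that no single hypothesis is committed to prematurely.

```python
import numpy as np

def polarimetric_cost(normal_cam, aop):
    """Smallest squared azimuth residual between a vertex normal (camera coordinates)
    and the four candidate azimuths implied by one AoP measurement (radians)."""
    azimuth = np.arctan2(normal_cam[1], normal_cam[0])         # azimuth of the estimated normal
    candidates = aop + np.array([0.0, np.pi, np.pi / 2, -np.pi / 2])
    residuals = np.angle(np.exp(1j * (azimuth - candidates)))  # wrap differences to (-pi, pi]
    return float(np.min(residuals ** 2))
```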

    Polarimetric Multi-View Inverse Rendering

    Full text link
    A polarization camera has great potential for 3D reconstruction since the angle of polarization (AoP) and the degree of polarization (DoP) of reflected light are related to an object's surface normal. In this paper, we propose a novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that effectively exploits geometric, photometric, and polarimetric cues extracted from input multi-view color-polarization images. We first estimate camera poses and an initial 3D model by geometric reconstruction with a standard structure-from-motion and multi-view stereo pipeline. We then refine the initial model by optimizing photometric rendering errors and polarimetric errors using multi-view RGB, AoP, and DoP images, where we propose a novel polarimetric cost function that enables an effective constraint on the estimated surface normal of each vertex, while considering four possible ambiguous azimuth angles revealed from the AoP measurement. The weight for the polarimetric cost is effectively determined based on the DoP measurement, which is regarded as the reliability of polarimetric information. Experimental results using both synthetic and real data demonstrate that our Polarimetric MVIR can reconstruct a detailed 3D shape without assuming a specific surface material and lighting condition. Comment: Paper accepted in IEEE Transactions on Pattern Analysis and Machine Intelligence (2022). arXiv admin note: substantial text overlap with arXiv:2007.0883
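
    To illustrate how the DoP-based weighting described above could enter the cost (an assumed weighting scheme, not the paper's exact formulation), the sketch below gates the four-candidate azimuth residual by the degree of polarisation, so that pixels with unreliable AoP contribute little.

```python
import numpy as np

def weighted_polarimetric_cost(normal_cam, aop, dop, dop_min=0.02):
    """Four-candidate azimuth residual weighted by DoP, treated as a reliability score.
    dop_min is a hypothetical cutoff below which the AoP is treated as pure noise."""
    azimuth = np.arctan2(normal_cam[1], normal_cam[0])
    candidates = aop + np.array([0.0, np.pi, np.pi / 2, -np.pi / 2])
    residual = float(np.min(np.angle(np.exp(1j * (azimuth - candidates))) ** 2))
    weight = 0.0 if dop < dop_min else dop
    return weight * residual
```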

    Computational Imaging for Shape Understanding

    Get PDF
    Geometry is an essential property of real-world scenes, and understanding the shape of an object is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that uses the co-design of imaging hardware and computational software to expand the capacity of traditional cameras. To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. Specifically, we use multiple RGB-D cameras to fuse the varying poses and register the front face in a unified coordinate system; deep color features and geodesic distance features are then used for face recognition. To handle underwater imaging applications, we study the angular-spatial encoding and polarization state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction, leveraging the angular sampling of the light field for robust depth estimation and developing a fast ray marching algorithm to improve efficiency. To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction. We formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals within an optimization framework. To recover 3D shape under unknown and uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by the environment lighting, as well as to provide photometric cues. To mitigate the effect of uncontrolled environment light on the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
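
    For context on the polarimetric Helmholtz stereopsis mentioned above, the sketch below shows the classical Helmholtz reciprocity constraint it builds on: for a reciprocal image pair (camera and light source swapped), the vector w is orthogonal to the surface normal, giving one linear equation per pixel. Symbols are generic, and the dissertation's additional polarimetric constraints are not shown.

```python
import numpy as np

def helmholtz_constraint(i_l, i_r, p, o_l, o_r):
    """Return w with w @ n ~ 0 for surface point p and camera/light centres o_l, o_r.
    i_l: intensity of p seen from o_l (light at o_r); i_r: the reciprocal measurement."""
    v_l = (o_l - p) / np.linalg.norm(o_l - p)   # unit direction from p towards o_l
    v_r = (o_r - p) / np.linalg.norm(o_r - p)   # unit direction from p towards o_r
    w = i_l * v_l / np.sum((o_l - p) ** 2) - i_r * v_r / np.sum((o_r - p) ** 2)
    return w
```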

    High Resolution Surface Reconstruction of Cultural Heritage Objects Using Shape from Polarization Method

    Get PDF
    Nowadays, three-dimensional reconstruction is used in various fields such as computer vision, computer graphics, mixed reality and digital twins. The three-dimensional reconstruction of cultural heritage objects is one of the most important applications in this area and is usually accomplished by close range photogrammetry. The problem is that the images are often noisy, and the dense image matching method has significant limitations in reconstructing the geometric details of cultural heritage objects in practice. Therefore, capturing high-level detail in three-dimensional models, especially for cultural heritage objects, is a severe challenge in this field. In this paper, the shape from polarization method is investigated: a passive method without the drawbacks of active methods. In this method, the resolution of the depth maps can be dramatically increased using the information obtained from polarized light by rotating a linear polarizing filter in front of a digital camera. From these polarized images, the surface details of the object can be reconstructed locally with high accuracy. The fusion of polarization and photogrammetric methods is an appropriate solution for achieving high resolution three-dimensional reconstruction. The surface reconstruction has been assessed both visually and quantitatively. The evaluations show that the proposed method reconstructs significantly more surface detail in the three-dimensional model than the photogrammetric method, with 10 times higher depth resolution.
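
    As a sketch of the measurement step this method relies on (illustrative, not the paper's pipeline), the per-pixel intensity under a rotating linear polariser follows I(theta) = I_un (1 + rho cos(2 theta - 2 phi)), so the unpolarised intensity I_un, degree of polarisation rho and angle of polarisation phi can be recovered by a linear least-squares fit to images taken at several known polariser angles.

```python
import numpy as np

def fit_polarisation(intensities, angles):
    """Fit I(theta) = c0 + c1*cos(2*theta) + c2*sin(2*theta) to one pixel's samples.
    intensities, angles: 1-D arrays of equal length (angles in radians)."""
    A = np.stack([np.ones_like(angles), np.cos(2 * angles), np.sin(2 * angles)], axis=1)
    c0, c1, c2 = np.linalg.lstsq(A, intensities, rcond=None)[0]
    i_un = c0                                   # unpolarised intensity
    dop = np.sqrt(c1 ** 2 + c2 ** 2) / c0       # degree of polarisation
    aop = 0.5 * np.arctan2(c2, c1)              # angle of polarisation
    return i_un, dop, aop
```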

    Polarimetric Pose Prediction

    Full text link
    Light has many properties that vision sensors can passively measure. Colour-band separated wavelength and intensity are arguably the most commonly used for monocular 6D object pose estimation. This paper explores how complementary polarisation information, i.e. the orientation of light wave oscillations, influences the accuracy of pose predictions. A hybrid model that leverages physical priors jointly with a data-driven learning strategy is designed and carefully tested on objects with different levels of photometric complexity. Our design significantly improves the pose accuracy compared to state-of-the-art photometric approaches and enables object pose estimation for highly reflective and transparent objects. A new multi-modal instance-level 6D object pose dataset with highly accurate pose annotations for multiple objects with varying photometric complexity is introduced as a benchmark. Comment: Accepted at ECCV 2022; 25 pages (14 main paper + References + 7 Appendix)
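
    One plausible way (an assumption, not necessarily the paper's hybrid design) to inject a physical polarisation prior into a learning pipeline is sketched below: each AoP pixel is expanded into the four candidate surface-normal azimuths it implies (specular: aop, aop + pi; diffuse: aop ± pi/2), which can then be stacked as extra input channels for a pose network.

```python
import numpy as np

def azimuth_prior_channels(aop):
    """aop: (H, W) angle-of-polarisation image in radians -> (H, W, 4) candidate azimuths."""
    offsets = np.array([0.0, np.pi, np.pi / 2, -np.pi / 2])
    return (aop[..., None] + offsets) % (2 * np.pi)
```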

    3D shape reconstruction using a polarisation reflectance model in conjunction with shading and stereo

    Get PDF
    Reconstructing the 3D geometry of objects from images is a fundamental problem in computer vision. This thesis focuses on shape from polarisation, where the goal is to reconstruct a dense depth map from a sequence of polarisation images. Firstly, we propose a linear differential constraints approach to depth estimation from polarisation images. We demonstrate that colour images can deliver more robust polarimetric measurements compared to monochrome images. We then explore different constraints by capturing the polarisation images under two different lighting conditions with a fixed view, and show that a dense depth map, albedo map and refractive index can be recovered. Secondly, we propose an end-to-end nonlinear method to reconstruct depth. We re-parameterise a polarisation reflectance model with respect to the depth map, and predict an optimum depth map by minimising an energy cost function between the reflectance model's prediction and the observed data using nonlinear least squares. Thirdly, we propose to enhance the polarisation camera with an additional RGB camera in a second view. We construct a higher-order graphical model by utilising an initial rough depth map estimated from the stereo views. The graphical model corrects the surface normal ambiguity that arises from the polarisation reflectance model. We then build a linear system that combines the corrected surface normals, the polarimetric information and the rough depth map to produce an accurate and dense depth map. Lastly, we derive a mixed polarisation model that describes specular and diffuse polarisation as well as mixtures of the two. This model is more physically accurate and allows us to decompose specular and diffuse reflectance from multiview images.
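
    For reference, the sketch below gives the standard Fresnel-based diffuse and specular degree-of-polarisation models that shape-from-polarisation methods such as the ones above build on, together with a simple convex combination as an illustrative stand-in for a mixed diffuse/specular model; the thesis derives its mixture from the full reflectance model rather than from this weighting.

```python
import numpy as np

def dop_diffuse(theta, n=1.5):
    """Diffuse degree of polarisation for zenith angle theta and refractive index n."""
    s2 = np.sin(theta) ** 2
    num = (n - 1.0 / n) ** 2 * s2
    den = 2 + 2 * n ** 2 - (n + 1.0 / n) ** 2 * s2 + 4 * np.cos(theta) * np.sqrt(n ** 2 - s2)
    return num / den

def dop_specular(theta, n=1.5):
    """Specular degree of polarisation for zenith angle theta and refractive index n."""
    s2 = np.sin(theta) ** 2
    num = 2 * s2 * np.cos(theta) * np.sqrt(n ** 2 - s2)
    den = n ** 2 - s2 - n ** 2 * s2 + 2 * s2 ** 2
    return num / den

def dop_mixed(theta, alpha, n=1.5):
    """Illustrative mixture: alpha in [0, 1] blends the diffuse and specular models."""
    return alpha * dop_diffuse(theta, n) + (1 - alpha) * dop_specular(theta, n)
```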