Differentiable Display Photometric Stereo
Photometric stereo leverages variations in illumination conditions to
reconstruct per-pixel surface normals. The concept of display photometric
stereo, which employs a conventional monitor as an illumination source, has the
potential to overcome limitations often encountered in bulky and
difficult-to-use conventional setups. In this paper, we introduce
Differentiable Display Photometric Stereo (DDPS), a method designed to achieve
high-fidelity normal reconstruction using an off-the-shelf monitor and camera.
DDPS addresses a critical yet often neglected challenge in photometric stereo:
the optimization of display patterns for enhanced normal reconstruction. We
present a differentiable framework that couples basis-illumination image
formation with a photometric-stereo reconstruction method. This facilitates the
learning of display patterns that lead to high-quality normal reconstruction
through automatic differentiation. Addressing the synthetic-real domain gap
inherent in end-to-end optimization, we propose the use of a real-world
photometric-stereo training dataset composed of 3D-printed objects. Moreover,
to reduce the ill-posed nature of photometric stereo, we exploit the linearly
polarized light emitted from the monitor to optically separate diffuse and
specular reflections in the captured images. We demonstrate that DDPS allows
for learning display patterns optimized for a target configuration and is
robust to initialization. We assess DDPS on 3D-printed objects with
ground-truth normals and diverse real-world objects, validating that DDPS
enables effective photometric-stereo reconstruction.
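For context, the classical Lambertian photometric-stereo step that methods such as DDPS build on can be sketched in a few lines: given k images of a pixel under known distant lights, the albedo-scaled normal is the least-squares solution of a linear system. This is a minimal illustration of that baseline, not the DDPS pipeline; all names are illustrative.

```python
import numpy as np

def estimate_normal(intensities, light_dirs):
    """Solve I = L @ (rho * n) for a single pixel by least squares.

    intensities : (k,) observed pixel values
    light_dirs  : (k, 3) unit light directions
    Returns the unit normal n and albedo rho.
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    rho = np.linalg.norm(g)
    n = g / rho if rho > 0 else np.array([0.0, 0.0, 1.0])
    return n, rho

# Usage: a flat surface facing +z, lit by three tilted lights (albedo 1).
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
n_true = np.array([0.0, 0.0, 1.0])
I = L @ n_true                      # synthetic Lambertian shading
n, rho = estimate_normal(I, L)      # recovers n_true and rho = 1
```

In practice at least three non-coplanar lights are needed per pixel; more lights give an overdetermined system that the least-squares solve handles naturally.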
A full photometric and geometric model for attached webcam/matte screen devices
We present a thorough photometric and geometric study of multimedia devices composed of both a matte screen and an attached camera. We show that the light emitted by an image displayed on the monitor can be expressed in closed form at any point facing the screen, and that the geometric calibration of the camera attached to the screen can be simplified by introducing simple geometric constraints. These theoretical contributions are experimentally validated in a photometric stereo application with extended sources, where a colored scene is reconstructed while watching a collection of gray-level images displayed on the screen, providing a cheap and entertaining way to acquire realistic 3D representations for, e.g., augmented reality.
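The closed-form expression itself is in the paper; as a rough numerical counterpart, the irradiance received from a matte (Lambertian) screen can be approximated by summing per-pixel emitter contributions. This is a sketch under simplifying assumptions (screen in the z = 0 plane, emitting toward +z); all names are illustrative.

```python
import numpy as np

def screen_irradiance(point, normal, pixel_pos, pixel_val, pixel_area):
    """Numerically integrate screen-pixel contributions at `point`.

    Each screen pixel is treated as a small Lambertian emitter:
        E(P) ~ sum_s I_s * cos(theta_emit) * cos(theta_recv) / (pi * d_s^2) * dA

    point     : (3,) receiving point
    normal    : (3,) unit normal at the receiving point
    pixel_pos : (m, 3) screen pixel centres
    pixel_val : (m,) displayed radiance per pixel
    """
    screen_normal = np.array([0.0, 0.0, 1.0])
    v = point - pixel_pos                     # (m, 3) pixel -> point
    d2 = np.sum(v * v, axis=1)                # squared distances
    u = v / np.sqrt(d2)[:, None]              # unit directions
    cos_e = u @ screen_normal                 # emission angle at the screen
    cos_r = (-u) @ normal                     # incidence angle at the point
    w = np.clip(cos_e, 0, None) * np.clip(cos_r, 0, None) / (np.pi * d2)
    return float(np.sum(pixel_val * w) * pixel_area)

# Usage: one unit-radiance pixel at the origin, receiver 1 unit above, facing down.
E = screen_irradiance(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]),
                      np.array([[0.0, 0.0, 0.0]]), np.array([1.0]), 1.0)
# E = 1/pi for this head-on configuration
```

The paper's contribution is precisely that this sum admits a closed form for a full matte screen, avoiding the per-pixel numerical integration shown here.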
Semi-calibrated Near Field Photometric Stereo
3D reconstruction from shading information through Photometric Stereo is considered a very challenging problem in Computer Vision. Although this technique can potentially provide highly detailed shape recovery, its accuracy depends critically on numerous factors, among them the reliability of the light sources in emitting a constant amount of light. In this work, we propose a novel variational approach to solve the so-called semi-calibrated near-field Photometric Stereo problem, where the positions but not the brightness of the light sources are known. Additionally, we take into account realistic modeling features such as perspective viewing geometry and heterogeneous scene composition, containing both diffuse and specular objects. Furthermore, we relax the point-light-source assumption that usually constrains the near-field formulation by explicitly calculating the light-attenuation maps. Synthetic experiments provide quantitative evaluation over a wide range of cases, whilst real experiments provide comparisons, qualitatively outperforming the state of the art.
EPSRC; Roberto Mecca is a Marie Curie Fellow of the Istituto Nazionale di Alta Matematica, Italy.
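The light-attenuation maps mentioned above can be illustrated with a simple anisotropic point-source model. The cos^mu(alpha)/d^2 falloff below is a common assumption for near-field LEDs, not necessarily the paper's exact model; all names are illustrative.

```python
import numpy as np

def attenuation_map(surface_pts, light_pos, light_axis, mu=1.0):
    """Per-point attenuation a = cos^mu(alpha) / d^2 for a near-field source.

    surface_pts : (n, 3) 3D points on the surface
    light_pos   : (3,) light position
    light_axis  : (3,) unit principal direction of the source
    mu          : anisotropy exponent (mu = 0 gives an isotropic point light)
    """
    v = surface_pts - light_pos                      # (n, 3) light -> surface
    d2 = np.sum(v * v, axis=1)                       # squared distances
    cos_a = np.clip((v / np.sqrt(d2)[:, None]) @ light_axis, 0.0, None)
    return cos_a ** mu / d2

# Usage: a point 2 units away along the source axis -> attenuation 1/4.
a = attenuation_map(np.array([[0.0, 0.0, 2.0]]), np.zeros(3),
                    np.array([0.0, 0.0, 1.0]))
```

Dividing each observed image by such a map (given known light positions) reduces the near-field problem toward the classical distant-light formulation.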
Mutual Illumination Photometric Stereo
Many techniques have been developed in computer vision to recover three-dimensional shape from two-dimensional images. These techniques impose various combinations of assumptions and restrictions on the imaging conditions to produce a representation of shape (e.g. surface normals or a height map). Although great progress has been made, the problem remains far from solved. In this thesis we propose a new approach to shape recovery, namely `mutual illumination photometric stereo'. We exploit the presence of colourful mutual illumination in an environment to recover the shape of objects from a single image.
Detailed and Practical 3D Reconstruction with Advanced Photometric Stereo Modelling
Object 3D reconstruction has always been one of the main objectives of computer vision. After many decades of research, most techniques are still unsuccessful at recovering high-resolution surfaces, especially for objects with limited surface texture. Moreover, most shiny materials are particularly hard to reconstruct.
Photometric Stereo (PS), which operates by capturing multiple images under changing illumination, has traditionally been one of the most successful techniques at recovering a large amount of surface detail, by exploiting the relationship between shading and local shape. However, using PS has been highly impractical because most approaches are only applicable in a very controlled lab setting and limited to objects exhibiting diffuse reflection.
Nevertheless, recent advances in differential modelling have made complicated Photometric Stereo models possible and variational optimisations for these kinds of models show remarkable resilience to real world imperfections such as non-Gaussian noise and other outliers. Thus, a highly accurate, photometric-based reconstruction system is now possible.
The contribution of this thesis is threefold. First, the Photometric Stereo model is extended to deal with arbitrary ambient lighting. This is a step towards acquisition outside a fully controlled lab setting. Secondly, the need for a priori knowledge of the light-source brightness and attenuation characteristics is relaxed: an alternating optimisation procedure is proposed that estimates these parameters. This extension allows for quick acquisition with inexpensive LEDs that exhibit unpredictable illumination characteristics (flickering, etc.). Finally, a volumetric parameterisation is proposed which allows the multi-view Photometric Stereo problem to be tackled in a similar manner, within a simple unified differential model. This final extension allows for complete object reconstruction, merging information from multiple images taken from multiple viewpoints under variable illumination.
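The alternating estimation of unknown light-source brightness can be sketched as follows. This is a toy least-squares version of the idea (fix the brightness, solve for scaled normals; fix the normals, re-fit the brightness), not the thesis' variational solver; all names are assumptions.

```python
import numpy as np

def alternating_ps(I, L, iters=50):
    """Semi-calibrated PS: model I[j] = phi[j] * (L[j] @ G) with unknown phi.

    I : (k, n) observations (k lights, n pixels)
    L : (k, 3) known unit light directions
    Returns scaled normals G (3, n) and per-light brightness phi (k,).
    """
    phi = np.ones(L.shape[0])
    for _ in range(iters):
        # (a) scaled normals given the current brightness estimates
        G, *_ = np.linalg.lstsq(phi[:, None] * L, I, rcond=None)
        # (b) per-light brightness minimising ||phi_j * (L G)_j - I_j||^2
        pred = L @ G
        phi = np.sum(pred * I, axis=1) / np.sum(pred * pred, axis=1)
    # fix the global scale ambiguity: first light has unit brightness
    G = G * phi[0]
    phi = phi / phi[0]
    return G, phi

# Usage with synthetic data: four lights with unknown brightness.
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8],
              [-0.6, 0.0, 0.8]])
G_true = np.array([[0.1, -0.2, 0.0],
                   [0.0, 0.3, -0.1],
                   [1.0, 0.9, 1.1]])            # scaled normals, 3 pixels
phi_true = np.array([1.0, 0.8, 1.2, 0.9])
I = phi_true[:, None] * (L @ G_true)
G, phi = alternating_ps(I, L)
```

Both steps minimise the same reprojection objective, so the residual is non-increasing across iterations; brightness and normals are only recoverable up to a global scale, hence the normalisation at the end.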
The theoretical work in this thesis is experimentally evaluated in a number of challenging real-world experiments, with data captured by custom-made hardware. In addition, the generality of the proposed models is demonstrated by presenting a differential model for the shape-from-polarisation problem, which leads to a unified optimisation problem fusing information from both methods. This allows for the acquisition of geometrical information about objects such as semi-transparent glass, hitherto hard to deal with.
Three-Dimensional Reconstruction by Photometric Stereo
This thesis addresses 3D reconstruction by photometric stereo, which uses several photographs of a scene taken from the same viewpoint but under different illuminations. We first focus on robust techniques for estimating surface normals and for integrating them into a depth map. We then study two situations where the problem is ill-posed: when the illuminations are unknown, and when only two illuminations are used. The third part is devoted to more realistic models, both for the illuminations and for the surface reflectance. These first three parts bring us to the limits of the classical formulation of photometric stereo: finally, in part 4, we introduce a variational and differential reformulation of the problem that overcomes these limits.