2,453 research outputs found
Towards dynamic camera calibration for constrained flexible mirror imaging
Flexible mirror imaging systems consisting of a perspective
camera viewing a scene reflected in a flexible mirror can provide direct control over image field-of-view and resolution. However, calibration of such systems is difficult due to the vast range of possible mirror shapes
and the flexible nature of the system. This paper proposes the fundamentals of a dynamic calibration approach for flexible mirror imaging systems by examining the constrained case of single dimensional flexing.
The calibration process consists of an initial primary calibration stage followed by in-service dynamic calibration. Dynamic calibration uses a
linear approximation to initialise a non-linear minimisation step, the result of which is the estimate of the mirror surface shape. The method is
easier to implement than existing calibration methods for flexible mirror imagers, requiring only two images of a calibration grid for each dynamic
calibration update. Experimental results with both simulated and real data are presented that demonstrate the capabilities of the proposed approach
Cavlectometry: Towards Holistic Reconstruction of Large Mirror Objects
We introduce a method based on the deflectometry principle for the
reconstruction of specular objects exhibiting significant size and geometric
complexity. A key feature of our approach is the deployment of a Cave Automatic
Virtual Environment (CAVE) as pattern generator. To unfold the full power of
this extraordinary experimental setup, an optical encoding scheme is developed
which accounts for the distinctive topology of the CAVE. Furthermore, we devise
an algorithm for detecting the object of interest in raw deflectometric images.
The segmented foreground is used for single-view reconstruction, the background
for estimation of the camera pose, necessary for calibrating the sensor system.
Experiments suggest a significant gain in coverage in single measurements
compared to previous methods. To facilitate research on specular surface
reconstruction, we will make our data set publicly available.
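The deflectometry principle underlying this abstract reduces, at a single surface point, to the mirror-reflection law: the surface normal bisects the view ray and the ray toward the observed pattern point. The sketch below is a minimal illustration of that geometric core only, with an invented flat-mirror test case; it is not the paper's encoding or reconstruction pipeline.

```python
import numpy as np

def surface_normal(cam_pos, surf_pt, pattern_pt):
    """Normal at a specular surface point from the reflection law:
    the normal bisects the incident view ray and the reflected ray
    toward the decoded pattern point."""
    d = surf_pt - cam_pos
    d = d / np.linalg.norm(d)      # incident direction (camera -> surface)
    r = pattern_pt - surf_pt
    r = r / np.linalg.norm(r)      # reflected direction (surface -> pattern)
    n = r - d                      # r = d - 2 (d.n) n  =>  r - d is parallel to n
    return n / np.linalg.norm(n)

# Sanity check on a flat mirror in the z = 0 plane (normal should be +z).
cam = np.array([0.0, 0.0, 1.0])
pt = np.array([0.5, 0.0, 0.0])
mirrored = np.array([0.0, 0.0, -1.0])          # camera mirrored across z = 0
ref_dir = (pt - mirrored) / np.linalg.norm(pt - mirrored)
pattern = pt + 2.0 * ref_dir                   # a point on the reflected ray
n = surface_normal(cam, pt, pattern)
print(np.round(n, 6))
```

In an actual deflectometric setup the pattern point is identified by decoding the displayed codes (here, the CAVE's optical encoding scheme), and per-pixel normals are then integrated into a surface.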
Single View 3D Reconstruction under an Uncalibrated Camera and an Unknown Mirror Sphere
In this paper, we develop a novel self-calibration method for single view 3D reconstruction using a mirror sphere. Unlike other mirror sphere based reconstruction methods, our method requires neither the intrinsic parameters of the camera, nor the position and radius of the sphere, to be known. Based on an eigen decomposition of the matrix representing the conic image of the sphere and enforcing a repeated eigenvalue constraint, we derive an analytical solution for recovering the focal length of the camera given its principal point. We then introduce a robust algorithm for estimating both the principal point and the focal length of the camera by minimizing the differences between focal lengths estimated from multiple images of the sphere. We also present a novel approach for estimating both the principal point and focal length of the camera in the case of just one single image of the sphere. With the estimated camera intrinsic parameters, the position(s) of the sphere can be readily retrieved from the eigen decomposition(s), and a scaled 3D reconstruction follows. Experimental results on both synthetic and real data are presented, which demonstrate the feasibility and accuracy of our approach. © 2016 IEEE.
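The repeated-eigenvalue idea can be demonstrated numerically: the cone of rays tangent to a sphere is circular, so after back-projecting the image conic with the correct focal length, two eigenvalues of the resulting matrix coincide. The sketch below is a simplified illustration under stated assumptions (principal point at the image origin, square pixels, a grid search instead of the paper's analytical solution, and an arbitrary synthetic sphere direction and half-angle).

```python
import numpy as np

def conic_of_sphere(f, axis, half_angle):
    # Cone of rays tangent to the sphere: d^T (a a^T - cos^2(t) I) d = 0.
    a = axis / np.linalg.norm(axis)
    M = np.outer(a, a) - np.cos(half_angle) ** 2 * np.eye(3)
    Kinv = np.diag([1.0 / f, 1.0 / f, 1.0])
    return Kinv.T @ M @ Kinv       # image conic C = K^{-T} M K^{-1}

def eig_gap(C, f):
    # Back-project with a trial focal length; for the true f the cone is
    # circular, i.e. two of the three eigenvalues coincide.
    K = np.diag([f, f, 1.0])
    w = np.sort(np.linalg.eigvalsh(K.T @ C @ K))
    return min(abs(w[0] - w[1]), abs(w[1] - w[2]))

# Synthetic conic from a hypothetical camera with f = 800 pixels.
C = conic_of_sphere(800.0, np.array([0.2, 0.1, 1.0]), np.deg2rad(10))
fs = np.linspace(400, 1200, 801)
best = fs[np.argmin([eig_gap(C, f) for f in fs])]
print(best)
```

The abstract's analytical solution replaces this grid search, and comparing focal lengths across multiple sphere images is what allows the principal point itself to be estimated.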
Seeing the World through Your Eyes
The reflective nature of the human eye is an underappreciated source of
information about what the world around us looks like. By imaging the eyes of a
moving person, we can collect multiple views of a scene outside the camera's
direct line of sight through the reflections in the eyes. In this paper, we
reconstruct a 3D scene beyond the camera's line of sight using portrait images
containing eye reflections. This task is challenging due to 1) the difficulty
of accurately estimating eye poses and 2) the entangled appearance of the eye
iris and the scene reflections. Our method jointly refines the cornea poses,
the radiance field depicting the scene, and the observer's eye iris texture. We
further propose a simple regularization prior on the iris texture pattern to
improve reconstruction quality. Through various experiments on synthetic and
real-world captures featuring people with varied eye colors, we demonstrate the
feasibility of our approach to recover 3D scenes using eye reflections.
Comment: CVPR 2024. First two authors contributed equally. Project page:
https://world-from-eyes.github.io
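The geometric primitive behind using eyes as mirrors is reflecting a camera ray off a sphere (a common simplification of the cornea). The sketch below is only that primitive, with an invented unit-sphere test case; it is not the paper's joint refinement of cornea poses, radiance field, and iris texture.

```python
import numpy as np

def reflect_off_sphere(origin, direction, center, radius):
    """Intersect a camera ray with a sphere (toy cornea model) and return
    the reflection point and the reflected ray direction."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius ** 2)
    if disc < 0:
        return None                         # ray misses the sphere
    t = -b - np.sqrt(disc)                  # nearer intersection
    p = origin + t * d
    n = (p - center) / radius               # outward surface normal
    r = d - 2.0 * np.dot(d, n) * n          # mirror reflection
    return p, r

# Ray straight down the z-axis onto a unit sphere at the origin:
# it should bounce straight back along +z.
p, r = reflect_off_sphere(np.array([0.0, 0.0, 5.0]),
                          np.array([0.0, 0.0, -1.0]),
                          np.zeros(3), 1.0)
print(np.round(p, 6), np.round(r, 6))
```

As the person moves, each portrait frame yields a different bundle of such reflected rays, which is what supplies the multiple views of the off-camera scene.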
Self-calibration and motion recovery from silhouettes with two mirrors
LNCS v. 7724-7727 (pts. 1-4) entitled: Computer vision - ACCV 2012: 11th Asian Conference on Computer Vision ... 2012: revised selected papers
This paper addresses the problem of self-calibration and motion recovery from a single snapshot obtained under a setting of two mirrors. The mirrors are able to show five views of an object in one image. The epipoles of the real and virtual cameras are first estimated from the intersections of the bitangent lines between corresponding images, from which the horizon of the camera plane can easily be derived. The imaged circular points and the angle between the mirrors can then be obtained from the equal angles between the bitangent lines, by planar rectification. The silhouettes produced by the reflections can be treated as a special circular motion sequence. With this observation, techniques developed for calibrating circular motion sequences can be exploited to simplify the calibration of a single-view two-mirror system. In contrast to state-of-the-art approaches, only one snapshot is required in this work for self-calibrating a natural camera and recovering the poses of the two mirrors. This is more flexible than previous approaches, which require at least two images. When more than a single image is available, each image can be calibrated independently, and the problem of varying focal length does not complicate the calibration. After calibration, the visual hull of the object can be obtained from the silhouettes. Experimental results show the feasibility and precision of the proposed approach. © 2013 Springer-Verlag.
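The epipole-and-horizon step rests on standard homogeneous line geometry: a line through two points and the intersection of two lines are both cross products of 3-vectors. The sketch below illustrates only that machinery, with invented point coordinates standing in for the bitangent-line data; it is not the paper's full circular-points recovery.

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two homogeneous points.
    return np.cross(p, q)

def intersect(l1, l2):
    # Homogeneous intersection point of two lines.
    return np.cross(l1, l2)

def dehom(x):
    return x[:2] / x[2]

h = lambda p: np.append(np.asarray(p, float), 1.0)

# Simulated "bitangent" lines: each pair passes through one epipole
# (hypothetical epipoles at (4, 2) and (-3, 2)).
e1 = intersect(line_through(h([0, 0]), h([4, 2])),
               line_through(h([1, 0]), h([4, 2])))
e2 = intersect(line_through(h([0, 1]), h([-3, 2])),
               line_through(h([1, 3]), h([-3, 2])))
horizon = np.cross(e1, e2)     # the line through both epipoles
print(np.round(dehom(e1), 6), np.round(dehom(e2), 6))
```

With both epipoles at height 2, the recovered horizon is the line y = 2, as expected; in the paper this horizon then feeds the planar rectification that exposes the imaged circular points.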
Differentiable Display Photometric Stereo
Photometric stereo leverages variations in illumination conditions to
reconstruct per-pixel surface normals. The concept of display photometric
stereo, which employs a conventional monitor as an illumination source, has the
potential to overcome limitations often encountered in bulky and
difficult-to-use conventional setups. In this paper, we introduce
Differentiable Display Photometric Stereo (DDPS), a method designed to achieve
high-fidelity normal reconstruction using an off-the-shelf monitor and camera.
DDPS addresses a critical yet often neglected challenge in photometric stereo:
the optimization of display patterns for enhanced normal reconstruction. We
present a differentiable framework that couples basis-illumination image
formation with a photometric-stereo reconstruction method. This facilitates the
learning of display patterns that lead to high-quality normal reconstruction
through automatic differentiation. Addressing the synthetic-real domain gap
inherent in end-to-end optimization, we propose the use of a real-world
photometric-stereo training dataset composed of 3D-printed objects. Moreover,
to reduce the ill-posed nature of photometric stereo, we exploit the linearly
polarized light emitted from the monitor to optically separate diffuse and
specular reflections in the captured images. We demonstrate that DDPS allows
for learning display patterns optimized for a target configuration and is
robust to initialization. We assess DDPS on 3D-printed objects with
ground-truth normals and diverse real-world objects, validating that DDPS
enables effective photometric-stereo reconstruction.
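The reconstruction step that DDPS differentiates through is, in its classical Lambertian form, a per-pixel least-squares solve: stacked intensities equal the light-direction matrix times the albedo-scaled normal. The sketch below shows that baseline solve on synthetic data; the light directions, albedo, and normal are invented numbers, and DDPS itself learns the display patterns (the rows of L) rather than fixing them as here.

```python
import numpy as np

# Lambertian photometric stereo: I = L @ g, with g = albedo * normal,
# solved per pixel by least squares given known light directions L.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

albedo_true = 0.8
n_true = np.array([0.3, -0.2, 0.933])
n_true = n_true / np.linalg.norm(n_true)
I = albedo_true * L @ n_true          # simulated per-pixel intensities

g, *_ = np.linalg.lstsq(L, I, rcond=None)
albedo = np.linalg.norm(g)            # recovered albedo
normal = g / albedo                   # recovered unit normal
print(np.round(normal, 4), round(float(albedo), 4))
```

Because this solve is differentiable, gradients of a normal-reconstruction loss can flow back through it into the illumination patterns, which is the core of the display-pattern optimization the abstract describes.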
- …