279 research outputs found
Correlation Plenoptic Imaging With Entangled Photons
Plenoptic imaging is a novel optical technique for three-dimensional imaging
in a single shot. It is enabled by the simultaneous measurement of both the
location and the propagation direction of light in a given scene. In the
standard approach, the maximum spatial and angular resolutions are inversely
proportional, and so are the resolution and the maximum achievable depth of
focus of the 3D image. We have recently proposed a method to overcome such
fundamental limits by combining plenoptic imaging with an intriguing
correlation remote-imaging technique: ghost imaging. Here, we theoretically
demonstrate that correlation plenoptic imaging can be effectively achieved by
exploiting the position-momentum entanglement characterizing spontaneous
parametric down-conversion (SPDC) photon pairs. As a proof-of-principle
demonstration, we show that correlation plenoptic imaging with entangled
photons may enable the refocusing of an out-of-focus image at the same depth of
focus as a standard plenoptic device, but without sacrificing
diffraction-limited image resolution.
Comment: 12 pages, 5 figures
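The second-order correlation at the heart of ghost imaging can be illustrated with a toy classical simulation. This sketch uses pseudo-thermal speckle and a hypothetical 1D transmission mask, not the SPDC entangled-photon setup of the paper; all names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1D transmission mask standing in for the object.
obj = np.zeros(64)
obj[20:40] = 1.0

# Pseudo-thermal speckle frames: the reference arm records each pattern,
# while the "bucket" arm records only the total intensity transmitted
# through the object.
frames = rng.random((5000, 64))
bucket = frames @ obj

# Correlating the bucket signal with the reference patterns (a covariance
# estimate) recovers the object's transmission profile up to a scale factor.
ghost = (bucket[:, None] * frames).mean(axis=0) - bucket.mean() * frames.mean(axis=0)
```

Because the speckle pixels are statistically independent, the covariance at reference pixel x is proportional to the object's transmission at x, which is why the reconstruction emerges from correlation alone.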
From Calibration to Large-Scale Structure from Motion with Light Fields
Classic pinhole cameras project the multi-dimensional information of the light flowing through a scene onto a single 2D snapshot. This projection limits the information that can be reconstructed from the 2D acquisition. Plenoptic (or light field) cameras, on the other hand, capture a 4D slice of the plenoptic function, termed the “light field”. These cameras provide both spatial and angular information on the light flowing through a scene; multiple views are captured in a single photographic exposure, facilitating various applications. This thesis is concerned with the modelling of light field (or plenoptic) cameras and the development of structure from motion pipelines using such cameras. Specifically, we develop a geometric model for a multi-focus plenoptic camera, followed by a complete pipeline for the calibration of the suggested model. Given a calibrated light field camera, we then remap the captured light field to a grid of pinhole images. We use these images to obtain metric 3D reconstruction through a novel framework for structure from motion with light fields. Finally, we suggest a linear and efficient approach for absolute pose estimation for light fields.
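The remapping of a calibrated light field to a grid of pinhole images can be sketched in a few lines: fixing the angular index (u, v) of a decoded 4D light field yields one virtual pinhole (sub-aperture) view. The array layout below is an assumption for illustration, not the thesis's actual data format.

```python
import numpy as np

# Hypothetical decoded 4D light field L[u, v, s, t]: (u, v) index the
# angular samples, (s, t) the spatial samples.
rng = np.random.default_rng(0)
L = rng.random((5, 5, 32, 32))

def subaperture_views(light_field):
    """Remap a 4D light field to a grid of pinhole (sub-aperture) images."""
    U, V, _, _ = light_field.shape
    return [[light_field[u, v] for v in range(V)] for u in range(U)]

views = subaperture_views(L)
print(len(views), len(views[0]), views[2][2].shape)  # 5 5 (32, 32)
```

Each of the 25 views here is a conventional 2D image, which is what makes standard multi-view structure-from-motion machinery applicable downstream.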
Plenoptic Signal Processing for Robust Vision in Field Robotics
This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented dealing with monocular time-series and light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
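The wide depth-of-field filtering described above builds on the standard shift-and-sum refocusing of a 4D light field. A minimal integer-shift sketch of that building block follows (this is the generic refocusing operation, not the thesis's frequency-domain filter; the array layout is an assumption):

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing: translate each sub-aperture view in
    proportion to its angular offset, then average. Integer-pixel shifts
    via np.roll keep the sketch dependency-free."""
    U, V, S, T = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

rng = np.random.default_rng(0)
lf = rng.random((3, 3, 8, 8))
focused = refocus(lf, 0.0)  # alpha = 0 reduces to averaging all views
```

Sweeping `alpha` moves the synthetic focal plane; a depth-selective filter focuses at one `alpha`, whereas a wide depth-of-field filter accepts a range of them.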
Efficient and Accurate Disparity Estimation from MLA-Based Plenoptic Cameras
This manuscript focuses on the processing of images from microlens-array based plenoptic cameras. These cameras enable the capture of the light field in a single shot, recording a greater amount of information than conventional cameras and enabling a whole new set of applications. However, the enhanced information introduces additional challenges and results in higher computational effort. For one, the image is composed of thousands of micro-lens images, making it an unusual case for standard image processing algorithms. Secondly, the disparity information has to be estimated from those micro-images to create a conventional image and a three-dimensional representation. Therefore, the work in this thesis is devoted to analysing and proposing methodologies to deal with plenoptic images. A full framework for plenoptic cameras has been built, including the contributions described in this thesis: a blur-aware calibration method to model a plenoptic camera, an optimization method to accurately select the best micro-lens combinations, and an overview of the different types of plenoptic cameras and their representations. Datasets consisting of both real and synthetic images have been used to create a benchmark for different disparity estimation algorithms and to inspect the behaviour of disparity under different compression rates. Finally, a robust depth estimation approach has been developed for light field microscopy and images of biological samples.
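At its simplest, estimating disparity between neighbouring micro-images is a 1D correspondence search. The sketch below uses sum-of-squared-differences block matching on synthetic rows; it is purely illustrative and not the thesis's actual estimation method.

```python
import numpy as np

def disparity_ssd(ref, tgt, max_d):
    """Estimate the integer disparity between two micro-image rows by
    minimizing the sum of squared differences over candidate shifts."""
    n = len(ref)
    errors = [np.sum((ref[max_d:n - max_d] - tgt[max_d + d:n - max_d + d]) ** 2)
              for d in range(-max_d, max_d + 1)]
    return int(np.argmin(errors)) - max_d

row = np.arange(100.0)        # synthetic intensity profile
shifted = np.roll(row, 3)     # neighbouring view displaced by 3 pixels
d = disparity_ssd(row, shifted, max_d=5)  # recovers 3
```

Real micro-images complicate this with vignetting, defocus blur and tiny baselines, which is precisely why the thesis invests in blur-aware calibration and careful micro-lens selection.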
The Fresnel Zone Light Field Spectral Imager
This thesis provides a computational model and the first experimental demonstration of a Fresnel zone light field spectral imaging (FZLFSI) system. This type of system couples an axial dispersion binary diffractive optic with light field (plenoptic) camera designs, providing a snapshot spectral imaging capability. A computational model of the system was developed based on wave optics methods using Fresnel propagation. It was validated experimentally and provides an excellent demonstration of system capabilities. The experimentally demonstrated system was able to synthetically refocus monochromatic images across a bandwidth greater than 100 nm. Furthermore, the demonstrated system was modeled to have a full range of approximately 400 to 800 nm with close to a 15 nm spectral sampling interval. While images of multiple diffraction orders were observed in the measured light fields, they did not degrade the system's performance. Experimental demonstration also showed the capability to resolve between and process two different spectral signatures from a single snapshot. For future FZLFSI designs, the study noted there is a fundamental design trade-off, where improved spectral and spatial resolution reduces the spectral range of the system.
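Fresnel propagation of a sampled complex field is commonly implemented with an FFT-based transfer function. The sketch below is that generic wave-optics building block under the Fresnel approximation, not the dissertation's actual FZLFSI model; parameter values are illustrative assumptions.

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the Fresnel
    transfer function H = exp(-i * pi * lambda * z * (fx^2 + fy^2))."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies for the grid
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A square aperture propagated 10 cm at 550 nm with 10 um sampling.
aperture = np.zeros((128, 128), dtype=complex)
aperture[48:80, 48:80] = 1.0
out = fresnel_propagate(aperture, 550e-9, 10e-6, 0.10)
```

Because |H| = 1, the transfer function is unitary: propagation redistributes the field but conserves its total energy, a useful sanity check for any such model.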
Leveraging blur information for plenoptic camera calibration
This paper presents a novel calibration algorithm for plenoptic cameras,
especially the multi-focus configuration in which several types of micro-lenses
are used, relying on raw images only. Current calibration methods rely on
simplified projection models, use features from reconstructed images, or
require separate calibrations for each type of micro-lens. In the multi-focus
configuration, the
same part of a scene will demonstrate different amounts of blur according to
the micro-lens focal length. Usually, only micro-images with the smallest
amount of blur are used. In order to exploit all available data, we propose to
explicitly model the defocus blur in a new camera model with the help of our
newly introduced Blur Aware Plenoptic (BAP) feature. First, it is used in a
pre-calibration step that retrieves initial camera parameters, and second, to
express a new cost function to be minimized in our single optimization process.
Third, it is exploited to calibrate the relative blur between micro-images. It
links the geometric blur, i.e., the blur circle, to the physical blur, i.e.,
the point spread function. Finally, we use the resulting blur profile to
characterize the camera's depth of field. Quantitative evaluations in a
controlled environment on real-world data demonstrate the effectiveness of our
calibrations.
Comment: arXiv admin note: text overlap with arXiv:2004.0774
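The geometric blur the paper links to the point spread function follows from thin-lens optics. A minimal sketch of the blur circle diameter is given below; this is the generic thin-lens relation, not the paper's BAP feature model, and the parameter names are assumptions.

```python
def blur_circle_diameter(f, aperture, s_focus, s_obj):
    """Thin-lens blur circle: diameter of the defocus disc on the sensor
    for an object at distance s_obj when a lens of focal length f and
    aperture diameter `aperture` is focused at s_focus.
    Image distances follow the thin-lens equation 1/s + 1/i = 1/f."""
    i_focus = 1.0 / (1.0 / f - 1.0 / s_focus)  # sensor plane position
    i_obj = 1.0 / (1.0 / f - 1.0 / s_obj)      # where s_obj actually images
    # The defocused cone of light is cut by the sensor plane at i_focus.
    return aperture * abs(i_obj - i_focus) / i_obj

# In-focus objects produce no blur; defocus grows as the object departs
# from the focus distance.
c_near = blur_circle_diameter(0.05, 0.01, 2.0, 1.0)
c_mid = blur_circle_diameter(0.05, 0.01, 2.0, 1.5)
```

In a multi-focus plenoptic camera each micro-lens type has its own focus distance, so the same scene point yields a different blur disc per type, which is the observation the calibration exploits.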