Blur aware metric depth estimation with multi-focus plenoptic cameras
While a traditional camera captures only one point of view of a scene, a
plenoptic or light-field camera is able to capture spatial and angular
information in a single snapshot, enabling depth estimation from a single
acquisition. In this paper, we present a new metric depth estimation algorithm
using only raw images from a multi-focus plenoptic camera. The proposed
approach is especially suited for the multi-focus configuration where several
micro-lenses with different focal lengths are used. The main goal of our blur
aware depth estimation (BLADE) approach is to improve disparity estimation for
defocus stereo images by integrating both correspondence and defocus cues. We
thus leverage blur information where it was previously considered a drawback.
We explicitly derive an inverse projection model including the defocus blur
providing depth estimates up to a scale factor. A method to calibrate the
inverse model is then proposed. We also take depth scaling into account to
achieve precise and accurate metric depth estimates. Our results show that
introducing defocus cues improves depth estimation. We demonstrate the
effectiveness of our framework and depth scaling calibration on relative depth
estimation setups and on real-world complex 3D scenes with ground truth
acquired with a 3D lidar scanner. Comment: 21 pages, 12 figures, 3 tables
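The defocus cue exploited above ultimately traces back to the classical thin-lens relation between blur-circle size and object distance. As a rough, hedged illustration only (the paper derives a full inverse projection model for the multi-focus plenoptic geometry, which is not reproduced here; the focal length, aperture and focus distance below are placeholder parameters):

```python
# Hedged illustration: classical thin-lens blur-circle relation (not the paper's
# multi-focus plenoptic projection model). All parameters are placeholders.
def blur_circle_diameter(d, f, aperture, d_focus):
    """Blur-circle diameter on the sensor for an object at distance d.

    f        -- lens focal length (same units as d)
    aperture -- entrance-pupil diameter
    d_focus  -- distance the lens is focused at
    """
    return aperture * (f / (d_focus - f)) * abs(d - d_focus) / d

# e.g. blur_circle_diameter(1000, 50, 25, 2000) is roughly 0.64 (mm)
```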
Rational-operator-based depth-from-defocus approach to scene reconstruction
This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach are considered: one using rational operators (ROs) based on the Gaussian point spread function (PSF), and the other based on a generalized Gaussian PSF. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results on real scenes show that both variants outperform existing RO-based methods.
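As context for how DfD relates blur to depth: under a Gaussian PSF model, the sharper of two differently focused images, convolved with an extra Gaussian, should match the blurrier one, and the best-matching relative blur indexes depth. A minimal brute-force sketch of that idea (the rational operators themselves are precomputed filter ratios and are not reproduced here):

```python
# Hedged sketch of the classical two-image DfD idea the rational operators build
# on: under a Gaussian PSF, blurring the sharper image until it matches the
# blurrier one recovers the relative blur, which indexes depth.
import numpy as np
from scipy.ndimage import gaussian_filter

def relative_blur(patch_near, patch_far, sigmas=np.linspace(0.1, 5.0, 50)):
    """Brute-force relative blur between two co-registered image patches."""
    errors = [np.mean((gaussian_filter(patch_near.astype(float), s) - patch_far) ** 2)
              for s in sigmas]
    return sigmas[int(np.argmin(errors))]
```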
Highlighting objects of interest in an image by integrating saliency and depth
Stereo images have been captured primarily for 3D reconstruction in the past.
However, the depth information acquired from stereo can also be used along with
saliency to highlight certain objects in a scene. This approach can be used to
make still images more interesting to look at, and highlight objects of
interest in the scene. We introduce this novel direction in this paper, and
discuss the theoretical framework behind the approach. Even though we use depth
from stereo in this work, our approach is applicable to depth data acquired
from any sensor modality. Experimental results on both indoor and outdoor
scenes demonstrate the benefits of our algorithm.
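As an informal illustration of the idea (not the authors' formulation), a saliency map and a depth map can be fused into a per-pixel weight that keeps the object of interest sharp while softening the rest of the scene:

```python
# Hedged illustration: fuse saliency and depth into a per-pixel weight that keeps
# the object of interest sharp and softens the rest. depth_ref and the
# exponential weighting are assumptions, not the paper's method.
import numpy as np
from scipy.ndimage import gaussian_filter

def highlight(gray, saliency, depth, depth_ref, blur_sigma=5.0):
    """gray: HxW float image; saliency in [0, 1]; depth, depth_ref in metres."""
    closeness = np.exp(-np.abs(depth - depth_ref))   # favour pixels near the target depth
    weight = saliency * closeness                     # emphasis on salient, nearby pixels
    background = gaussian_filter(gray, blur_sigma)    # softened version of the scene
    return weight * gray + (1.0 - weight) * background
```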
Focusing on out-of-focus: assessing defocus estimation algorithms for the benefit of automated image masking
Acquiring photographs as input for an image-based modelling pipeline is less trivial than often assumed. Photographs should be correctly exposed, cover the subject sufficiently from all possible angles, have the required spatial resolution, be devoid of any motion blur, exhibit accurate focus and feature an adequate depth of field. The last four characteristics all determine the "sharpness" of an image, and the photogrammetric, computer vision and hybrid photogrammetric computer vision communities all assume that the object to be modelled is depicted "acceptably" sharp throughout the whole image collection. Although none of these three fields has ever properly quantified "acceptably sharp", it is more or less standard practice to mask those image portions that appear to be unsharp due to the limited depth of field around the plane of focus (whether this means blurry object parts or completely out-of-focus backgrounds). This paper assesses how well- or ill-suited defocus-estimating algorithms are for automatically masking a series of photographs, since this could speed up modelling pipelines with many hundreds or thousands of photographs. To that end, the paper uses five different real-world datasets and compares the output of three state-of-the-art edge-based defocus estimators. Afterwards, critical comments and plans for the future finalise this paper.
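A hedged sketch of what such automated masking can look like in practice, using local Laplacian energy as a generic stand-in for the edge-based defocus estimators compared in the paper (window size and threshold are illustrative assumptions):

```python
# Hedged sketch: local Laplacian energy as a generic sharpness proxy; thresholding
# it yields an automatic in-focus mask. Not one of the estimators benchmarked in
# the paper.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_mask(gray, window=15, threshold=1e-3):
    """Boolean mask of locally sharp (in-focus) pixels of a grayscale image."""
    sharpness = uniform_filter(laplace(gray.astype(float)) ** 2, size=window)
    return sharpness > threshold
```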
The Application of Preconditioned Alternating Direction Method of Multipliers in Depth from Focal Stack
The post-capture refocusing effect in smartphone cameras is achievable by using
focal stacks. However, the accuracy of this effect depends entirely on how the
depth layers in the stack are combined. The accuracy of the extended
depth-of-field effect in this application can be improved significantly by
computing an accurate depth map, which has been an open issue for decades. To
tackle this issue, this paper proposes a framework based on the
Preconditioned Alternating Direction Method of Multipliers (PADMM) for depth
from focal stack and synthetic defocus applications. In addition to its
ability to provide high structural accuracy and occlusion handling, the
optimization function of the proposed method converges faster and to better
solutions than state-of-the-art methods. The evaluation has been done on 21
sets of focal stacks, and the optimization function has been compared against
five other methods. Preliminary results indicate that the proposed method
performs better in terms of structural accuracy and optimization than
the current state-of-the-art methods. Comment: 15 pages, 8 figures
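For context, a common initialization for depth from a focal stack is a per-pixel focus measure maximized over the stack; the PADMM refinement the paper builds on top of such data terms is not reproduced in this hedged sketch:

```python
# Hedged sketch: per-pixel focus measure maximized over the focal stack as a
# depth initialization. The paper's PADMM optimization is not reproduced here.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def initial_depth_from_stack(stack, window=9):
    """stack: sequence of co-registered grayscale slices, ordered by focus setting."""
    focus = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, size=window)
                      for s in stack])
    return np.argmax(focus, axis=0)   # index of the sharpest slice per pixel
```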
Leveraging blur information for plenoptic camera calibration
This paper presents a novel calibration algorithm for plenoptic cameras,
especially the multi-focus configuration where several types of micro-lenses
are used, relying on raw images only. Current calibration methods rely on
simplified projection models, use features from reconstructed images, or
require separate calibrations for each type of micro-lens. In the multi-focus
configuration, the
same part of a scene will demonstrate different amounts of blur according to
the micro-lens focal length. Usually, only micro-images with the smallest
amount of blur are used. In order to exploit all available data, we propose to
explicitly model the defocus blur in a new camera model with the help of our
newly introduced Blur Aware Plenoptic (BAP) feature. First, it is used in a
pre-calibration step that retrieves initial camera parameters, and second, to
express a new cost function to be minimized in our single optimization process.
Third, it is exploited to calibrate the relative blur between micro-images. It
links the geometric blur, i.e., the blur circle, to the physical blur, i.e.,
the point spread function. Finally, we use the resulting blur profile to
characterize the camera's depth of field. Quantitative evaluations in a
controlled environment on real-world data demonstrate the effectiveness of our
calibrations. Comment: arXiv admin note: text overlap with arXiv:2004.0774
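The relative-blur calibration mentioned above links the geometric blur (the blur circle) to the physical blur (the PSF spread). A minimal, hedged sketch assuming a simple proportionality sigma ≈ kappa · rho, fitted by least squares (an illustration only, not the paper's BAP-feature pipeline):

```python
# Hedged sketch: fit the proportionality sigma ~ kappa * rho between the
# geometric blur-circle radius rho and the Gaussian PSF spread sigma.
import numpy as np

def fit_blur_scale(rho, sigma):
    """Least-squares kappa such that sigma is approximately kappa * rho."""
    rho = np.asarray(rho, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return float(np.dot(rho, sigma) / np.dot(rho, rho))
```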
Extended depth-of-field imaging and ranging in a snapshot
Traditional approaches to imaging require that an increase in depth of field is associated with a reduction in
numerical aperture, and hence with a reduction in resolution and optical throughput. In their seminal
work, Dowski and Cathey reported how the asymmetric point-spread function generated by a cubic-phase
aberration encodes the detected image such that digital recovery can yield images with an extended depth of
field without sacrificing resolution [Appl. Opt. 34, 1859 (1995)]. Unfortunately, recovered images are
generally visibly degraded by artifacts arising from subtle variations in point-spread functions with defocus.
We report a technique that determines the spatially variant translation of image components that
accompanies defocus, and thereby estimates spatially variant defocus. This in turn enables recovery
of artifact-free, extended depth-of-field images together with a two-dimensional defocus and range map
of the imaged scene. We demonstrate the technique for high-quality macroscopic and microscopic imaging
of scenes presenting an extended defocus of up to two waves, and for generation of defocus maps with an
uncertainty of 0.036 waves.
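For reference, the cubic-phase pupil underlying this family of techniques is P(u, v) = exp(i·alpha·(u³ + v³)); a small numerical sketch of its defocused PSF follows (grid size, alpha and the defocus term are illustrative values only):

```python
# Hedged sketch: the cubic-phase pupil of Dowski and Cathey and its incoherent
# PSF computed as |FFT(pupil)|^2. Grid size, alpha and defocus are illustrative.
import numpy as np

def cubic_phase_psf(n=256, alpha=20.0, defocus=0.0):
    """PSF of a cubic-phase-coded circular aperture with quadratic defocus (radians)."""
    u = np.linspace(-1.0, 1.0, n)
    U, V = np.meshgrid(u, u)
    aperture = (U**2 + V**2 <= 1.0).astype(float)                # circular pupil
    phase = alpha * (U**3 + V**3) + defocus * (U**2 + V**2)      # cubic mask + defocus
    field = aperture * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()
```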