Learning Wavefront Coding for Extended Depth of Field Imaging
Depth of field is an important property of imaging systems that strongly affects
the quality of the acquired spatial information. Extended depth of field (EDoF)
imaging is a challenging ill-posed problem and has been extensively addressed
in the literature. We propose a computational imaging approach for EDoF, where
we employ wavefront coding via a diffractive optical element (DOE) and we
achieve deblurring through a convolutional neural network. Thanks to the
end-to-end differentiable modeling of optical image formation and computational
post-processing, we jointly optimize the optical design, i.e., DOE, and the
deblurring through standard gradient descent methods. Based on the properties
of the underlying refractive lens and the desired EDoF range, we provide an
analytical expression for the search space of the DOE, which is instrumental in
the convergence of the end-to-end network. We achieve superior EDoF imaging
performance compared to the state of the art, demonstrating results with
minimal artifacts in various scenarios, including deep 3D scenes and broadband
imaging.
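The joint optimization of the optics and the deblurring stage can be illustrated with a deliberately tiny sketch. Everything here is a hypothetical stand-in: a single scalar `phi` replaces the DOE phase profile, a 3-tap linear filter `w` replaces the CNN, the scene is 1-D, and gradients are numerical rather than autodiff. Only the idea of descending on both the optical and the computational parameters at once comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.normal(size=256)          # ground-truth 1-D "scene" (toy)

def psf(phi):
    """Blur kernel controlled by a single optical parameter (hypothetical DOE stand-in)."""
    k = np.array([phi**2, 1.0, phi**2])
    return k / k.sum()

def loss(params):
    phi, w = params[0], params[1:]
    captured = np.convolve(scene, psf(phi), mode="same")   # differentiable forward optics
    restored = np.convolve(captured, w, mode="same")       # "learned" deblurring filter
    return np.mean((restored - scene) ** 2)

# Start with a strongly blurring "DOE" and an identity deblurring filter.
params = np.array([1.0, 0.0, 1.0, 0.0])
loss0 = loss(params)

lr, eps = 0.5, 1e-5
for _ in range(300):
    grad = np.zeros_like(params)
    for i in range(len(params)):       # central-difference gradient (autodiff in practice)
        d = np.zeros_like(params); d[i] = eps
        grad[i] = (loss(params + d) - loss(params - d)) / (2 * eps)
    params -= lr * grad                # joint update of optics AND post-processing

loss1 = loss(params)
```

After a few hundred steps the reconstruction error drops well below its initial value, because the descent is free to both weaken the optical blur and shape the restoration filter, which is the co-design intuition of the paper.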
Programmable Aperture Photography: An investigation into applications and methods
The fields of digital image processing (DIP) and computational photography are ever-growing, with a new focus on coded aperture imaging and its real-world applications. Research has shown that coded apertures outperform traditional circular apertures for various tasks. A variety of coded aperture patterns have been proposed and developed over the years for applications such as defocus deblurring, depth estimation and light field acquisition. Traditional coded aperture masks are constructed from static materials such as cardboard and cannot be altered once their shapes have been defined. These masks are then physically inserted into the aperture plane of a camera-lens system, which makes swapping between different patterned masks difficult. This is undesirable, as the optimal aperture pattern differs with the application, scene content and imaging conditions, and would therefore need to be changed quickly and frequently. This dissertation proposes the design and development of a programmable-aperture photography camera. The camera uses a liquid crystal display (LCD) as a programmable aperture, allowing the aperture shape to be changed at a relatively high frame rate. All the benefits and drawbacks of the camera are evaluated. First, deblurring and depth estimation are tested using existing and optimised aperture patterns on the LCD. A light field is then captured and used to synthesise virtual photographs and perform stereo vision. Thereafter, exposure correction is performed on a scene under various degrees of illumination. The aperture pattern optimised online based on scene content outperformed generic coded apertures for defocus deblurring. The programmable aperture also performed well for depth estimation using both an optimised pattern and existing coded apertures. Using the captured light field, refocused photographs were constructed and stereo vision was performed to accurately calculate depth.
Finally, the aperture could adjust to the different levels of illumination in the room to provide the correct exposure for image capture. Thus the camera provided all the advantages of traditional coded aperture imaging systems without the disadvantage of a static mask fixed in the aperture plane.
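The abstract's claim that coded apertures beat plain open apertures for defocus deblurring can be sketched in 1-D. The masks, code, and parameters below are illustrative assumptions, not the dissertation's optimized patterns: an open aperture gives a box blur whose spectrum has deep nulls, while a broadband binary code (here derived from the Barker-13 sequence) keeps all frequencies alive, so a regularized (Wiener-style) inverse recovers the scene better.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.normal(size=512)

def capture_and_restore(mask, noise_sigma=0.01, reg=1e-3):
    """Blur with the normalized aperture mask, add noise, then Wiener-restore."""
    k = mask / mask.sum()
    K = np.fft.fft(k, n=scene.size)
    blurred = np.real(np.fft.ifft(np.fft.fft(scene) * K))
    blurred += rng.normal(scale=noise_sigma, size=scene.size)
    W = np.conj(K) / (np.abs(K) ** 2 + reg)        # regularized inverse filter
    restored = np.real(np.fft.ifft(np.fft.fft(blurred) * W))
    return np.mean((restored - scene) ** 2)

open_ap = np.ones(13)                                        # open aperture: box blur
coded = np.array([1,1,1,1,1,0,0,1,1,0,1,0,1], float)         # Barker-13-derived code (illustrative)
err_open = capture_and_restore(open_ap)
err_coded = capture_and_restore(coded)
```

The box blur's spectral nulls erase scene frequencies that no deconvolution can bring back, so `err_coded` comes out below `err_open`; on an LCD aperture, switching between such patterns is a matter of drawing a new frame.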
Depth and Deblurring from a Spectrally-varying Depth-of-Field
We propose modifying the aperture of a conventional color camera so that the effective aperture size for one color channel is smaller than that for the other two. This produces an image where different color channels have different depths of field, and from this we can computationally recover scene depth, reconstruct an all-focus image and achieve synthetic re-focusing, all from a single shot. These capabilities are enabled by a spatio-spectral image model that encodes the statistical relationship between gradient profiles across color channels. This approach substantially improves depth accuracy over alternative single-shot coded-aperture designs, and since it avoids introducing additional spatial distortions and is light efficient, it allows high-quality deblurring and lower exposure times. We demonstrate these benefits with comparisons on synthetic data, as well as results on images captured with a prototype lens.
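The core cue — that the per-channel blur difference encodes depth — can be sketched with a toy relative-blur search. The Gaussian blur model, the factor-of-two aperture ratio, and the grid search are simplifying assumptions; the paper's actual method uses a spatio-spectral gradient-profile model rather than this direct matching.

```python
import numpy as np

rng = np.random.default_rng(2)
scene = rng.normal(size=400)

def gauss_blur(x, sigma):
    """Gaussian stand-in for defocus (real optics give lens-dependent PSFs)."""
    t = np.arange(-25, 26)
    k = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(x, k / k.sum(), mode="same")

sigma_red = 3.0                              # unknown depth-dependent blur (full aperture)
green = gauss_blur(scene, 0.5 * sigma_red)   # stopped-down channel: half the blur
red = gauss_blur(scene, sigma_red)           # full-aperture channel

# Relative-blur search: the extra blur d that maps green onto red satisfies
# sigma_red^2 = (0.5*sigma_red)^2 + d^2, so d directly encodes the defocus.
cands = np.arange(0.5, 6.01, 0.1)
errs = [np.mean((gauss_blur(green, d) - red) ** 2) for d in cands]
d_hat = cands[np.argmin(errs)]
sigma_hat = d_hat / np.sqrt(1 - 0.25)        # recovered full-aperture blur (depth proxy)
```

Because Gaussian blurs compose in quadrature, the best-matching extra blur recovers the unknown defocus scale; once that per-region scale is known, deblurring the wide-aperture channels gives the all-focus image.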
Near-invariant blur for depth and 2D motion via time-varying light field analysis
Recently, several camera designs have been proposed for either making defocus blur invariant to scene depth or making motion blur invariant to object motion. The benefit of such invariant capture is that no depth or motion estimation is required to remove the resultant spatially uniform blur. So far, the techniques have been studied separately for defocus and motion blur, and object motion has been assumed 1D (e.g., horizontal). This article explores a more general capture method that makes both defocus blur and motion blur nearly invariant to scene depth and in-plane 2D object motion. We formulate the problem as capturing a time-varying light field through a time-varying light field modulator at the lens aperture, and perform 5D (4D light field + 1D time) analysis of all the existing computational cameras for defocus/motion-only deblurring and their hybrids. This leads to a surprising conclusion that focus sweep, previously known as a depth-invariant capture method that moves the plane of focus through a range of scene depth during exposure, is near-optimal both in terms of depth and 2D motion invariance and in terms of high-frequency preservation for certain combinations of depth and motion ranges. Using our prototype camera, we demonstrate joint defocus and motion deblurring for moving scenes with depth variation.
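The depth-invariance property of focus sweep is easy to demonstrate numerically for the defocus part alone (the article's full 5D light-field analysis, covering motion as well, is far more general). All model details below — the Gaussian PSF, the linear width-vs-defocus slope, the sweep range — are illustrative assumptions.

```python
import numpy as np

def defocus_psf(depth, focus, slope=0.3, N=61):
    # Defocus PSF (Gaussian stand-in) whose width grows with the distance
    # between scene depth and the instantaneous focus setting.
    sigma = slope * abs(depth - focus) + 0.05
    t = np.arange(N) - N // 2
    k = np.exp(-t**2 / (2 * sigma**2))
    return k / k.sum()

def swept_psf(depth, sweep=np.linspace(0.0, 10.0, 101)):
    # Focus sweep: accumulate the instantaneous PSF while the focal plane
    # moves through the whole depth range during a single exposure.
    k = sum(defocus_psf(depth, f) for f in sweep)
    return k / k.sum()

near, far = 2.0, 6.0
# Static focus at 5.0: the two depths see very different blurs.
diff_static = np.abs(defocus_psf(near, 5.0) - defocus_psf(far, 5.0)).sum()
# After the sweep, the effective PSFs at the two depths nearly coincide.
diff_swept = np.abs(swept_psf(near) - swept_psf(far)).sum()
```

Since the swept PSF is nearly the same at every depth, a single deconvolution kernel removes the blur for the whole scene without any depth estimation, which is exactly the appeal of invariant capture.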
Coded aperture imaging
This thesis studies the coded aperture camera, a device consisting of a conventional
camera with a modified aperture mask, that enables the recovery
of both depth map and all-in-focus image from a single 2D input image.
Key contributions of this work are the modeling of the statistics of natural
images and the design of efficient blur identification methods in a Bayesian
framework. Two cases are distinguished: 1) when the aperture can be decomposed
into a small set of identical holes, and 2) when the aperture has a
more general configuration. In the first case, the formulation of the problem
incorporates priors about the statistical variation of the texture to avoid
ambiguities in the solution. This allows us to bypass the recovery of the sharp
image and concentrate only on estimating depth. In the second case, the
depth reconstruction is addressed via convolutions with a bank of linear
filters. Key advantages over competing methods are the higher numerical
stability and the ability to deal with large blur. The all-in-focus image can
then be recovered by using a deconvolution step with the estimated depth
map. Furthermore, for the purpose of depth estimation alone, the proposed
algorithm does not require information about the mask in use. The
comparison with existing algorithms in the literature shows that the proposed
methods achieve state-of-the-art performance. This solution is also
extended for the first time to images affected by both defocus and motion
blur and, finally, to video sequences with moving and deformable objects.
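The flavor of blur identification by filtering can be sketched with a toy example. This is not the thesis's Bayesian method — it is a simpler spectral-null test, limited to a 1-D box blur whose kernel nulls are known analytically, and all sizes and parameters below are assumptions. Each candidate blur scale defines a set of "annihilating" frequencies; the candidate whose nulls best silence the captured spectrum identifies the blur.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 720                                 # chosen so every candidate size divides N
scene = rng.normal(size=N)
true_size = 9                           # unknown blur extent to identify

def box_dft(c):
    """Spectrum of a length-c box blur kernel, zero-padded to N."""
    k = np.zeros(N); k[:c] = 1.0 / c
    return np.fft.fft(k)

# Capture: circular box blur plus a little sensor noise (in the Fourier domain).
captured_hat = np.fft.fft(scene) * box_dft(true_size)
captured_hat += np.fft.fft(rng.normal(scale=0.01, size=N))

def null_score(c):
    # A size-c box blur has exact spectral nulls at multiples of N/c;
    # if c is the true size, only noise survives at those frequencies.
    nulls = np.arange(N // c, N, N // c)
    return np.mean(np.abs(captured_hat[nulls]))

candidates = [5, 6, 8, 9, 10, 12, 15]
est = min(candidates, key=null_score)
```

Scoring every candidate amounts to convolving the image with a small bank of linear filters and comparing responses, and, as in the thesis's setting, the procedure needs no sharp-image recovery to pin down the blur (and hence a depth proxy).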