Extended depth-of-field imaging and ranging in a snapshot
Traditional approaches to imaging require that an increase in depth of field is accompanied by a reduction in numerical aperture, and hence by a reduction in resolution and optical throughput. In their seminal work, Dowski and Cathey reported how the asymmetric point-spread function generated by a cubic-phase aberration encodes the detected image such that digital recovery can yield images with an extended depth of field without sacrificing resolution [Appl. Opt. 34, 1859 (1995)]. Unfortunately, recovered images are generally visibly degraded by artifacts arising from subtle variations of the point-spread function with defocus. We report a technique that determines the spatially variant translation of image components that accompanies defocus, and from it the spatially variant defocus itself. This in turn enables recovery of artifact-free, extended depth-of-field images together with a two-dimensional defocus and range map of the imaged scene. We demonstrate the technique for high-quality macroscopic and microscopic imaging of scenes presenting an extended defocus of up to two waves, and for generation of defocus maps with an uncertainty of 0.036 waves.
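The cubic-phase encoding can be sketched numerically. Below is a minimal, illustrative pure-Python simulation (the 16x16 grid, the phase strength `ALPHA`, and the normalisation are assumptions, not the authors' parameters): it builds a cubic-phase pupil, computes the PSF as the squared magnitude of its Fourier transform, and checks the characteristic asymmetry that makes the encoding work.

```python
import cmath
import math

N = 16        # toy pupil grid size (illustrative)
ALPHA = 3.0   # cubic-phase strength in waves (illustrative)

# Cubic-phase pupil: P(x, y) = exp(i * 2*pi * alpha * (x^3 + y^3)) on [-1, 1]^2
coords = [2.0 * k / (N - 1) - 1.0 for k in range(N)]
P = [[cmath.exp(2j * math.pi * ALPHA * (x ** 3 + y ** 3)) for x in coords]
     for y in coords]

# PSF = |DFT(P)|^2 (tiny direct 2-D DFT; O(N^4) is fine at this size)
F = [[sum(P[v][u] * cmath.exp(-2j * math.pi * (u * p + v * q) / N)
          for u in range(N) for v in range(N))
      for p in range(N)] for q in range(N)]
psf = [[abs(F[q][p]) ** 2 for p in range(N)] for q in range(N)]
total = sum(map(sum, psf))
psf = [[v / total for v in row] for row in psf]

# The odd cubic phase yields an asymmetric PSF: negating both frequency axes
# does not map the PSF onto itself.
flipped = [[psf[(N - q) % N][(N - p) % N] for p in range(N)] for q in range(N)]
asym = sum(abs(psf[q][p] - flipped[q][p])
           for p in range(N) for q in range(N))
```

A rotationally symmetric pupil would give `asym` near zero; the cubic term breaks that symmetry, which is exactly what lets a decoder distinguish defocus states.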
Generation of All-in-Focus Images by Noise-Robust Selective Fusion of Limited Depth-of-Field Images
The limited depth of field of some cameras prevents them from capturing perfectly focused images when the imaged scene covers a large distance range. To compensate for this, image fusion has been exploited to combine images captured with different camera settings, yielding a higher-quality all-in-focus image. Since most current approaches to image fusion rely on maximizing the spatial frequency content of the composed image, the fusion process is sensitive to noise. In this paper, a new algorithm for computing the all-in-focus image from a sequence of images captured with a low depth-of-field camera is presented. The proposed approach adaptively fuses the different frames of the focus sequence in order to reduce noise while preserving image features. The algorithm consists of three stages: 1) focus measure; 2) selectivity measure; and 3) image fusion. An extensive set of experimental tests has been carried out to compare the proposed algorithm with state-of-the-art all-in-focus methods using both synthetic and real sequences. The results show the advantages of the proposed scheme even at high noise levels.
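For reference, here is the classic max-selection fusion baseline that this kind of work improves on with a noise-robust selectivity stage. This is a minimal sketch, not the paper's algorithm; images are plain nested lists and all names are illustrative.

```python
def abs_laplacian(img, y, x):
    # discrete Laplacian magnitude as a per-pixel focus measure
    return abs(4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
               - img[y][x - 1] - img[y][x + 1])

def focus_measure(img, y, x, r=1):
    # sum the Laplacian over a small window for a little robustness
    return sum(abs_laplacian(img, j, i)
               for j in range(y - r, y + r + 1)
               for i in range(x - r, x + r + 1))

def fuse(frames):
    # per pixel, copy the value from the frame with the highest focus measure
    h, w = len(frames[0]), len(frames[0][0])
    out = [row[:] for row in frames[0]]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            best = max(frames, key=lambda f: focus_measure(f, y, x))
            out[y][x] = best[y][x]
    return out

flat = [[0.5] * 8 for _ in range(8)]                          # defocused frame
sharp = [[(x + y) % 2 for x in range(8)] for y in range(8)]   # in-focus texture
fused = fuse([flat, sharp])
```

Because the selection rule maximizes local spatial frequency, any high-frequency noise in a defocused frame can win the per-pixel vote, which is precisely the sensitivity the paper's adaptive selectivity measure addresses.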
Modeling and applications of the focus cue in conventional digital cameras
The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A deep review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin lens model has several limitations for solving different focus-related problems in computer vision. In order to overcome these limitations, the focus profile model is introduced as an alternative to classic concepts such as the near and far limits of the depth of field. The new concepts introduced in this dissertation are exploited for solving diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
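The classic thin-lens concepts the thesis takes as its starting point are easy to state in code. A quick sketch of the near and far depth-of-field limits via the hyperfocal distance (the numeric example values are illustrative assumptions):

```python
def dof_limits(f, N, c, s):
    """Thin-lens depth-of-field limits.

    f: focal length, N: f-number, c: acceptable circle of confusion,
    s: subject distance (all in metres).
    """
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float('inf')
    return near, far

# e.g. a 50 mm f/2.8 lens, 30 um circle of confusion, subject at 2 m
near, far = dof_limits(0.05, 2.8, 30e-6, 2.0)
```

Note how the model reduces focus to a single in-focus interval `[near, far]`; a focus profile instead describes how sharpness varies continuously with focus position, which is what makes it more useful for the estimation problems listed above.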
Accurate depth from defocus estimation with video-rate implementation
The science of measuring depth from images at video rate using defocus has been investigated. The method required two differently focussed images acquired from a single viewpoint using a single camera. The relative blur between the images was used to determine the in-focus axial point of each pixel and hence its depth.
The depth estimation algorithm of Watanabe and Nayar was employed to recover the depth estimates, but the broadband filters, referred to as Rational filters, were designed using a new procedure: the Two Step Polynomial Approach. The filters designed with the new model were largely insensitive to object texture and were shown to model the blur more precisely than the previous method. Experiments with real planar images demonstrated a maximum RMS depth error of 1.18% for the proposed filters, compared to 1.54% for the previous design.
The software required five 2D convolutions to be processed in parallel, and these were implemented efficiently on an FPGA using a two-channel, five-stage pipelined architecture; however, the precision of the filter coefficients and the variables had to be limited within the processor. The number of multipliers required for each convolution was reduced from 49 to 10 (a 79.5% reduction) using a Triangular design procedure. Experimental results suggested that the pipelined processor provided depth estimates comparable in accuracy to the full-precision Matlab output, and generated depth maps of 400 x 400 pixels in 13.06 ms, faster than video rate.
The defocused images (near- and far-focused) were optically registered for magnification using telecentric optics. A frequency-domain approach based on phase correlation was employed to measure the radial shifts due to magnification and to optimally position the external aperture. The telecentric optics ensured correct pixel-to-pixel registration between the defocused images and provided more accurate depth estimates. (PhD thesis, University of Warwick, via EThOS, the Electronic Theses Online Service.)
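Phase correlation, the registration tool mentioned above, reduces shift estimation to finding the peak of an inverse-transformed, magnitude-normalised cross-power spectrum. A toy 1-D sketch (pure-Python DFT for self-containment; the thesis works with 2-D radial shifts, and all names here are illustrative):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlate(a, b):
    # normalised cross-power spectrum; its inverse transform peaks at the shift
    A, B = dft(a), dft(b)
    R = [Ak * Bk.conjugate() / (abs(Ak * Bk.conjugate()) or 1.0)
         for Ak, Bk in zip(A, B)]
    r = [abs(v) for v in idft(R)]
    return r.index(max(r))

sig = [0, 1, 3, 1, 0, 0, 0, 0]
shifted = sig[-2:] + sig[:-2]   # circularly shift right by 2 samples
```

Because only the phase of the spectrum is kept, the correlation surface is an impulse at the displacement regardless of image contrast, which is what makes the method well suited to measuring small magnification-induced shifts.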
Surface Topography and Texture Restoration from Sectional Optical Imaging by Focus Analysis
This chapter focuses on restoring both the topographical and the textural information of an observed surface from a registered image sequence acquired by optical sectioning, through the complementary concepts of Shape-From-Focus (SFF) and Extended Depth-of-Field (EDF). In particular, the essential step shared by these restoration processes, the focus measurement, is examined. After a brief specialized review, we introduce novel focus measurements that push past the state of the art in sensitivity and robustness, in order to cope with various frequently encountered acquisition issues.
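The SFF half of the pipeline can be stated in one line of reasoning: for each pixel, the recovered depth is the focal position that maximises a local focus measure across the stack. A toy 1-D sketch using a squared-gradient focus measure (a simple stand-in for the chapter's evolved measures; all names and data are illustrative):

```python
def sq_gradient(row, x):
    # central-difference squared gradient as a local focus measure
    return (row[x + 1] - row[x - 1]) ** 2

def depth_profile(stack):
    # stack[z] is one image row acquired at focal position z;
    # return, per pixel, the z that maximises the focus measure
    w = len(stack[0])
    return [max(range(len(stack)), key=lambda z: sq_gradient(stack[z], x))
            for x in range(1, w - 1)]

stack = [
    [0, 0, 1, 1, 0, 0],   # z=0: heavily defocused
    [0, 1, 9, 9, 1, 0],   # z=1: in focus -> strongest local gradients
    [0, 2, 5, 5, 2, 0],   # z=2: partially defocused
]
```

The EDF image falls out of the same computation: instead of recording the winning `z`, copy the winning pixel value. The chapter's contribution lives entirely in replacing `sq_gradient` with measures that stay reliable under noise and difficult acquisition conditions.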
Non-convex optimization for 3D point source localization using a rotating point spread function
We consider the high-resolution imaging problem of 3D point source image recovery from 2D data using a method based on point spread function (PSF) engineering. The method involves a new technique, recently proposed by S. Prasad, based on the use of a rotating PSF with a single lobe to obtain depth from defocus: the amount of rotation of the PSF encodes the depth position of the point source. Applications include high-resolution single molecule localization microscopy as well as the problem addressed in this paper, localization of space debris using a space-based telescope. The localization problem is discretized on a cubical lattice, where the coordinates of nonzero entries represent the 3D locations and the values of these entries the fluxes of the point sources. Finding the locations and fluxes of the point sources is a large-scale sparse 3D inverse problem. A new nonconvex regularization method with a data-fitting term based on the Kullback-Leibler (KL) divergence is proposed for 3D localization under the Poisson noise model. In addition, we propose a new scheme for estimating the source fluxes from the KL data-fitting term. Numerical experiments illustrate the efficiency and stability of the algorithms, which are trained on a random subset of image data before being applied to other images. Our 3D localization algorithms can be readily applied to other kinds of depth-encoding PSFs as well.
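The KL data-fitting term, and the kind of flux estimate it yields, can be sketched in a few lines. For a single source with known, normalised PSF samples `a` and counts `y`, minimising D(y || f*a) over the scalar flux f has the closed form f = sum(y)/sum(a) (set the derivative to zero). This is a toy sketch of the general idea, not the paper's full estimation scheme; all names and values are illustrative.

```python
import math

def kl_div(y, mu, eps=1e-12):
    # Poisson data-fitting term: D(y || mu) = sum_i mu_i - y_i + y_i*log(y_i/mu_i)
    return sum(m - v + (v * math.log(v / (m + eps)) if v > 0 else 0.0)
               for v, m in zip(y, mu))

def flux_estimate(y, a):
    # minimiser of D(y || f*a) over the scalar flux f:
    # d/df sum_i (f*a_i - y_i + y_i*log(y_i/(f*a_i))) = sum(a) - sum(y)/f = 0
    return sum(y) / sum(a)

a = [0.2, 0.5, 0.3]    # normalised PSF samples (illustrative)
y = [2.0, 5.0, 3.0]    # noiseless counts for a true flux of 10
f_hat = flux_estimate(y, a)
```

For noiseless data the estimate is exact and the KL term vanishes at the minimiser; with Poisson noise the same stationarity condition gives the maximum-likelihood flux.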
Colour depth-from-defocus incorporating experimental point spread function measurements
Depth-From-Defocus (DFD) is a monocular computer vision technique for creating depth maps from two images taken on the same optical axis with different intrinsic camera parameters. A pre-processing stage that optimally converts colour images to monochrome using a linear combination of the colour planes has been shown to improve the accuracy of the depth map. It was found that the first component formed using Principal Component Analysis (PCA), and a technique that maximises the signal-to-noise ratio (SNR), performed better than an equal weighting of the colour planes under an additive noise model. When the noise is non-isotropic, maximising the SNR improved the Mean Square Error (MSE) of the depth map by a factor of 7.8 compared to an equal weighting, and by a factor of 1.9 compared to PCA. The fractal dimension (FD) of a monochrome image gives a measure of its roughness, and an algorithm was devised to maximise the FD through colour mixing. The formulation using a fractional Brownian motion (fBm) model reduced the SNR and thus produced depth maps that were less accurate than those from PCA or an equal weighting. An active DFD algorithm that reduces the image overlap problem, called Localisation through Colour Mixing (LCM), has been developed; it uses a projected colour pattern. Simulation results showed that LCM produces an MSE 9.4 times lower than equal weighting and 2.2 times lower than PCA.
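The PCA pre-processing step amounts to projecting each RGB pixel onto the first principal component of the colour distribution. A minimal, illustrative sketch (not the thesis' implementation) that finds that component by power iteration on the 3x3 colour covariance matrix:

```python
def first_pc(pixels, iters=50):
    # first principal component of the RGB distribution via power iteration
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    cov = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in pixels) / n
            for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def to_mono(pixels):
    # linear combination of the colour planes given by the first PC
    v = first_pc(pixels)
    return [sum(p[c] * v[c] for c in range(3)) for p in pixels]

# all the variance lives in the red channel here, so the first PC picks it out
pixels = [(0.0, 1.0, 1.0), (1.0, 1.0, 1.0), (2.0, 1.0, 1.0), (3.0, 1.0, 1.0)]
v = first_pc(pixels)
```

The SNR-maximising variant discussed in the abstract differs in its objective (it whitens by the noise covariance rather than maximising total variance), which is why it wins when the noise is non-isotropic.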
The Point Spread Function (PSF) of a camera system models how a point source of light is imaged. For depth maps to be created accurately using DFD, a high-precision PSF must be known. Improvements to a sub-sampled, knife-edge based technique are presented that account for non-uniform illumination of the light box, reducing the MSE by 25%. The Generalised Gaussian is presented as a model of the PSF and shown to be up to 16 times better than the conventional Gaussian and pillbox models.
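One way to see why the Generalised Gaussian can beat both conventional models: it interpolates between them, recovering the Gaussian at shape parameter beta = 2 and approaching the pillbox as beta grows. A sketch of the unnormalised radial profile (parameter names are illustrative, not the thesis' notation):

```python
import math

def gen_gaussian(r, alpha, beta):
    # unnormalised Generalised Gaussian profile: exp(-(|r|/alpha)^beta)
    # beta = 2 -> Gaussian; beta -> infinity -> pillbox of radius alpha
    return math.exp(-(abs(r) / alpha) ** beta)

gauss_like = gen_gaussian(1.0, 1.0, 2.0)   # = exp(-1), a Gaussian profile
inside = gen_gaussian(0.5, 1.0, 20.0)      # ~1 inside the pillbox radius
outside = gen_gaussian(1.5, 1.0, 20.0)     # ~0 outside it
```

Fitting beta per camera lets the model track real optics that sit somewhere between diffraction-dominated (Gaussian-like) and geometric-defocus (pillbox-like) blur.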
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. In spite of achieving a
certain level of development, image deblurring, especially the blind case, is
limited in its success by complex application conditions which make the blur
kernel hard to obtain and be spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.Comment: 53 pages, 17 figure
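As a concrete instance of the Bayesian-inference family of non-blind methods, here is a toy 1-D Richardson-Lucy sketch (known kernel, circular boundaries; sizes, data and names are illustrative, not taken from the review):

```python
def conv(x, k):
    # circular 1-D convolution with a centred kernel
    r, n = len(k) // 2, len(x)
    return [sum(x[(i + j - r) % n] * k[j] for j in range(len(k)))
            for i in range(n)]

def richardson_lucy(y, k, iters=200):
    # multiplicative updates: x <- x * (k_flipped (*) (y / (k (*) x)))
    k_flipped = k[::-1]
    x = [1.0] * len(y)
    for _ in range(iters):
        est = conv(x, k)
        ratio = [yi / max(ei, 1e-12) for yi, ei in zip(y, est)]
        x = [xi * ci for xi, ci in zip(x, conv(ratio, k_flipped))]
    return x

sharp = [0, 0, 4, 0, 0, 0, 2, 0]
kernel = [0.25, 0.5, 0.25]
blurry = conv(sharp, kernel)
restored = richardson_lucy(blurry, kernel)
```

The update is the maximum-likelihood iteration for Poisson noise; it keeps the estimate non-negative and conserves total flux, but it assumes the kernel is known, which is exactly what the blind methods surveyed above must estimate as well.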