534 research outputs found

    Rational-operator-based depth-from-defocus approach to scene reconstruction

    This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, enabling fast DfD computation that is independent of scene texture. Two variants of the approach are considered: one using Gaussian rational operators (ROs), which are based on the Gaussian point spread function (PSF), and a second based on the generalized Gaussian PSF. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results on real scenes show that both variants outperform existing RO-based methods.
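    The geometry underlying depth from defocus can be sketched with the thin-lens model: the blur-circle radius of a point depends on its depth, and a normalized ratio of the blurs in two differently focused captures is monotonic in depth between the two focal planes, so it can be inverted with a lookup table. A minimal sketch of that relation (all constants are illustrative, not taken from the paper):

```python
import numpy as np

def blur_radius(depth, f=0.025, aperture=0.01, focus_dist=0.8):
    """Thin-lens blur-circle radius for a point at `depth` (metres).
    Constants are illustrative, not the paper's optical parameters."""
    v_focus = 1.0 / (1.0 / f - 1.0 / focus_dist)  # sensor distance for the focused plane
    v_point = 1.0 / (1.0 / f - 1.0 / depth)       # image distance of the actual point
    return aperture * abs(v_point - v_focus) / v_point

# Normalised blur ratio from a near-focused and a far-focused capture:
# monotonic in depth between the two focal planes, hence invertible.
depths = np.linspace(0.85, 1.45, 100)
r_near = np.array([blur_radius(d, focus_dist=0.8) for d in depths])
r_far = np.array([blur_radius(d, focus_dist=1.5) for d in depths])
ratio = (r_near - r_far) / (r_near + r_far)
```

    The RO approach itself estimates this quantity with texture-independent operators rather than by measuring the blur radii directly; the sketch only shows why the ratio encodes depth.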

    Accurate depth from defocus estimation with video-rate implementation

    The science of measuring depth from images at video rate using defocus has been investigated. The method requires two differently focused images acquired from a single viewpoint using a single camera. The relative blur between the images is used to determine the in-focus axial point of each pixel and hence its depth. The depth estimation algorithm of Watanabe and Nayar was employed to recover the depth estimates, but the broadband filters, referred to as rational filters, were designed using a new procedure: the Two Step Polynomial Approach. The filters designed by the new model are largely insensitive to object texture and were shown to model the blur more precisely than the previous method. Experiments with real planar images demonstrated a maximum RMS depth error of 1.18% for the proposed filters, compared to 1.54% for the previous design. The software implementation requires five 2D convolutions to be processed in parallel; these were implemented on an FPGA using a two-channel, five-stage pipelined architecture, although the precision of the filter coefficients and variables had to be limited within the processor. The number of multipliers required for each convolution was reduced from 49 to 10 (a 79.5% reduction) using a triangular design procedure. Experimental results suggest that the pipelined processor provides depth estimates comparable in accuracy to the full-precision Matlab output, and generates depth maps of 400 x 400 pixels in 13.06 ms, which is faster than video rate. The defocused images (near- and far-focused) were optically registered for magnification using telecentric optics. A frequency-domain approach based on phase correlation was employed to measure the radial shifts due to magnification and to optimally position the external aperture. The telecentric optics ensured correct pixel-to-pixel registration between the defocused images and provided more accurate depth estimates.
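    The 49-to-10 multiplier reduction follows from the 8-fold symmetry of a 7x7 kernel: only the coefficients in one triangular octant are distinct, so pixels sharing a coefficient can be pre-summed and each output needs just ten multiplies. A sketch of the counting argument (the coefficients here are hypothetical placeholders, not the actual rational-filter values):

```python
import numpy as np

def symmetric_kernel(tri):
    """Expand 10 unique coefficients (one triangular octant) into a full
    7x7 kernel with 8-fold symmetry, as exploited by the triangular design."""
    k = np.zeros((7, 7))
    idx = 0
    for i in range(4):            # quadrant rows, centre tap at (3, 3)
        for j in range(i, 4):     # j >= i: triangular region only
            v = tri[idx]; idx += 1
            for a in (3 - i, 3 + i):
                for b in (3 - j, 3 + j):
                    k[a, b] = v
                    k[b, a] = v   # mirror across the diagonal
    return k

tri = np.arange(1.0, 11.0)        # 10 placeholder coefficients
k = symmetric_kernel(tri)
```

    In the pipelined processor, the up-to-eight taps that share each coefficient are summed first, so a 49-tap convolution costs 10 multiplications per output pixel.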

    Jacobi–Fourier phase mask for wavefront coding

    In this work we propose Jacobi–Fourier phase masks for wavefront coding-based imaging systems. The optical properties of the phase mask are studied in detail and numerical simulations are shown. Pixel size and noise are taken into account for the deconvolution of images. Numerical simulations indicate that the overall performance is better than that of the well-known and commonly used trefoil phase mask. This work was supported by the Spanish Ministry of Economía y Competitividad FIS2016-77319-C2-1-R, and FEDER, Xunta de Galicia/FEDER ED431E 2018/08. E. González Amador thanks the Consejo Nacional de Ciencia y Tecnología (CONACyT), CVU no. 714742. We also thank the PADES program for its support, Award no. 2018-13-011-047S.
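    Restoration of wavefront-coded images is typically performed with a Wiener-type inverse filter. A minimal frequency-domain sketch (the Gaussian PSF and the `nsr` noise-to-signal constant are placeholders, not the Jacobi–Fourier design):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter; `psf` is centred and the same size
    as the image, `nsr` is an assumed noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Demo: blur a test target with a Gaussian PSF, then restore it.
n = 32
img = np.zeros((n, n)); img[12:20, 12:20] = 1.0
x = np.arange(n) - n // 2
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

    With a wavefront-coded system the PSF is nearly depth-invariant, which is what allows a single such filter to restore the whole extended depth of field.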

    Equivalence of two optical quality metrics to predict the visual acuity of multifocal pseudophakic patients

    This article studies the relationship between two metrics, the area under the modulation transfer function (MTFa) and the energy efficiency (EE), and their ability to predict the visual quality of patients implanted with multifocal intraocular lenses (IOLs). The optical quality of the IOLs is assessed in vitro using the two metrics. We measured them for three different multifocal IOLs with parabolic phase profiles using image formation, through-focus (TF) scanning, three (R, G, B) wavelengths, and two pupils, and analyzed the correlation between MTFa and EE. In parallel, clinical defocus curves of visual acuity (VA) were measured and averaged from sets of patients implanted with the same IOLs. An excellent linear correlation was found between the MTFa and EE for the considered IOLs, wavelengths, and pupils (R² > 0.9). We computed the polychromatic TF-MTFa and TF-EE and derived mathematical relationships between each metric and clinical average VA. MTFa and EE proved to be equivalent metrics for characterizing the optical quality of the studied multifocal IOLs, and also in terms of clinical VA predictability.
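    The MTFa metric summarizes an MTF curve as the area under it over the relevant frequency band. A minimal numerical stand-in computed from a sampled PSF (the cutoff frequency and the Gaussian test PSFs are illustrative, not the bench setup of the study):

```python
import numpy as np

def mtf_area(psf, f_max=0.5):
    """Mean of the normalised MTF over frequencies below `f_max`
    (cycles/pixel): a discrete stand-in for the area under the MTF."""
    mtf = np.abs(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf /= mtf.max()
    fy = np.fft.fftfreq(psf.shape[0])
    fx = np.fft.fftfreq(psf.shape[1])
    fr = np.hypot(fy[:, None], fx[None, :])
    return mtf[fr <= f_max].mean()

def gaussian_psf(n, sigma):
    """Illustrative centred Gaussian PSF, normalised to unit energy."""
    x = np.arange(n) - n // 2
    g = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()
```

    A sharper PSF yields a larger MTFa, which is why the scalar tracks through-focus optical quality in the study.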

    Coded aperture imaging

    This thesis studies the coded aperture camera, a device consisting of a conventional camera with a modified aperture mask, which enables the recovery of both a depth map and an all-in-focus image from a single 2D input image. Key contributions of this work are the modeling of the statistics of natural images and the design of efficient blur identification methods in a Bayesian framework. Two cases are distinguished: 1) when the aperture can be decomposed into a small set of identical holes, and 2) when the aperture has a more general configuration. In the first case, the formulation of the problem incorporates priors about the statistical variation of the texture to avoid ambiguities in the solution. This makes it possible to bypass the recovery of the sharp image and concentrate only on estimating depth. In the second case, the depth reconstruction is addressed via convolutions with a bank of linear filters. Key advantages over competing methods are higher numerical stability and the ability to deal with large blur. The all-in-focus image can then be recovered with a deconvolution step using the estimated depth map. Furthermore, for the purpose of depth estimation alone, the proposed algorithm does not require information about the mask in use. Comparison with existing algorithms in the literature shows that the proposed methods achieve state-of-the-art performance. The solution is also extended, for the first time, to images affected by both defocus and motion blur and, finally, to video sequences with moving and deformable objects.
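    For the first case (an aperture decomposable into identical holes), the intuition is that a two-hole mask makes the defocused image the scene overlaid with a shifted copy of itself; the shift grows with defocus and hence encodes depth, and it shows up as an off-centre peak in the autocorrelation. A hedged 1-D sketch of that idea only, not the thesis's Bayesian estimator:

```python
import numpy as np

def estimate_shift(img, max_shift=10):
    """Locate the off-centre autocorrelation peak produced by a two-hole
    aperture; the lag of the peak is the depth-dependent copy shift."""
    x = img - img.mean()
    ac = np.correlate(x, x, mode='full')
    centre = len(img) - 1
    side = ac[centre + 1 : centre + max_shift + 1]  # positive lags only
    return 1 + int(np.argmax(side))

# Synthetic observation: a random texture plus a copy shifted by 6 pixels.
rng = np.random.default_rng(0)
scene = rng.standard_normal(256)
shift = 6
obs = scene + np.roll(scene, shift)
```

    The thesis's prior-based formulation is what removes the ambiguities this naive correlation peak suffers from on structured textures.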

    Power-Balanced Hybrid Optics Boosted Design for Achromatic Extended-Depth-of-Field Imaging via Optimized Mixed OTF

    The power-balanced hybrid optical imaging system, introduced in this paper, is a special design of diffractive computational camera in which image formation is performed by a refractive lens and a multilevel phase mask (MPM). This system provides a long focal depth with low chromatic aberrations thanks to the MPM, and high light-energy concentration thanks to the refractive lens. We introduce the concept of an optical power balance between the lens and the MPM, which controls the contribution of each element to the modulation of the incoming light. Additional unique features of our MPM design are the quantization of the MPM's shape over the number of levels and the Fresnel order (thickness) using a smoothing function. To optimize the optical power balance as well as the MPM, we build a fully differentiable image formation model for joint optimization of the optical and imaging parameters of the proposed camera using neural network techniques. Additionally, we optimize a single Wiener-like optical transfer function (OTF), invariant to depth, to reconstruct a sharp image. We numerically and experimentally compare the designed system with its counterparts, lensless and lens-only optical systems, over the visible wavelength interval 400-700 nm and a depth-of-field range of 0.5 m to infinity for the numerical comparison and 0.5-2 m for the experimental one. The results demonstrate that the proposed system equipped with the optimal OTF outperforms its counterparts (even when they are used with an optimized OTF) in terms of reconstruction quality at off-focus distances. The simulation results also reveal that optimizing the optical power balance, the Fresnel order, and the number of levels is essential for system performance, attaining an improvement of up to 5 dB in PSNR with the optimized OTF compared with the counterpart lensless setup.
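    The reconstruction-quality comparisons above are stated in PSNR; for reference, the metric is:

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images scaled to `peak`."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

    A 5 dB PSNR gain corresponds to roughly a 3.2x reduction in mean squared error.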

    Synchronous nanoscale topographic and chemical mapping by differential-confocal controlled Raman microscopy

    Confocal Raman microscopy is currently used for label-free optical sensing and imaging within the biological, engineering, and physical sciences, as well as in industry. However, these methods currently have limitations, including low spatial resolution and poor focus stability, that restrict the breadth of new applications. This paper introduces differential-confocal controlled Raman microscopy, a technique that fuses differential confocal microscopy and Raman spectroscopy, enabling the point-to-point collection of three-dimensional nanoscale topographic information with the simultaneous reconstruction of corresponding chemical information. The microscope collects the scattered Raman light together with the Rayleigh light, both as Rayleigh-scattered and reflected light (these are normally filtered out in conventional confocal Raman systems). Inherent in the design of the instrument is a significant improvement in the axial focusing resolution of topographical features in the image (to ∼1 nm), which, when coupled with super-resolution image restoration, gives a lateral resolution of 220 nm. By using differential confocal imaging to control the Raman imaging, the system significantly enhances focusing and measurement accuracy, precision, and stability (with an anti-drift capability), mitigating both thermal and vibrational artefacts. We also demonstrate an improved scan speed, arising as a consequence of the non-axial scanning mode.
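    The nanometre axial sensitivity of differential confocal detection comes from subtracting two axial responses sampled slightly before and after focus: the difference signal crosses zero exactly at focus and is nearly linear around it, giving a steep, drift-robust height readout. An idealised sketch (the Gaussian response and the offset value are illustrative, not the instrument's calibration):

```python
import numpy as np

def axial_response(z, fwhm=1.0):
    """Idealised confocal axial intensity response (Gaussian stand-in)."""
    s = fwhm / 2.355
    return np.exp(-z ** 2 / (2 * s ** 2))

def differential_signal(z, offset=0.3):
    """Difference of responses measured slightly before and after focus;
    zero at focus and approximately linear in z nearby."""
    return axial_response(z - offset) - axial_response(z + offset)
```

    Because the slope of this curve at the zero crossing is steep, small height changes of the sample produce large signal changes, which is the basis of the focus tracking described above.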

    High-quality 3D shape measurement with binarized dual phase-shifting method

    3D technology is commonplace in today's world and is used in many different aspects of life. Researchers have been keen on 3D shape measurement and 3D reconstruction techniques in past decades, inspired by applications ranging from manufacturing and medicine to entertainment. The techniques can be broadly divided into contact and non-contact techniques. Contact techniques such as the coordinate measuring machine (CMM) date back to the 1950s and have been used extensively in industry since then. The CMM became predominant in industrial inspection owing to its high accuracy, on the order of micrometers; since quality control is an important part of modern industry, the technology enjoys great popularity. However, the main disadvantage of this method is its slow speed, due to its requirement of point-by-point touch. Also, since this is a contact process, it might deform a soft object while performing measurements. Such limitations led researchers to explore non-contact (optical metrology) measurement technologies. A variety of optical techniques have been developed to date; some of the well-known technologies include laser scanners, stereo vision, and structured light systems. The main limitation of laser scanners is their limited speed, due to the point-by-point or line-by-line scanning process. Stereo vision uses two cameras which take pictures of the object from two different angles; epipolar geometry is then used to determine the 3D coordinates of real-world points. This technology imitates human vision, but it has limitations too, such as the difficulty of correspondence detection for uniform or periodic textures. Hence structured light systems were introduced, which address the aforementioned limitations. Various techniques have been developed, including 2D pseudo-random codification, binary codification, N-ary codification, and digital fringe projection (DFP).
    The limitation of the 2D pseudo-random codification technique is its inability to achieve high spatial resolution, since any uniquely generated and projected feature requires a span of several projector pixels. Binary codification techniques reduce the requirement of 2D features to 1D ones. However, since there are only two intensities, it is difficult to differentiate between the individual pixels within each black or white stripe. The other disadvantage is that n patterns are required to encode 2^n pixels, meaning that measurement speed is severely affected if a scene is to be coded at high resolution. In contrast, DFP uses continuous sinusoidal patterns. The use of continuous patterns addresses the main disadvantage of binary codification (the inability to differentiate between pixels is resolved by using sinusoidal patterns), so the spatial resolution is increased up to camera-pixel level. On the other hand, since the DFP technique uses 8-bit sinusoidal patterns, the measurement speed is limited to the maximum refresh rate of 8-bit images for many video projectors (e.g., 120 Hz). This makes it inapplicable to measurements of highly dynamic scenes. To overcome this speed limitation, the binary defocusing technique was proposed, which uses 1-bit patterns to produce a sinusoidal profile by projector defocusing. Although this technique has significantly boosted the measurement speed up to kHz level, if the patterns are not properly defocused (nearly focused or overly defocused), increased phase noise or harmonic errors will deteriorate the reconstructed surface quality. In this thesis research, two techniques are proposed to overcome the limitations of both the DFP and binary defocusing techniques: the binarized dual phase-shifting (BDPS) technique and the Hilbert binarized dual phase-shifting (HBDPS) technique. Both techniques achieve high-quality 3D shape measurements even when the projector is not sufficiently defocused.
    The harmonic error was reduced by 47% by the BDPS method and by 74% by the HBDPS method. Moreover, both methods use binary patterns, which preserve the speed advantage of the binary technology; hence they are potentially applicable to simultaneous high-speed and high-accuracy 3D shape measurement.
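    The sinusoidal patterns in DFP systems are typically decoded with a phase-shifting formula; for three fringe images with 120-degree phase steps the wrapped phase is recovered as below. This is the generic three-step formula, not the BDPS/HBDPS pipeline itself:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with -120/0/+120 degree shifts."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Demo: synthesise the three shifted fringe signals for a known phase map.
phi = np.linspace(0.1, 3.0, 50)          # ground-truth phase (within (-pi, pi])
a, b, d = 0.5, 0.4, 2.0 * np.pi / 3.0    # bias, modulation, phase step
i1 = a + b * np.cos(phi - d)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + d)
recovered = three_step_phase(i1, i2, i3)
```

    The harmonic errors discussed above arise when the projected patterns deviate from ideal sinusoids (e.g. insufficiently defocused binary patterns), which distorts the intensities fed into exactly this formula.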