
    Learning Wavefront Coding for Extended Depth of Field Imaging

    Depth of field is an important property of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem that has been extensively addressed in the literature. We propose a computational imaging approach to EDoF in which we employ wavefront coding via a diffractive optical element (DOE) and achieve deblurring through a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring through standard gradient descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental to the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art, demonstrating results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
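
    As a rough illustration of the end-to-end idea described above, the sketch below jointly optimizes a learnable DOE phase profile and a small deblurring CNN through a simplified Fourier-optics PSF model. All names and parameter choices (grid size, defocus range, DeblurCNN, random training images) are illustrative assumptions, not the authors' implementation.

        import torch
        import torch.nn as nn

        N = 64                                    # pupil / PSF grid size
        yy, xx = torch.meshgrid(torch.linspace(-1, 1, N),
                                torch.linspace(-1, 1, N), indexing="ij")
        rho2 = xx**2 + yy**2
        aperture = (rho2 <= 1.0).float()          # circular pupil

        doe_phase = nn.Parameter(torch.zeros(N, N))   # learnable DOE phase (radians)

        def psf(defocus):
            # incoherent PSF for a given amount of defocus (in waves)
            phase = doe_phase + 2 * torch.pi * defocus * rho2
            pupil = aperture * torch.exp(1j * phase)
            p = torch.fft.fftshift(torch.fft.fft2(pupil)).abs() ** 2
            return p / p.sum()

        class DeblurCNN(nn.Module):               # small stand-in deblurring network
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
            def forward(self, x):
                return self.net(x)

        cnn = DeblurCNN()
        opt = torch.optim.Adam([doe_phase] + list(cnn.parameters()), lr=1e-3)

        for step in range(100):
            sharp = torch.rand(1, 1, 128, 128)         # stand-in for a training image
            defocus = torch.empty(1).uniform_(-2, 2)   # sample a depth in the EDoF range
            kernel = psf(defocus).view(1, 1, N, N)
            blurred = nn.functional.conv2d(sharp, kernel, padding=N // 2)[..., :128, :128]
            loss = nn.functional.mse_loss(cnn(blurred), sharp)   # joint optics + CNN loss
            opt.zero_grad(); loss.backward(); opt.step()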

    Shape from periodic texture using the eigenvectors of local affine distortion

    This paper shows how the local slant and tilt angles of regularly textured curved surfaces can be estimated directly, without the need for iterative numerical optimization. We work in the frequency domain and measure texture distortion using the affine distortion of the pattern of spectral peaks. The key theoretical contribution is to show that the directions of the eigenvectors of the affine distortion matrices can be used to estimate the local slant and tilt angles of tangent planes to curved surfaces. In particular, the leading eigenvector points in the tilt direction. Although not as geometrically transparent, the direction of the second eigenvector can be used to estimate the slant direction. The required affine distortion matrices are computed using the correspondences between spectral peaks, established on the basis of their energy ordering. We apply the method to a variety of real-world and synthetic imagery.
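
    The core linear-algebra step can be sketched as follows: estimate the 2x2 affine distortion between corresponding spectral peaks by least squares, then read the tilt direction off the leading eigenvector, as the paper describes. The peak coordinates below are made-up placeholders, not data from the paper.

        import numpy as np

        # spectral-peak positions of the reference (fronto-parallel) texture and of a
        # locally distorted patch, matched by energy ordering; values are placeholders
        ref_peaks = np.array([[0.20, 0.00], [0.00, 0.20], [0.20, 0.20]])
        obs_peaks = np.array([[0.15, 0.02], [0.01, 0.18], [0.16, 0.19]])

        # least-squares fit of the 2x2 affine distortion A with obs_i ~ A @ ref_i
        X, *_ = np.linalg.lstsq(ref_peaks, obs_peaks, rcond=None)
        A = X.T

        eigvals, eigvecs = np.linalg.eig(A)
        order = np.argsort(-np.abs(eigvals))           # leading eigenvector first
        tilt_vec = np.real(eigvecs[:, order[0]])       # points in the tilt direction
        tilt_angle = np.degrees(np.arctan2(tilt_vec[1], tilt_vec[0]))
        print("estimated tilt direction (deg):", tilt_angle)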

    Coded aperture imaging

    This thesis studies the coded aperture camera, a device consisting of a conventional camera with a modified aperture mask that enables the recovery of both a depth map and an all-in-focus image from a single 2D input image. Key contributions of this work are the modeling of the statistics of natural images and the design of efficient blur identification methods in a Bayesian framework. Two cases are distinguished: 1) when the aperture can be decomposed into a small set of identical holes, and 2) when the aperture has a more general configuration. In the first case, the formulation of the problem incorporates priors on the statistical variation of the texture to avoid ambiguities in the solution. This makes it possible to bypass the recovery of the sharp image and concentrate only on estimating depth. In the second case, depth reconstruction is addressed via convolutions with a bank of linear filters. Key advantages over competing methods are the higher numerical stability and the ability to deal with large blur. The all-in-focus image can then be recovered by a deconvolution step using the estimated depth map. Furthermore, for the purpose of depth estimation alone, the proposed algorithm does not require information about the mask in use. Comparison with existing algorithms in the literature shows that the proposed methods achieve state-of-the-art performance. The solution is also extended, for the first time, to images affected by both defocus and motion blur and, finally, to video sequences with moving and deformable objects.
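
    As a minimal sketch of the final deconvolution step mentioned above, the snippet below recovers an all-in-focus image from a coded-aperture blur by Wiener deconvolution once the blur kernel implied by the estimated depth is known. The aperture mask and test image are toy placeholders and do not reproduce the thesis's filter-bank construction.

        import numpy as np
        from numpy.fft import fft2, ifft2

        def pad_to(kernel, shape):
            # zero-pad and centre a small kernel so its FFT matches the image size
            out = np.zeros(shape)
            out[:kernel.shape[0], :kernel.shape[1]] = kernel
            return np.roll(out, (-(kernel.shape[0] // 2), -(kernel.shape[1] // 2)), (0, 1))

        def wiener_deconvolve(blurred, kernel, snr=200.0):
            K = fft2(pad_to(kernel, blurred.shape))
            H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)   # Wiener filter
            return np.real(ifft2(fft2(blurred) * H))

        # toy coded aperture: a 7x7 mask with four open holes
        mask = np.zeros((7, 7))
        mask[[1, 1, 5, 5], [1, 5, 1, 5]] = 1.0
        mask /= mask.sum()

        rng = np.random.default_rng(0)
        sharp = rng.random((128, 128))
        blurred = np.real(ifft2(fft2(sharp) * fft2(pad_to(mask, sharp.shape))))

        restored = wiener_deconvolve(blurred, mask)
        print("RMSE after deconvolution:", np.sqrt(np.mean((restored - sharp) ** 2)))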

    Single image defocus estimation by modified Gaussian function

    © 2019 John Wiley & Sons, Ltd. This article presents an algorithm to estimate the defocus blur from a single image. Most existing methods estimate the defocus blur at edge locations, which involves a reblurring step. For this purpose, existing methods use the traditional Gaussian function in the reblurring phase, but the traditional Gaussian kernel is sensitive to edges and can cause a loss of edge information. Hence, spatially varying blur at edge locations is more likely to be missed. We offer repeated averaging filters as an alternative to the traditional Gaussian function, which is more effective for estimating the spatially varying defocus blur at edge locations. Using repeated averaging filters, a sparse blur map is computed. The obtained sparse map is then propagated, by integrating superpixel segmentation and transductive inference, to estimate the full defocus blur map. The adopted method of repeated averaging filters requires less computation time for defocus blur map estimation and produces better visual estimates of the final recovered defocus map. Moreover, it surpasses many previous state-of-the-art methods in terms of quantitative analysis.
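
    The substitution of repeated averaging filters for a Gaussian rests on a standard fact: convolving a box (averaging) filter with itself a few times converges to a Gaussian shape. The short demonstration below, with arbitrary kernel sizes, compares a three-fold averaging kernel with a variance-matched Gaussian.

        import numpy as np

        def repeated_average_kernel(width, repeats):
            box = np.ones(width) / width
            k = box
            for _ in range(repeats - 1):
                k = np.convolve(k, box)        # convolving boxes approaches a Gaussian
            return k

        k = repeated_average_kernel(width=5, repeats=3)
        sigma = np.sqrt(3 * (5**2 - 1) / 12.0)     # Gaussian with the matching variance
        x = np.arange(len(k)) - len(k) // 2
        g = np.exp(-x**2 / (2 * sigma**2))
        g /= g.sum()
        print("max |repeated average - Gaussian|:", np.abs(k - g).max())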

    Modeling and applications of the focus cue in conventional digital cameras

    The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A deep review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin lens model has several limitations for solving different focus-related problems in computer vision. In order to overcome these limitations, the focus profile model is introduced as an alternative to classic concepts such as the near and far limits of the depth of field. The new concepts introduced in this dissertation are exploited for solving diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
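
    For reference, the classical thin lens quantities that the thesis revisits, namely the hyperfocal distance and the near and far depth-of-field limits for a given focus distance, can be computed as in the sketch below; the parameter values are illustrative only.

        def dof_limits(f, N, c, s):
            # f: focal length, N: f-number, c: circle of confusion, s: focus distance
            # (all in metres); returns the (near, far) limits of the depth of field
            H = f ** 2 / (N * c) + f                    # hyperfocal distance
            near = s * (H - f) / (H + s - 2 * f)
            far = s * (H - f) / (H - s) if s < H else float("inf")
            return near, far

        # e.g. a 4.2 mm cellphone lens at f/2.0, 2 micron circle of confusion, focused at 1 m
        print(dof_limits(f=0.0042, N=2.0, c=0.000002, s=1.0))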

    Depth Acquisition from Digital Images

    Introduction: Depth acquisition from digital images captured with a conventional camera, by analysing focus/defocus cues that are related to depth via an optical model of the camera, is a popular approach to depth-mapping a 3D scene. The majority of methods analyse the neighbourhood of a point in an image to infer its depth, which has disadvantages. A more elegant, but more difficult, solution is to evaluate only the single pixel displaying a point in order to infer its depth. This thesis investigates whether a per-pixel method can be implemented without compromising accuracy and generality compared to window-based methods, whilst minimising the number of input images.
    Method: A geometric optical model of the camera was used to predict the relationship between focus/defocus and intensity at a pixel. Using input images with different focus settings, the relationship was used to identify the focal-plane depth (i.e. focus setting) at which a point is in best focus, from which the depth of the point can be resolved if the camera parameters are known. Two metrics were implemented: one to identify the best focus setting for a point from the discrete input set, and one to fit a model to the input data to estimate the depth of perfect focus of the point on a continuous scale.
    Results: The method gave generally accurate results for a simple synthetic test scene, with a relatively low number of input images compared to similar methods. When tested on a more complex scene, the method achieved its objectives of separating complex objects from the background by depth, and resolved a complex 3D surface at a resolution comparable to that of a similar method which used significantly more input data.
    Conclusions: The method demonstrates that it is possible to resolve depth on a per-pixel basis without compromising accuracy and generality, and using a similar amount of input data, compared to more traditional window-based methods. In practice, the presented method offers a convenient new option for depth-based image processing applications: the depth map is per-pixel, yet the process of capturing and preparing images for the method is not practically cumbersome and could easily be automated, unlike the other per-pixel methods reviewed. However, the method still suffers from the general limitations of depth acquisition using images from a conventional camera, which limits its use as a general depth acquisition solution beyond specifically depth-based image processing applications.
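
    A generic per-pixel sketch of the two metrics described above is given below: the best focus setting is picked from the discrete stack, and a local parabolic (Gaussian-peak) fit then places the depth of best focus on a continuous scale. This is a common depth-from-focus refinement, not necessarily the thesis's intensity-based model.

        import numpy as np

        def per_pixel_depth(measure, focus_positions):
            # measure: (num_settings, H, W) per-pixel focus measure for each focus setting
            # focus_positions: (num_settings,) focal-plane depths, uniformly spaced
            idx = np.argmax(measure, axis=0)              # metric 1: best discrete setting
            idx = np.clip(idx, 1, measure.shape[0] - 2)   # keep one neighbour on each side
            rows, cols = np.indices(idx.shape)
            f0 = np.log(measure[idx - 1, rows, cols] + 1e-12)
            f1 = np.log(measure[idx, rows, cols] + 1e-12)
            f2 = np.log(measure[idx + 1, rows, cols] + 1e-12)
            denom = f0 - 2 * f1 + f2                      # metric 2: parabolic peak fit
            offset = np.where(np.abs(denom) > 1e-12, 0.5 * (f0 - f2) / denom, 0.0)
            step = focus_positions[1] - focus_positions[0]
            return focus_positions[idx] + offset * step   # per-pixel depth of best focus

        # e.g. with a 10-image focal stack (hypothetical data):
        # depth_map = per_pixel_depth(measure_stack, np.linspace(0.3, 1.2, 10))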

    Image Restoration

    This book presents a sample of recent contributions by researchers from around the world in the field of image restoration. It consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). The topics cover different aspects of the theory of image restoration, and the book is also an occasion to highlight new research topics related to the emergence of original imaging devices. From these devices arise challenging problems in image reconstruction and restoration that open the way to new fundamental scientific questions, closely related to the world we interact with.

    Actions of Anticholinesterases on Visual Performance in Man and Their Antagonism by Atropine

    This work investigates the effects of the anticholinesterases physostigmine and pyridostigmine and the cholinergic antagonists atropine and homatropine on the human visual system. The antagonism between these classes of drug is also assessed. Anticholinesterases cause pupillary constriction and an increase in accommodation. As a means of simulating their effects in a controlled situation, a systematic study was performed to determine the effects of artificial pupils and defocusing lenses on visual performance, assessed by measuring contrast sensitivity for the detection of sinusoidal grating patterns. Contrast sensitivity was measured in 12 subjects over a range of spatial frequencies (0.5-38 c/deg), for pupil diameters of 2-8 mm and for defocuses of +1 to +4 D, following homatropine eyedrops. Changes in pupil diameter, without any compensation for the change in retinal illumination, had no significant effect on contrast sensitivity, except at 0.5 and 1 c/deg, where a significant reduction occurred with the 2 mm pupil. This suggests that the expected improvement in optical quality associated with smaller pupil diameters had been annulled by the accompanying reduction in retinal illumination. On the other hand, defocus caused an appreciable reduction in contrast sensitivity at spatial frequencies higher than the peak of the contrast sensitivity function (3 c/deg) and a smaller reduction below the peak. With increasing defocus, a downwards parallel shift of the contrast sensitivity function above the peak was observed. Each dioptre of defocus reduced contrast sensitivity by about 50% at spatial frequencies higher than the peak and by 19% at spatial frequencies lower than the peak in the homatropinised eye. The decrements were slightly smaller in the natural eye.
    An oral dose of 60 mg pyridostigmine bromide, which causes at least a 20% inhibition of blood cholinesterase, produced a small but significant increase of 7% in contrast sensitivity to stationary oscilloscope-generated grating patterns over 3-38 c/deg for a group of 13 subjects. This was attributed to an increase in optical quality due to the small reduction in pupil diameter. Contrast sensitivity to laser interference fringes observed in Maxwellian view was unchanged after pyridostigmine. It is concluded that pyridostigmine may be used as a pre-treatment against organophosphorus anticholinesterases without adverse effects on stationary visual function.
    Instillation of 0.25% physostigmine sulphate eyedrops in 12 subjects caused a sustained miosis and a transient increase in near-point accommodation and in the amplitude of involuntary accommodation. This last effect was maximal at 30 min and subsided by 90 min, though its amplitude varied greatly between subjects, from +0.5 D to +10 D. Comparisons between two families of three siblings suggested the involvement of a genetic trait in the amplitude of the response of the ciliary body to physostigmine. Contrast sensitivity to externally viewed oscilloscope grating patterns was transiently reduced after physostigmine and correlated with the increase in amplitude of accommodation. Physostigmine had a transient deleterious effect on contrast sensitivity to laser interference fringes, particularly at higher spatial frequencies, which was not affected by defocus of the image. Physostigmine also caused a prolonged reduction in contrast sensitivity to low-spatial-frequency grating patterns. Since the control eye showed no miosis, systemic absorption of physostigmine seems improbable. This suggests a direct effect of trans-corneal absorption of physostigmine on the retinal neurones.
    The effects of a single intramuscular injection of 2 mg atropine sulphate on visual performance were studied in 13 subjects. The well-known actions of atropine on heart rate, secretion of saliva, dilatation of the pupils and reduction in the amplitude of the accommodative range were observed. However, visual acuity, stereoacuity, red-green colour balance and reaction time to a visual stimulus were unaffected by atropine, although extra-ocular muscle balance was transiently changed. There was no significant change in contrast sensitivity to stationary sinusoidal gratings of spatial frequencies 1-30 c/deg for oscilloscope-generated patterns and laser interference fringes. However, contrast sensitivity to low-spatial-frequency (1-5 c/deg) grating patterns phase-reversed at 5.5 Hz showed a sustained reduction over six hours post-injection. Thus, it is concluded that atropine has an adverse effect on movement detection but not on stationary visual function. (Abstract shortened by ProQuest.)

    Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    Navigation through an indoor environment is a formidable challenge for an autonomous micro air vehicle. One solution is a vision-aided inertial navigation system that uses depth-from-defocus to determine heading and depth to features in the scene. Depth-from-defocus uses the focal blur pattern to estimate depth. As depth increases, the observable change in the focal blur is generally reduced; consequently, as the depth of a feature to be measured increases, the measurement performance decreases. The Fresnel zone plate, used as an aperture, introduces multiple focal planes. Interference between these focal planes produces changes in the blur pattern that extend the depth at which changes in the focal blur are observable. This improved depth measurement performance results in improved performance of the vision-aided navigation system as well. This research provides an in-depth study of the Fresnel zone plate used as a coded aperture and of the performance improvement obtained by augmenting a single-camera vision-aided inertial navigation system.
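
    For context, a binary Fresnel zone plate has zone radii r_n ~ sqrt(n * wavelength * focal_length), and a mask of this kind can be generated as in the sketch below. The wavelength, design focal length and aperture radius are illustrative values, not those used in this research.

        import numpy as np

        def fresnel_zone_plate(grid_size, aperture_radius, wavelength, focal_length):
            # binary (0/1) zone-plate amplitude mask on a grid_size x grid_size grid
            coords = np.linspace(-aperture_radius, aperture_radius, grid_size)
            xx, yy = np.meshgrid(coords, coords)
            r2 = xx ** 2 + yy ** 2
            zone = np.floor(r2 / (wavelength * focal_length)).astype(int)
            mask = (zone % 2 == 0).astype(float)        # open even zones, block odd zones
            mask[r2 > aperture_radius ** 2] = 0.0       # clip to the circular aperture
            return mask

        # e.g. a 5 mm radius aperture, 550 nm light, 0.5 m design focal length
        zp = fresnel_zone_plate(grid_size=512, aperture_radius=5e-3,
                                wavelength=550e-9, focal_length=0.5)
        print("open-area fraction:", zp.mean())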