
    Superresolution imaging: A survey of current techniques

    Cristóbal, G., Gil, E., Šroubek, F., Flusser, J., Miravet, C., Rodríguez, F. B., “Superresolution imaging: A survey of current techniques”, Proceedings of SPIE - The International Society for Optical Engineering, 7074, 2008. Copyright 2008 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

    Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited sensor size) and instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy, and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images. Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution (SR). The stability of these methods depends on having more than one image of the same scene. Differences between the images are necessary to provide new information, but they can be almost imperceptible. State-of-the-art SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between images, but they lack any apparatus for estimating the blurs. In this paper, after a review of current SR techniques, we describe two SR methods recently developed by the authors. First, we introduce a variational method that minimizes a regularized energy function with respect to the high-resolution image and the blurs. This establishes a unified way to estimate the blurs and the high-resolution image simultaneously. By estimating the blurs we automatically estimate the shifts with subpixel accuracy, which is essential for good SR performance.
    Second, an innovative learning-based algorithm using a neural architecture for SR is described. Comparative experiments on real data illustrate the robustness and utility of both methods.

    This research has been partially supported by the following grants: TEC2007-67025/TCM, TEC2006-28009-E, BFI-2003-07276, and TIN-2004-04363-C03-03 from the Spanish Ministry of Science and Innovation, and by PROFIT projects FIT-070000-2003-475 and FIT-330100-2004-91. This work has also been partially supported by the Czech Ministry of Education under project No. 1M0572 (Research Center DAR), by the Czech Science Foundation under project No. GACR 102/08/1593, and by the CSIC-CAS bilateral project 2006CZ002.
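The role of subpixel shifts described in this abstract can be illustrated with a minimal shift-and-add sketch (a deliberate idealization: the shifts are assumed known and aligned with the subpixel grid, and blur and noise are ignored, whereas the variational method above must estimate blurs and shifts jointly):

```python
import numpy as np

def lr_frame(hr, shift, factor=2):
    """Simulate a low-resolution frame: the camera is offset by `shift`
    high-resolution samples, then the scene is decimated by `factor`."""
    return hr[shift::factor]

# Hypothetical high-resolution 1-D scene.
hr = np.array([0, 1, 4, 9, 16, 25, 36, 49], dtype=float)

factor = 2
frames = [lr_frame(hr, s, factor) for s in range(factor)]

# Shift-and-add: each frame samples a distinct subpixel phase, so
# interleaving the frames on the HR grid recovers the scene exactly.
recon = np.empty_like(hr)
for s, frame in enumerate(frames):
    recon[s::factor] = frame
```

Because each low-resolution frame carries a different subpixel phase, the frames jointly contain full-resolution information; with unknown or non-integer shifts, registration (or the joint blur estimation above) is what makes this recovery possible.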

    Computational Imaging and its Application

    Traditional optical imaging systems have constrained angular and spatial resolution, depth of field, field of view, tolerance to aberrations and environmental conditions, and other image-quality limitations. Computational imaging provides an opportunity to create new functionality and improve the performance of imaging systems by encoding information optically and decoding it computationally. The design of a computational imaging system balances hardware cost against the accuracy and complexity of the algorithms. In this thesis, two computational imaging systems are presented: Randomized Aperture Imaging and Laser Suppression Imaging. The former increases the angular resolution of telescopes by replacing a continuous primary mirror with an array of lightweight small mirror elements, which potentially allows telescopes to have very large diameters at a reduced cost. The latter protects camera sensors from laser effects such as dazzle by use of a phase-coded pupil-plane mask. Machine learning and deep learning based algorithms were investigated to restore high-fidelity images from the coded acquisitions. The proposed imaging systems are verified by experiment and numerical modeling, and improved performance is demonstrated in comparison with the state of the art.
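The encode-optically/decode-computationally principle summarized above can be sketched with a toy 1-D example (the scene, point-spread function, and regularization weight are hypothetical stand-ins; the thesis systems use a phase-coded pupil mask and learned restoration algorithms rather than this simple inverse filter):

```python
import numpy as np

# Hypothetical 1-D scene and a known point-spread function standing in
# for the optical encoding (e.g., the effect of a coded pupil mask).
scene = np.zeros(32)
scene[10] = 1.0                      # a point source
psf = np.array([0.5, 0.3, 0.2])

# Optical encoding: circular convolution with the PSF.
H = np.fft.fft(psf, scene.size)
coded = np.real(np.fft.ifft(np.fft.fft(scene) * H))

# Computational decoding: a regularized (Wiener-style) inverse filter;
# eps is an assumed noise/regularization level.
eps = 1e-3
decoded = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(coded)
                              / (np.abs(H) ** 2 + eps)))
```

Because the encoding is known by design, the decoder can invert it stably; the regularizer prevents noise amplification at frequencies where the PSF transfers little energy.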

    Image enhancement methods and applications in computational photography

    Computational photography is a rapidly developing, cutting-edge topic in the applied optics, image sensor, and image processing fields, aiming to go beyond the limitations of traditional photography. Its innovations allow the photographer not merely to take an image but, more importantly, to perform computations on the captured image data. Good examples include high dynamic range imaging, focus stacking, super-resolution, and motion deblurring. Although extensive work has been done to explore image enhancement techniques in each subfield of computational photography, little attention has been given to simultaneously extending the depth of field and dynamic range of a scene. In my dissertation, I present an algorithm that combines focus stacking and high dynamic range (HDR) imaging to produce an image with both a greater depth of field (DOF) and a wider dynamic range than any of the input images. I also investigate super-resolution image restoration from multiple images that are possibly degraded by large motion blur. The proposed algorithm combines the super-resolution problem and the blind image deblurring problem in a unified framework. The blur kernel for each input image is estimated separately. I make no restrictions on the motion fields among images; that is, I estimate a dense motion field without simplifications such as parametric motion. While the proposed super-resolution method uses multiple regular images to enhance spatial resolution, single-image super-resolution is related to techniques for denoising or removing blur from a single captured image. In my dissertation, space-varying point spread function (PSF) estimation and image deblurring for a single image are also investigated.
    Regarding the PSF estimation, I make no restrictions on the type of blur or on how the blur varies spatially. Once the space-varying PSF is estimated, space-varying image deblurring is performed, which produces good results even for regions where the correct PSF is initially unclear. I also bring image enhancement applications to both the personal computer (PC) and Android platforms as computational photography applications.
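The focus-stacking component described above can be sketched on a hypothetical 1-D scanline (the synthetic frames and the sharpness measure, a locally averaged second difference, are illustrative assumptions; the dissertation's algorithm additionally fuses exposures for HDR):

```python
import numpy as np

def box_blur(x, w=5):
    """Simple defocus stand-in: moving-average blur of width w."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def sharpness(x, w=5):
    """Local focus measure: absolute second difference, locally averaged."""
    lap = np.abs(np.convolve(x, [1.0, -2.0, 1.0], mode="same"))
    return np.convolve(lap, np.ones(w) / w, mode="same")

# Hypothetical scanline with edges at indices 10 and 30.
scene = np.zeros(40)
scene[10:30] = 1.0
blurred = box_blur(scene)

# Two shots of a focus stack: each is in focus on one half of the line.
frame_a = scene.copy()
frame_a[20:] = blurred[20:]          # sharp on the left half
frame_b = scene.copy()
frame_b[:20] = blurred[:20]          # sharp on the right half

# Focus stacking: at each pixel keep the frame that is locally sharpest.
stack = np.stack([frame_a, frame_b])
pick = np.argmax(np.stack([sharpness(frame_a), sharpness(frame_b)]), axis=0)
fused = stack[pick, np.arange(scene.size)]
```

The fused scanline takes each edge from whichever frame rendered it in focus, so its error against the ground-truth scene is lower than that of either input frame.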

    Robust video super-resolution with registration efficiency adaptation


    Sub-pixel techniques to improve spatial resolution

    Image acquisition using a scene sampling device generally results in a loss of fidelity in the acquired image, particularly if the scene contains high-frequency features. Acquired images are also degraded by the blurring effects of acquisition filtering, image reconstruction, and additive noise. To compensate for these degradations, a digital restoration filter is needed that attempts to partially eliminate the blurring while avoiding amplification of the noise. In addition, to compensate for undersampling, a subpixel technique known as microscanning is required. This dissertation provides research into the spatial resolution enhancement of digital images based on subpixel techniques that help to minimize the impact of these degradations. The subpixel techniques investigated include microscanning and estimation of the function that measures the amount of blurring incurred during acquisition. These techniques are used in conjunction with a constrained least squares restoration filter to achieve the best possible representation of the original scene.
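The constrained least squares restoration mentioned above can be sketched in the frequency domain (a toy 1-D example with an assumed blur kernel and constraint weight; the classical CLS filter uses the discrete Laplacian as the smoothness constraint):

```python
import numpy as np

N = 32
scene = np.zeros(N)
scene[12] = 1.0                      # point target in the scene

# Assumed acquisition blur (normalized low-pass kernel) and its DFT.
h = np.array([0.2, 0.6, 0.2])
H = np.fft.fft(h, N)
degraded = np.real(np.fft.ifft(np.fft.fft(scene) * H))

# Constrained least squares filter:
#   X(w) = H*(w) Y(w) / (|H(w)|^2 + gamma |P(w)|^2)
# where P is the discrete Laplacian acting as the smoothness constraint
# and gamma (assumed here) trades data fit against noise amplification.
P = np.fft.fft([1.0, -2.0, 1.0], N)
gamma = 2e-3
restored = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(degraded)
                               / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)))
```

Unlike a plain inverse filter, the Laplacian term suppresses gain exactly at the high frequencies where noise would otherwise be amplified, which is the "constrained" part of the method.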