
    Comparison of super-resolution algorithms applied to retinal images

    A critical challenge in biomedical imaging is to optimally balance the trade-off among image resolution, signal-to-noise ratio, and acquisition time. Acquiring a high-resolution image is possible; however, it is expensive, time consuming, or both. Resolution is also limited by the physical properties of the imaging device, such as the nature and size of the input source radiation and the optics of the device. Super-resolution (SR), an off-line approach for improving the resolution of an image, is free of these trade-offs. Several methodologies, such as interpolation, frequency-domain, regularization, and learning-based approaches, have been developed over the past several years for SR of natural images. We review some of these methods, demonstrate the positive impact expected from SR of retinal images, and investigate the performance of various SR techniques, using a fundus image as an example for the simulations.
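    As a concrete baseline for the interpolation-based family mentioned above, the sketch below simulates a low-resolution fundus image and compares simple interpolators by PSNR. It assumes scikit-image is available; skimage.data.retina() is used only as a stand-in fundus image, and the plain 4x downscaling is an illustrative degradation, not the paper's simulation protocol.

        # Illustrative interpolation-based SR baseline on a stand-in fundus image.
        import numpy as np
        from skimage import data, img_as_float
        from skimage.transform import resize
        from skimage.metrics import peak_signal_noise_ratio

        hr = img_as_float(data.retina())                      # reference "high-resolution" fundus image
        lr = resize(hr, (hr.shape[0] // 4, hr.shape[1] // 4, hr.shape[2]),
                    anti_aliasing=True)                       # simulate a 4x lower-resolution acquisition

        for name, order in [("nearest", 0), ("bilinear", 1), ("bicubic", 3)]:
            sr = resize(lr, hr.shape, order=order, anti_aliasing=False)
            print(f"{name:8s} PSNR = {peak_signal_noise_ratio(hr, sr, data_range=1.0):.2f} dB")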

    A Study on Super-Resolution Image Reconstruction Techniques

    With the rapid development of space technology and its related technologies, more and more remote sensing platforms are sent into outer space to survey the Earth. Recognizing and positioning these space objects is the basis of knowing about the space environment, yet apart from orbit determination and radio-signal recognition there are few effective methods for space target recognition. Super-resolution image reconstruction, which is based on images of space objects, provides an effective way of addressing this problem. In this paper, the principle of super-resolution image reconstruction and several typical reconstruction methods are introduced. By comparison, nonparametric finite-support restoration techniques are analyzed in detail. Finally, several aspects of super-resolution image reconstruction that warrant further study are put forward.
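    One classic member of the finite-support restoration family named above is the Gerchberg-Papoulis iteration, which alternates between enforcing the measured low-frequency spectrum and the known finite spatial support. The 1-D sketch below is only illustrative and is not claimed to be the specific technique analyzed in the paper.

        # Gerchberg-Papoulis style band extrapolation for a finite-support signal.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 256
        support = slice(96, 160)                     # known spatial support of the signal
        band = 20                                    # number of measured low-frequency bins (each side)

        x = np.zeros(n)
        x[support] = rng.standard_normal(support.stop - support.start)

        X = np.fft.fft(x)
        measured = np.zeros(n, dtype=bool)
        measured[:band] = measured[-band:] = True    # only low frequencies are observed

        mask = np.zeros(n)
        mask[support] = 1.0

        est = np.zeros(n)
        for _ in range(500):
            E = np.fft.fft(est)
            E[measured] = X[measured]                # project onto the measured spectrum
            est = np.fft.ifft(E).real * mask         # project onto the finite spatial support

        print("relative error:", np.linalg.norm(est - x) / np.linalg.norm(x))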

    Mathematical Model Development of Super-Resolution Image Wiener Restoration

    In super-resolution (SR), a set of degraded low-resolution (LR) images is used to reconstruct a higher-resolution image that suffers from acquisition degradations. One way to boost the visual quality of SR images is to use restoration filters to remove artifacts from the reconstructed images. We propose an efficient method to optimally allocate the LR pixels on the high-resolution grid and introduce a mathematical derivation of a stochastic Wiener filter. It relies on the continuous-discrete-continuous model and is constrained by the periodic and nonperiodic interrelationships between the different frequency components of the proposed SR system. We analyze an end-to-end model and formulate the Wiener filter as a function of the parameters associated with the proposed SR system, such as the image-gathering and display response indices, the system average signal-to-noise ratio, and the inter-subpixel shifts between the LR images. Simulation and experimental results demonstrate that the derived Wiener filter with the optimal allocation of LR images results in sharper reconstruction. When compared with other SR techniques, our approach outperforms them in both quality and computational time.
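    For orientation, the snippet below implements the textbook frequency-domain Wiener restoration step on which such filters are built, using a scalar average SNR. The filter actually derived in the paper is tied to its continuous-discrete-continuous SR model and system parameters, which are not reproduced here; the PSF and SNR values are illustrative assumptions.

        # Generic frequency-domain Wiener restoration with a known PSF and scalar SNR.
        import numpy as np

        def gaussian_psf(shape, sigma):
            """Gaussian PSF centered in an array of the same shape as the image."""
            y, x = np.indices(shape)
            cy, cx = shape[0] // 2, shape[1] // 2
            psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
            return psf / psf.sum()

        def wiener_restore(degraded, psf, snr):
            """Apply W = H* / (|H|^2 + 1/SNR) in the frequency domain."""
            H = np.fft.fft2(np.fft.ifftshift(psf))
            W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
            return np.real(np.fft.ifft2(W * np.fft.fft2(degraded)))

        # Synthetic usage: blur + noise, then restore.
        rng = np.random.default_rng(1)
        image = rng.random((128, 128))
        psf = gaussian_psf(image.shape, sigma=1.5)
        H = np.fft.fft2(np.fft.ifftshift(psf))
        degraded = np.real(np.fft.ifft2(H * np.fft.fft2(image))) + 0.01 * rng.standard_normal(image.shape)
        restored = wiener_restore(degraded, psf, snr=100.0)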

    Single Frame Image super Resolution using Learned Directionlets

    In this paper, a new directionally adaptive, learning-based, single-image super-resolution method using a multi-direction wavelet transform, called directionlets, is presented. The method uses directionlets to effectively capture directional features and to extract edge information along different directions from a set of available high-resolution images. This information is used as the training set for super-resolving a low-resolution input image: the directionlet coefficients at finer scales of its high-resolution counterpart are learned locally from the training set, and the inverse directionlet transform recovers the super-resolved high-resolution image. Simulation results show that the proposed approach outperforms standard interpolation techniques such as cubic-spline interpolation, as well as standard wavelet-based learning, both visually and in terms of mean squared error (MSE). The method also gives good results with aliased images.
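    Directionlets have no off-the-shelf implementation, so the sketch below illustrates the same learning idea with an ordinary separable Haar wavelet from PyWavelets: the low-resolution input is treated as the coarse band of the unknown high-resolution image, and its missing detail bands are filled in from the best-matching coarse patches of a training high-resolution image. The images, patch size, and single-image training set are illustrative assumptions, not the method of the paper.

        # Patch-matching wavelet-domain SR sketch (separable Haar stands in for directionlets).
        import numpy as np
        import pywt
        from skimage import data, img_as_float
        from skimage.transform import resize
        from skimage.util import view_as_windows

        train_hr = img_as_float(data.camera())                    # training high-resolution image
        cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(train_hr, "haar")     # its coarse and detail bands

        # Stand-in low-resolution input, scaled to match the Haar coarse-band amplitude.
        lr = resize(img_as_float(data.moon()), cA_t.shape, anti_aliasing=True)
        lrA = 2.0 * lr

        p = 8                                                      # patch size
        n_r = (cA_t.shape[0] - p) // p + 1
        n_c = (cA_t.shape[1] - p) // p + 1
        train_patches = view_as_windows(cA_t, (p, p), step=p).reshape(-1, p * p)

        cH = np.zeros_like(lrA); cV = np.zeros_like(lrA); cD = np.zeros_like(lrA)
        for i in range(0, lrA.shape[0] - p + 1, p):
            for j in range(0, lrA.shape[1] - p + 1, p):
                query = lrA[i:i + p, j:j + p].ravel()
                k = np.argmin(np.linalg.norm(train_patches - query, axis=1))
                r, c = np.unravel_index(k, (n_r, n_c))
                r, c = r * p, c * p                                # top-left corner in the training bands
                cH[i:i + p, j:j + p] = cH_t[r:r + p, c:c + p]
                cV[i:i + p, j:j + p] = cV_t[r:r + p, c:c + p]
                cD[i:i + p, j:j + p] = cD_t[r:r + p, c:c + p]

        sr = pywt.idwt2((lrA, (cH, cV, cD)), "haar")               # super-resolved estimate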

    Superresolution Enhancement of Hyperspectral CHRIS/Proba Images With a Thin-Plate Spline Nonrigid Transform Model

    Given the hyperspectral-oriented waveband configuration of multiangular CHRIS/Proba imagery, the scope of its application could widen if the present 18-m resolution were improved. The multiangular images of CHRIS can be used as input for superresolution (SR) image reconstruction. A critical procedure in SR is accurate registration of the low-resolution images. Conventional methods based on an affine transformation may not be effective given the local geometric distortion in high off-nadir angular images. This paper examines the use of a nonrigid transform to improve the result of a nonuniform interpolation and deconvolution SR method. A scale-invariant feature transform is used to collect control points (CPs). To ensure the quality of the CPs, a rigorous screening procedure is designed: 1) an ambiguity test; 2) the m-estimator sample consensus method; and 3) an iterative method using statistical characteristics of the distribution of random errors. A thin-plate spline (TPS) nonrigid transform is then used for the registration. The proposed registration method is examined with a Delaunay triangulation-based nonuniform interpolation and reconstruction SR method. Our results show that the TPS nonrigid transform allows accurate registration of the angular images. SR results obtained from simulated LR images are evaluated using three quantitative measures, namely, relative mean-square error, structural similarity, and edge stability. Compared to SR methods that use an affine transform, the proposed method performs better on all three evaluation measures. With a higher level of spatial detail, SR-enhanced CHRIS images may be more effective than the original data in various applications.
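    The fragment below sketches only the final warping step: fitting a thin-plate spline to matched control points and resampling the image, with SciPy's RBFInterpolator standing in for a dedicated TPS implementation. The SIFT matching and the three-stage CP screening described above are omitted, and the control points and distortion are synthetic.

        # Thin-plate-spline image warp from matched control points (backward mapping).
        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.ndimage import map_coordinates

        def tps_warp(image, src_pts, dst_pts):
            """Resample `image` so that src_pts land on dst_pts."""
            tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")
            rows, cols = np.indices(image.shape)
            grid = np.column_stack([rows.ravel(), cols.ravel()])
            src_coords = tps(grid)                                 # where each output pixel samples from
            warped = map_coordinates(image, src_coords.T, order=1, mode="nearest")
            return warped.reshape(image.shape)

        # Synthetic usage: a handful of (row, col) control point pairs with local distortion.
        rng = np.random.default_rng(0)
        image = rng.random((64, 64))
        src = np.array([[10, 10], [10, 50], [50, 10], [50, 50], [32, 32]], dtype=float)
        dst = src + rng.normal(scale=1.5, size=src.shape)
        registered = tps_warp(image, src, dst)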

    Super Resolution Imaging Needs Better Registration for Better Quality Results

    In this paper, the trade-off between the effect of registration error and the number of images used in super-resolution image reconstruction is studied. Super-resolution image reconstruction is a three-phase process, of which registration is of utmost importance: a set of low-resolution images is registered and then used to reconstruct a high-resolution image. The study demonstrates the effect of registration error and the benefit of using more low-resolution images on the quality of the reconstructed image. It reveals that registration error degrades the reconstructed image and that, without a better registration methodology, even a better super-resolution method is of little use. It is also observed that, without further improvement in the registration technique, little improvement can be achieved by increasing the number of input low-resolution images.
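    The toy experiment below reproduces the spirit of this study with a simple align-and-average reconstruction: frames with known sub-pixel shifts are realigned using shift estimates corrupted by a controllable registration error, and the PSNR is reported for different frame counts. The test image, noise level, and error magnitudes are illustrative assumptions, not the paper's setup.

        # Effect of registration error vs. number of frames in a toy align-and-average reconstruction.
        import numpy as np
        from scipy.ndimage import shift as nd_shift
        from skimage import data, img_as_float
        from skimage.metrics import peak_signal_noise_ratio

        hr = img_as_float(data.camera())
        rng = np.random.default_rng(0)

        def reconstruct(n_frames, reg_error_std):
            """Average n_frames noisy, shifted copies after realignment with noisy shift estimates."""
            shifts = rng.uniform(-2.0, 2.0, size=(n_frames, 2))
            frames = [nd_shift(hr, s, order=1, mode="nearest")
                      + 0.05 * rng.standard_normal(hr.shape) for s in shifts]
            estimates = shifts + rng.normal(scale=reg_error_std, size=shifts.shape)
            aligned = [nd_shift(f, -e, order=1, mode="nearest") for f, e in zip(frames, estimates)]
            return np.clip(np.mean(aligned, axis=0), 0.0, 1.0)

        for n_frames in (2, 4, 8, 16):
            for err in (0.0, 0.5, 1.0):
                psnr = peak_signal_noise_ratio(hr, reconstruct(n_frames, err), data_range=1.0)
                print(f"frames={n_frames:2d}  reg-error sigma={err:.1f} px  PSNR={psnr:5.2f} dB")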

    Superresolution imaging: A survey of current techniques

    Cristóbal, G., Gil, E., Šroubek, F., Flusser, J., Miravet, C., Rodríguez, F. B., “Superresolution imaging: A survey of current techniques”, Proceedings of SPIE - The International Society for Optical Engineering, 7074, 2008. Copyright 2008, Society of Photo-Optical Instrumentation Engineers. Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited size of sensors) and instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy, and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images. Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution (SR). The stability of these methods depends on having more than one image of the same frame. Differences between images are necessary to provide new information, but they can be almost imperceptible. State-of-the-art SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between images, but they lack any apparatus for calculating the blurs. In this paper, after a review of current SR techniques, we describe two recently developed SR methods by the authors. First, we introduce a variational method that minimizes a regularized energy function with respect to the high-resolution image and the blurs; in this way we establish a unified way to simultaneously estimate the blurs and the high-resolution image. By estimating the blurs we automatically estimate the shifts with subpixel accuracy, which is essential for good SR performance. Second, an innovative learning-based algorithm using a neural architecture for SR is described. Comparative experiments on real data illustrate the robustness and utility of both methods. This research has been partially supported by grants TEC2007-67025/TCM, TEC2006-28009-E, BFI-2003-07276, and TIN-2004-04363-C03-03 from the Spanish Ministry of Science and Innovation, by PROFIT projects FIT-070000-2003-475 and FIT-330100-2004-91, by the Czech Ministry of Education under project No. 1M0572 (Research Center DAR), by the Czech Science Foundation under project No. GACR 102/08/1593, and by the CSIC-CAS bilateral project 2006CZ002.
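    As a rough companion to the variational method mentioned above, the sketch below performs gradient descent on a data-fidelity term plus a smoothed total-variation regularizer, with the blur assumed known; joint blur estimation, which is the key contribution of that method, is deliberately omitted. The blur width, regularization weight, and step size are illustrative assumptions.

        # Minimal variational SR: gradient descent on ||blur-then-decimate(x) - y||^2 + lam * TV(x).
        import numpy as np
        from scipy.ndimage import gaussian_filter, zoom

        def degrade(x, sigma, factor):
            """Forward model: blur the HR estimate, then decimate."""
            return gaussian_filter(x, sigma)[::factor, ::factor]

        def degrade_adjoint(r, sigma, factor, hr_shape):
            """Adjoint of the forward model: zero-fill upsample, then blur."""
            up = np.zeros(hr_shape)
            up[::factor, ::factor] = r
            return gaussian_filter(up, sigma)

        def tv_grad(x, eps=1e-3):
            """Gradient of a smoothed total-variation penalty."""
            gx = np.diff(x, axis=0, append=x[-1:, :])
            gy = np.diff(x, axis=1, append=x[:, -1:])
            mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
            div = (np.diff(gx / mag, axis=0, prepend=(gx / mag)[:1, :])
                   + np.diff(gy / mag, axis=1, prepend=(gy / mag)[:1, :]))
            return -div

        def variational_sr(y, factor=2, sigma=1.0, lam=0.02, steps=200, step_size=0.5):
            """Estimate an HR image from one LR observation y (blur assumed known)."""
            x = zoom(y, factor, order=1)                       # interpolated initial guess
            for _ in range(steps):
                r = degrade(x, sigma, factor) - y              # residual in the LR domain
                x -= step_size * (degrade_adjoint(r, sigma, factor, x.shape) + lam * tv_grad(x))
            return x

        # Usage on a random LR observation (stand-in for real data).
        y = np.random.default_rng(2).random((64, 64))
        x_hat = variational_sr(y)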

    Deep Burst Denoising

    Noise is an inherent issue of low-light image capture, one which is exacerbated on mobile devices due to their narrow apertures and small sensors. One strategy for mitigating noise in a low-light situation is to increase the shutter time of the camera, thus allowing each photosite to integrate more light and decrease noise variance. However, there are two downsides of long exposures: (a) bright regions can exceed the sensor range, and (b) camera and scene motion will result in blurred images. Another way of gathering more light is to capture multiple short (thus noisy) frames in a "burst" and intelligently integrate the content, thus avoiding the above downsides. In this paper, we use the burst-capture strategy and implement the intelligent integration via a recurrent, fully convolutional deep neural network (CNN). We build our novel multiframe architecture to be a simple addition to any single-frame denoising model and design it to handle an arbitrary number of noisy input frames. We show that it achieves state-of-the-art denoising results on our burst dataset, improving on the best published multi-frame techniques such as VBM4D and FlexISP. Finally, we explore other applications of image enhancement by integrating content from multiple frames and demonstrate that our DNN architecture generalizes well to image super-resolution.
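    A drastically simplified PyTorch sketch of the single-frame-model-plus-recurrence structure described above follows; the layer widths, depth, and fusion scheme are illustrative assumptions, and this is not the network from the paper.

        # Toy recurrent, fully convolutional burst denoiser: a single-frame model plus a recurrent state.
        import torch
        import torch.nn as nn

        class SingleFrameDenoiser(nn.Module):
            def __init__(self, ch=32):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                )
                self.out = nn.Conv2d(ch, 1, 3, padding=1)

            def forward(self, x):
                feat = self.body(x)
                return self.out(feat), feat          # residual prediction + features

        class BurstDenoiser(nn.Module):
            """Adds a recurrent state on top of the single-frame features."""
            def __init__(self, ch=32):
                super().__init__()
                self.single = SingleFrameDenoiser(ch)
                self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)
                self.out = nn.Conv2d(ch, 1, 3, padding=1)

            def forward(self, burst):                # burst: (B, T, 1, H, W)
                state, outputs = None, []
                for t in range(burst.shape[1]):
                    frame = burst[:, t]
                    _, feat = self.single(frame)
                    if state is None:
                        state = torch.zeros_like(feat)
                    state = torch.relu(self.fuse(torch.cat([feat, state], dim=1)))
                    outputs.append(frame + self.out(state))   # residual denoising per frame
                return torch.stack(outputs, dim=1)

        # Usage on a random noisy burst of 5 frames.
        net = BurstDenoiser()
        noisy = torch.rand(2, 5, 1, 64, 64)
        denoised = net(noisy)                        # (2, 5, 1, 64, 64)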