
    Development Of A High Performance Mosaicing And Super-Resolution Algorithm

    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by a robust super-resolution algorithm to produce the final high-resolution mosaic. Two types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate the algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics accurately quantify that improvement.
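
    As an illustration of the first two evaluation metrics listed above, here is a minimal sketch of mean square error and peak signal-to-noise ratio; it is a generic textbook formulation, not code from the dissertation, and the function names are my own.

```python
import numpy as np

def mse(reference, estimate):
    """Mean square error between two images of identical shape."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(reference, estimate)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```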

    An Improved Observation Model for Super-Resolution under Affine Motion

    Super-resolution (SR) techniques make use of subpixel shifts between frames in an image sequence to yield higher-resolution images. We propose an original observation model devoted to the case of non-isometric inter-frame motion, as required, for instance, in the context of airborne imaging sensors. First, we describe how the main observation models used in the SR literature deal with motion and explain why they are not suited to non-isometric motion. Then, we propose an extension of the observation model by Elad and Feuer adapted to affine motion. This model is based on a decomposition of affine transforms into successive shear transforms, each one efficiently implemented by row-by-row or column-by-column 1-D affine transforms. We demonstrate on synthetic and real sequences that our observation model, incorporated in an SR reconstruction technique, leads to better results for variable-scale motions and provides equivalent results for isometric motions.
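
    The shear decomposition mentioned in the abstract can be sketched with a simple 2x2 factorization: the affine matrix is split into an upper-triangular factor (a row-wise 1-D scale/shear) and a unit lower-triangular factor (a column-wise 1-D shear), each of which moves pixels along a single axis and can therefore be applied by 1-D resampling. This is a generic LU-style illustration of the idea; the paper's actual decomposition, including translation handling and degenerate cases, may differ.

```python
import numpy as np

def shear_factors(A):
    """
    Factor a 2x2 affine matrix A (with A[0, 0] != 0) as A = L @ U, where
    U is upper triangular (a row-wise 1-D scale/shear) and L is unit
    lower triangular (a column-wise 1-D shear).
    """
    a, b = A[0]
    c, d = A[1]
    if abs(a) < 1e-12:
        raise ValueError("A[0, 0] is (near) zero; a pivoted variant would be needed")
    L = np.array([[1.0, 0.0], [c / a, 1.0]])
    U = np.array([[a, b], [0.0, d - c * b / a]])
    return L, U

# Example: a 10-degree rotation combined with anisotropic scaling.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
A = R @ np.diag([1.2, 0.9])
L, U = shear_factors(A)
assert np.allclose(L @ U, A)
```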

    Super-resolution Using Adaptive Wiener Filters

    The spatial sampling rate of an imaging system is determined by the spacing of the detectors in the focal plane array (FPA). The spatial frequencies present in the image on the focal plane are band-limited by the optics, due to diffraction through a finite aperture. To guarantee that there will be no aliasing during image acquisition, the Nyquist criterion dictates that the sampling rate must be greater than twice the cut-off frequency of the optics. However, optical designs involve a number of trade-offs, and typical imaging systems are designed with some level of aliasing. We refer to such systems as detector limited, as opposed to optically limited. Furthermore, with or without aliasing, imaging systems invariably suffer from diffraction blur, optical aberrations, and noise. Multiframe super-resolution (SR) processing has proven successful in reducing aliasing and enhancing the resolution of images from detector-limited imaging systems.
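
    The Nyquist argument above can be made concrete with a small numeric check that compares the FPA sampling rate against twice the diffraction cut-off frequency of the optics, taken here as 1/(lambda * F#) for an incoherent, diffraction-limited system. The formula and the example numbers are standard textbook values, not taken from this paper.

```python
def is_detector_limited(pixel_pitch_um, wavelength_um, f_number):
    """
    Return True when the FPA sampling rate falls below twice the optical
    cut-off frequency, i.e. the system undersamples and aliasing is expected
    ("detector limited" in the sense used above).
    """
    f_cutoff = 1.0 / (wavelength_um * f_number)   # optical cut-off, cycles / um
    f_sample = 1.0 / pixel_pitch_um               # FPA sampling rate, cycles / um
    return f_sample < 2.0 * f_cutoff

# e.g. a 5 um pitch FPA behind f/4 optics at 0.55 um (visible band)
print(is_detector_limited(5.0, 0.55, 4.0))   # True: aliasing is expected
```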

    Spatiotemporal super-resolution for low bitrate H.264 video


    Image enhancement methods and applications in computational photography

    Computational photography is a rapidly developing, cutting-edge topic spanning applied optics, image sensors, and image processing, aimed at going beyond the limitations of traditional photography. Its innovations allow the photographer not merely to take an image but, more importantly, to perform computations on the captured image data. Good examples include high dynamic range imaging, focus stacking, super-resolution, and motion deblurring. Although extensive work has been done on image enhancement techniques in each subfield of computational photography, little attention has been given to simultaneously extending the depth of field and the dynamic range of a scene. In my dissertation, I present an algorithm that combines focus stacking and high dynamic range (HDR) imaging to produce an image with both a greater depth of field (DOF) and a wider dynamic range than any of the input images. I also investigate super-resolution image restoration from multiple images that may be degraded by large motion blur. The proposed algorithm combines the super-resolution problem and the blind image deblurring problem in a unified framework. The blur kernel for each input image is estimated separately, and no restrictions are placed on the motion fields among images; that is, a dense motion field is estimated without simplifications such as parametric motion. While the proposed super-resolution method uses multiple regular images to enhance spatial resolution, single-image super-resolution is related to denoising or removing blur from one captured image. Space-varying point spread function (PSF) estimation and image deblurring for a single image are also investigated. For the PSF estimation, no restrictions are placed on the type of blur or on how the blur varies spatially. Once the space-varying PSF is estimated, space-varying image deblurring is performed, which produces good results even in regions where the correct PSF is initially unclear. I also bring image enhancement applications to both the personal computer (PC) and Android platforms as computational photography applications.
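
    As a small illustration of the focus-stacking component discussed above, the sketch below fuses a pre-registered focal stack by selecting, per pixel, the frame with the highest local Laplacian energy as a sharpness proxy. It is only a simplified stand-in for the dissertation's joint DOF/HDR algorithm; the function name, the sharpness measure, and the assumption of registered, equal-exposure grayscale inputs are my own choices.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(images, window=9):
    """
    Fuse a focal stack by picking, per pixel, the frame whose local
    Laplacian energy (a simple sharpness proxy) is largest.
    `images` is a list of registered grayscale float arrays of equal shape.
    """
    stack = np.stack(images, axis=0)
    sharpness = np.stack(
        [uniform_filter(np.abs(laplace(img)), size=window) for img in images],
        axis=0,
    )
    best = np.argmax(sharpness, axis=0)       # index of the sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```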