67 research outputs found
Superresolution imaging: A survey of current techniques
Cristóbal, G., Gil, E., Šroubek, F., Flusser, J., Miravet, C., Rodríguez, F. B., “Superresolution imaging: A survey of current techniques”, Proceedings of SPIE - The International Society for Optical Engineering, 7074, 2008. Copyright 2008, Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only; systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and
tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited size of sensors) and
instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy,
and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images.
Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution
(SR). The stability of these methods depends on having more than one image of the same scene. Differences
between images are necessary to provide new information, but they can be almost imperceptible. State-of-the-art
SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between
images, but they lack any apparatus for calculating the blurs. In this paper, after introducing a review of
current SR techniques we describe two recently developed SR methods by the authors. First, we introduce a
variational method that minimizes a regularized energy function with respect to the high resolution image and
blurs. In this way we establish a unifying way to simultaneously estimate the blurs and the high resolution
image. By estimating blurs we automatically estimate shifts with subpixel accuracy, which is inherent for good
SR performance. Second, an innovative learning-based algorithm using a neural architecture for SR is described.
Comparative experiments on real data illustrate the robustness and utility of both methods.

This research has been partially supported by the following grants: TEC2007-67025/TCM, TEC2006-28009-E, BFI-2003-07276, and TIN-2004-04363-C03-03 from the Spanish Ministry of Science and Innovation, and by PROFIT projects FIT-070000-2003-475 and FIT-330100-2004-91. This work has also been partially supported by the Czech Ministry of Education under project No. 1M0572 (Research Center DAR), by the Czech Science Foundation under project No. GACR 102/08/1593, and by the CSIC-CAS bilateral project 2006CZ002.
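The variational approach above minimizes a regularized energy of the form E(x) = Σₖ ‖D Hₖ x − yₖ‖² + λR(x) jointly over the high-resolution image x and the blurs Hₖ. As a minimal illustrative sketch (not the authors' algorithm, which also estimates the blurs and subpixel shifts), the following assumes a known identity blur, average-pooling decimation as D, and a smoothed total-variation regularizer, and performs plain gradient descent on x:

```python
import numpy as np

def downsample(x, factor):
    """Average-pool decimation: a simple stand-in for sensor integration D."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def grad_tv_smooth(x, eps=1e-3):
    """Gradient of a smoothed total-variation regularizer (illustrative choice)."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def sr_step(x, lr_frames, factor, lam=0.01, step=0.5):
    """One gradient step on E(x) = sum_k ||D x - y_k||^2 + lam * TV(x)."""
    grad = np.zeros_like(x)
    for y in lr_frames:
        resid = downsample(x, factor) - y                  # data misfit per frame
        # adjoint of average-pooling: spread residual back over each block
        grad += np.kron(resid, np.ones((factor, factor))) / factor**2
    grad += lam * grad_tv_smooth(x)
    return x - step * grad
```

Starting from a zero (or coarsely upsampled) image and iterating `sr_step` drives down the data misfit while the TV term suppresses decimation artifacts; the full method would alternate such updates with blur-estimation steps.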
Analysis of displacement errors in high-resolution image reconstruction with multisensors
An image-acquisition system composed of an array of sensors, each with a subarray of sensing elements of suitable size, has recently become popular for increasing spatial resolution with a high signal-to-noise ratio beyond the performance bound of the technologies that constrain the manufacture of imaging devices. Small perturbations around the ideal subpixel locations of the sensing elements (responsible for capturing the sequence of undersampled degraded frames), caused by imperfections in fabrication, limit the performance of the signal-processing algorithms that process and integrate the acquired images into the desired enhanced-resolution result. The contributions of this paper include an analysis of the effect of the displacement errors on the convergence rate of the iterative approach for solving the transform-based preconditioned system of equations. It is then established that using the MAP, L2-norm, or H1-norm regularization functional leads to a proof of linear convergence of the conjugate gradient method in terms of the displacement errors caused by the imperfect subpixel locations. Simulation results support the analytical results.
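The convergence analysis above concerns the conjugate gradient (CG) method applied to a regularized, preconditioned system. A minimal sketch of the unpreconditioned case, solving the Tikhonov (L2-regularized) normal equations (AᵀA + λI)x = Aᵀy for a generic observation matrix A (a stand-in for the blur-plus-decimation operator; the paper's transform-based preconditioner is omitted):

```python
import numpy as np

def conjugate_gradient(apply_A, b, x0=None, tol=1e-8, max_iter=200):
    """Conjugate gradient for a symmetric positive-definite operator apply_A."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)          # initial residual
    p = r.copy()                # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

def solve_tikhonov(A, y, lam):
    """Solve (A^T A + lam I) x = A^T y; lam > 0 keeps the system SPD."""
    normal_op = lambda v: A.T @ (A @ v) + lam * v
    return conjugate_gradient(normal_op, A.T @ y)
```

A good preconditioner, which is the focus of the paper's analysis, would replace `normal_op` with a transformed operator whose spectrum is clustered, so the iteration count stays low even under displacement errors.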
Mathematical Model Development of Super-Resolution Image Wiener Restoration
In super-resolution (SR), a set of degraded low-resolution (LR) images is used to reconstruct a higher-resolution image that suffers from acquisition degradations. One way to boost the visual quality of SR images is to apply restoration filters that remove artifacts from the reconstructed images. We propose an efficient method to optimally allocate the LR pixels on the high-resolution grid and introduce a mathematical derivation of a stochastic Wiener filter. It relies on the continuous-discrete-continuous model and is constrained by the periodic and nonperiodic interrelationships between the different frequency components of the proposed SR system. We analyze an end-to-end model and formulate the Wiener filter as a function of the parameters associated with the proposed SR system, such as the image-gathering and display response indices, the system's average signal-to-noise ratio, and the inter-subpixel shifts between the LR images. Simulation and experimental results demonstrate that the derived Wiener filter with the optimal allocation of LR images yields sharper reconstructions. When compared with other SR techniques, our approach outperforms them in both quality and computational time.
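The stochastic Wiener filter derived in the paper depends on the full continuous-discrete-continuous SR model; for orientation, the classical single-image, frequency-domain form with a flat signal-to-noise ratio reduces to X̂(f) = H*(f)·Y(f) / (|H(f)|² + 1/SNR), which can be sketched as:

```python
import numpy as np

def wiener_deconvolve(y, h, snr):
    """Classical frequency-domain Wiener restoration (flat-SNR simplification,
    not the paper's stochastic SR formulation):
        X_hat(f) = conj(H(f)) * Y(f) / (|H(f)|^2 + 1/SNR)
    y: degraded image; h: blur kernel (zero-padded to y's shape); snr: scalar."""
    H = np.fft.fft2(h, s=y.shape)
    Y = np.fft.fft2(y)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * Y))
```

The 1/SNR term regularizes frequencies where |H| is small; the paper's filter generalizes this by folding in the image-gathering and display responses and the inter-subpixel shifts between LR frames.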
Super Resolution of Wavelet-Encoded Images and Videos
In this dissertation, we address the multiframe super resolution reconstruction problem for wavelet-encoded images and videos. The goal of multiframe super resolution is to obtain one or more high resolution images by fusing a sequence of degraded or aliased low resolution images of the same scene. Since the low resolution images may be unaligned, a registration step is required before super resolution reconstruction. Therefore, we first explore in-band (i.e., in the wavelet domain) image registration; then, we investigate super resolution. Our motivation for analyzing the image registration and super resolution problems in the wavelet domain is the growing trend in wavelet-encoded imaging and wavelet encoding for image/video compression. Due to drawbacks of the widely used discrete cosine transform in image and video compression, a considerable amount of literature is devoted to wavelet-based methods. However, since wavelets are shift-variant, existing methods cannot utilize wavelet subbands efficiently. In order to overcome this drawback, we establish and exploit the direct relationship between the subbands under a translational shift, for both image registration and super resolution. We then employ our devised in-band methodology in a motion-compensated video compression framework to demonstrate the effective usage of wavelet subbands. Super resolution can also be used as a post-processing step in video compression in order to decrease the size of the video files to be compressed, with downsampling added as a pre-processing step. Therefore, we present a video compression scheme that utilizes super resolution to reconstruct the high-frequency information lost during downsampling. In addition, super resolution is a crucial post-processing step for satellite imagery, due to the fact that it is hard to update imaging devices after a satellite is launched. Thus, we also demonstrate the usage of our devised methods in enhancing the resolution of pansharpened multispectral images.
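The shift-variance of critically sampled wavelets, the drawback the dissertation's in-band methods work around, is easy to demonstrate with a one-level Haar DWT: an even shift of the signal maps to a clean shift of each subband, but an odd shift does not, because the samples change parity. A small numpy illustration:

```python
import numpy as np

def haar_dwt1d(x):
    """One level of the orthonormal Haar DWT: (approximation, detail) subbands."""
    e, o = x[0::2], x[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

x = np.array([4., 6., 10., 12., 8., 6., 5., 3.])
a, d = haar_dwt1d(x)

# An even (2-sample) circular shift of the signal shifts each subband by one...
a_even, _ = haar_dwt1d(np.roll(x, 2))
print(np.allclose(a_even, np.roll(a, 1)))   # True

# ...but an odd (1-sample) shift does not: the DWT is shift-variant.
a_odd, _ = haar_dwt1d(np.roll(x, 1))
print(np.allclose(a_odd, np.roll(a, 1)))    # False
```

Relating the subbands of the odd-shifted signal back to the original subbands analytically, rather than treating them as unrelated, is the kind of direct in-band relationship the dissertation develops for registration and super resolution.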
Adaptive Wiener Filter Super-Resolution of Color Filter Array Images
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address the fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform-interpolation-based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing, and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
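At the core of AWF-style SR is estimating each high-resolution pixel as a weighted sum of nearby, non-uniformly placed LR samples, with weights w = R⁻¹p computed from modeled auto- and cross-correlations. A toy sketch under an assumed isotropic correlation model r(d) = ρᵈ (the model form and parameter values here are illustrative placeholders, not the paper's fitted statistics):

```python
import numpy as np

def wiener_weights(sample_pos, target_pos, rho=0.7, noise_var=0.01):
    """Minimum-MSE weights w = R^{-1} p for estimating the value at target_pos
    from scattered samples, assuming correlation r(d) = rho**d with distance d
    plus white observation noise of variance noise_var."""
    # R: sample-to-sample autocorrelation matrix (plus noise on the diagonal)
    d = np.linalg.norm(sample_pos[:, None, :] - sample_pos[None, :, :], axis=-1)
    R = rho ** d + noise_var * np.eye(len(sample_pos))
    # p: cross-correlation between each sample and the desired HR location
    p = rho ** np.linalg.norm(sample_pos - target_pos, axis=-1)
    return np.linalg.solve(R, p)
```

As expected, samples nearest the target location receive the largest weights; the actual AWF algorithm builds such weight vectors per local observation window, and the color extension described above couples them across channels via global channel-to-channel statistics.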
All-passive pixel super-resolution of time-stretch imaging
Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of this technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (approx. 2-5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s), and is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing.
DeepSUM: Deep Neural Network for Super-Resolution of Unregistered Multitemporal Images
Recently, convolutional neural networks (CNNs) have been successfully applied to many remote sensing problems. However, deep learning techniques for multi-image super-resolution (SR) from multitemporal unregistered imagery have received little attention so far. This article proposes a novel CNN-based technique that exploits both spatial and temporal correlations to combine multiple images. This novel framework integrates the spatial registration task directly inside the CNN, and allows one to exploit the representation learning capabilities of the network to enhance registration accuracy. The entire SR process relies on a single CNN with three main stages: shared 2-D convolutions to extract high-dimensional features from the input images; a subnetwork proposing registration filters derived from the high-dimensional feature representations; and 3-D convolutions for slow fusion of the features from multiple images. The whole network can be trained end-to-end to recover a single high-resolution image from multiple unregistered low-resolution images. The method presented in this article is the winner of the PROBA-V SR challenge issued by the European Space Agency (ESA).
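Structurally, the network applies the same 2-D convolutions to every input frame (shared weights) and then fuses the per-frame feature maps along the temporal axis. A shape-level numpy skeleton of just these two stages, with no registration subnetwork, no learning, and single kernels standing in for deep stacks (all simplifications for illustration):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as in CNN layers)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def shared_features_then_fuse(frames, k2d, k_time):
    """DeepSUM-style skeleton: one shared 2-D kernel extracts features from
    every frame, then a length-T temporal kernel fuses them into one map."""
    # stage 1: the SAME kernel is applied to each frame (shared weights)
    feats = np.stack([conv2d_valid(f, k2d) for f in frames])   # (T, H', W')
    # stage 3 analogue: fuse across the temporal axis ('slow fusion')
    fused = np.tensordot(k_time, feats, axes=(0, 0))           # (H', W')
    return fused
```

In the actual architecture the registration subnetwork sits between these stages, predicting per-frame filters that align the feature maps before the 3-D fusion convolutions; learned multi-channel filter banks replace the single kernels used here.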