Simultaneous estimation of super-resolved scene and depth map from low resolution defocused observations
This paper presents a novel technique to simultaneously estimate the depth map and the focused image of a scene, both at super-resolution, from its defocused observations. Super-resolution refers to the generation of high spatial resolution images from a sequence of low resolution images. Hitherto, super-resolution techniques have been restricted mostly to the intensity domain. In this paper, we extend the scope of super-resolution imaging to acquire depth estimates at high spatial resolution simultaneously. Given a sequence of low resolution, blurred, and noisy observations of a static scene, the problem is to generate a dense depth map at a resolution higher than that attainable from the observations, as well as to estimate the true high resolution focused image. Both the depth and the image are modeled as separate Markov random fields (MRFs), and a maximum a posteriori estimation method is used to recover the high resolution fields. Since there is no relative motion between the scene and the camera, as is the case with most super-resolution and structure recovery techniques, we do away with the correspondence problem.
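The MAP-MRF machinery mentioned in this abstract can be illustrated with a toy sketch. The code below is not the paper's method (which jointly estimates depth and image from defocused observations); it is a minimal, assumed setup showing a MAP estimate of a single field under Gaussian noise and a quadratic Gauss-Markov MRF smoothness prior, solved by gradient descent. The weights, step size, and periodic boundary handling are illustrative choices.

```python
import numpy as np

def map_mrf_estimate(y, beta=0.5, step=0.1, iters=200):
    """MAP estimate of a field x from a noisy observation y, assuming a
    Gaussian likelihood and a quadratic (Gauss-Markov) MRF prior:
        minimize ||x - y||^2 + beta * sum of squared neighbor differences.
    Solved by gradient descent with periodic boundaries for simplicity."""
    x = y.copy()
    for _ in range(iters):
        # Gradient of the pairwise smoothness term is -2*beta times the
        # discrete Laplacian of x.
        lap = (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)
               + np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1) - 4.0 * x)
        x = x - step * (2.0 * (x - y) - 2.0 * beta * lap)
    return x
```

In the limit this solves a linear system, shrinking high-frequency (rough) components of the observation while leaving smooth structure intact, which is the basic effect an MRF smoothness prior contributes to a MAP estimate.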
A total variation regularization based super-resolution reconstruction algorithm for digital video
The super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.
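The TV-regularized reconstruction model in this abstract can be sketched in miniature. The code below is an assumed simplification, not the paper's algorithm: it uses a single low-resolution frame, average pooling as a stand-in for blur plus decimation (no motion), a smoothed TV penalty, and plain gradient descent instead of the fixed-point iteration with preconditioning described above. All parameter values are illustrative.

```python
import numpy as np

def downsample(x, f=2):
    """Average-pool by factor f (stand-in for blur + decimation)."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(y, f=2):
    """Adjoint of the average-pooling operator (replicate, then scale)."""
    return np.repeat(np.repeat(y, f, axis=0), f, axis=1) / (f * f)

def tv_grad(x, eps=1e-2):
    """Gradient of the smoothed total-variation penalty sqrt(|∇x|^2 + eps)."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field.
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def sr_tv(y, f=2, lam=0.02, step=0.25, iters=300):
    """Gradient descent on the TV-regularized SR cost
        ||D x - y||^2 + lam * TV(x),   D = downsample."""
    x = np.repeat(np.repeat(y, f, axis=0), f, axis=1)  # replicated initial guess
    for _ in range(iters):
        r = downsample(x, f) - y            # data-fidelity residual
        x = x - step * (upsample(r, f) + lam * tv_grad(x))
    return x
```

The TV term is what distinguishes this family of methods from Laplacian (quadratic) regularization: it penalizes total edge magnitude rather than squared gradients, so sharp edges survive the regularization instead of being smeared.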
Mathematical Model Development of Super-Resolution Image Wiener Restoration
In super-resolution (SR), a set of degraded low-resolution (LR) images is used to reconstruct a higher-resolution image that suffers from acquisition degradations. One way to boost the visual quality of SR images is to use restoration filters to remove artifacts from the reconstructed images. We propose an efficient method to optimally allocate the LR pixels on the high-resolution grid and introduce a mathematical derivation of a stochastic Wiener filter. It relies on the continuous-discrete-continuous model and is constrained by the periodic and nonperiodic interrelationships between the different frequency components of the proposed SR system. We analyze an end-to-end model and formulate the Wiener filter as a function of the parameters associated with the proposed SR system, such as image gathering and display response indices, system average signal-to-noise ratio, and inter-subpixel shifts between the LR images. Simulation and experimental results demonstrate that the derived Wiener filter with the optimal allocation of LR images results in sharper reconstruction. When compared with other SR techniques, our approach outperforms them in both quality and computational time.
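For orientation, the classical frequency-domain Wiener filter underlying such restoration can be sketched as follows. This is the textbook deconvolution form with a flat-spectrum prior, not the paper's constrained continuous-discrete-continuous derivation; the kernel and SNR value are illustrative assumptions.

```python
import numpy as np

def kernel_to_otf(kernel, shape):
    """Zero-pad a small blur kernel to the image size and shift its centre
    to the origin, so its 2-D FFT is the optical transfer function H."""
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    return np.fft.fft2(np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1)))

def wiener_restore(blurred, kernel, snr=1e6):
    """Frequency-domain Wiener deconvolution:
        W(u, v) = H*(u, v) / (|H(u, v)|^2 + 1/SNR),
    where 1/SNR approximates the noise-to-signal power ratio."""
    H = kernel_to_otf(kernel, blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```

The 1/SNR term is what keeps the filter stable where |H| is small: instead of dividing by a near-zero transfer function (as inverse filtering would), the Wiener filter attenuates those frequencies in proportion to how badly noise would be amplified.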
Medical Image Imputation from Image Collections
We present an algorithm for creating high resolution anatomically plausible images consistent with acquired clinical brain MRI scans with large inter-slice spacing. Although large data sets of clinical images contain a wealth of information, time constraints during acquisition result in sparse scans that fail to capture much of the anatomy. These characteristics often render computational analysis impractical as many image analysis algorithms tend to fail when applied to such images. Highly specialized algorithms that explicitly handle sparse slice spacing do not generalize well across problem domains. In contrast, we aim to enable application of existing algorithms that were originally developed for high resolution research scans to significantly undersampled scans. We introduce a generative model that captures fine-scale anatomical structure across subjects in clinical image collections and derive an algorithm for filling in the missing data in scans with large inter-slice spacing. Our experimental results demonstrate that the resulting method outperforms state-of-the-art upsampling super-resolution techniques, and promises to facilitate subsequent analysis not previously possible with scans of this quality. Our implementation is freely available at https://github.com/adalca/papago . Comment: Accepted at IEEE Transactions on Medical Imaging (© 2018 IEEE).
Mathematical analysis of super-resolution methodology
The attainment of super resolution (SR) from a sequence of degraded undersampled images can be viewed as reconstruction of the high-resolution (HR) image from a finite set of its projections on a sampling lattice. This can then be formulated as an optimization problem whose solution is obtained by minimizing a cost function. The approaches adopted to solve the formulated optimization problem, and their analysis, are crucial. The image acquisition scheme is important in the modeling of the degradation process. The need for model accuracy is undeniable in the attainment of SR, along with the design of an algorithm whose robust implementation will produce the desired quality in the presence of model parameter uncertainty. To keep the presentation focused and of reasonable size, data acquisition with multisensors instead of, say, a video camera is considered.