A Review of Image Super Resolution using Deep Learning
The image processing methods collectively known as super-resolution have proven useful in creating high-quality images from a group of low-resolution photographic images. Single image super resolution (SISR) has been applied in a variety of fields. This paper offers an in-depth analysis of several recent image super-resolution works developed in various domains. To understand the most recent developments in image super-resolution systems, these publications have been examined with particular emphasis on the domain for which each system was designed and whether image enhancement was used, among other factors. To improve the accuracy of image super-resolution, different deep learning techniques might be explored. In particular, further research into image super-resolution for medical imaging could improve the suitability of the data for subsequent analysis. In light of this, there is considerable scope for research in the field of medical imaging.
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness, a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, especially the blind case, remains limited by complex
application conditions that make the blur kernel spatially variant and hard to
estimate. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation
In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various degrees of spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationships between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) were analyzed. Our results showed that (1) the FSDAF model was the most robust to variations in LHI and TVI at both the scene level and the local level, although it was less computationally efficient than the other models except one-pair learning; (2) Fit-FC had the highest computational efficiency; it was accurate in predicting reflectance but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in predicting large-area land cover change and preserving image structures, but it was the least computationally efficient model; (4) STARFM was good at predicting phenological change but is not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal or abrupt changes. These findings can guide users in selecting the appropriate STIF method for their applications.
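The premise shared by these STIF methods can be sketched with the simplest weight-free baseline: predict the fine-resolution image at the target date by adding the coarse-resolution temporal change to the fine image at the reference date. A toy illustration (function names are ours; none of the five compared models is this simple, and they refine this idea with spatial and spectral similarity weights):

```python
import numpy as np

def upsample_nearest(coarse, factor):
    """Nearest-neighbour upsampling of a coarse image by an integer factor."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def temporal_change_prediction(fine_t1, coarse_t1, coarse_t2, factor):
    """Predict the fine image at t2 as the fine image at t1 plus the
    coarse-resolution temporal change observed between t1 and t2."""
    delta = upsample_nearest(coarse_t2 - coarse_t1, factor)
    return fine_t1 + delta
```

Weight-function methods such as STARFM replace the naive per-pixel addition with a weighted combination over spectrally and spatially similar neighbours, which is where their differences in accuracy and computational cost arise.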
Guided Nonlocal Patch Regularization and Efficient Filtering-Based Inversion for Multiband Fusion
In multiband fusion, an image with a high spatial and low spectral resolution
is combined with an image with a low spatial but high spectral resolution to
produce a single multiband image having high spatial and spectral resolutions.
This comes up in remote sensing applications such as pansharpening~(MS+PAN),
hyperspectral sharpening~(HS+PAN), and HS-MS fusion~(HS+MS). Remote sensing
images are textured and have repetitive structures. Motivated by nonlocal
patch-based methods for image restoration, we propose a convex regularizer that
(i) takes into account long-distance correlations, (ii) penalizes patch
variation, which is more effective than pixel variation for capturing texture
information, and (iii) uses the higher spatial resolution image as a guide
image for weight computation. We come up with an efficient ADMM algorithm for
optimizing the regularizer along with a standard least-squares loss function
derived from the imaging model. The novelty of our algorithm is that by
expressing patch variation as filtering operations and by judiciously splitting
the original variables and introducing latent variables, we are able to solve
the ADMM subproblems efficiently using FFT-based convolution and
soft-thresholding. As far as the reconstruction quality is concerned, our
method is shown to outperform state-of-the-art variational and deep learning
techniques.
Comment: Accepted in IEEE Transactions on Computational Imaging
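The ADMM subproblem structure described above, FFT-based convolution plus soft-thresholding, rests on two standard building blocks, sketched here in generic form (a minimal illustration, not the authors' implementation; function names are ours):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrink each entry toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fft_convolve(image, kernel):
    """Circular 2-D convolution computed in the Fourier domain, where the
    filtering operations in the ADMM subproblems become pointwise products."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(kernel, s=image.shape)))
```

In a splitting of this kind, the quadratic subproblem is diagonalized by the FFT (convolutions become pointwise divisions), while the l1-type patch-variation term separates into elementwise soft-thresholding, which is what makes each ADMM iteration cheap.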
Global Auto-regressive Depth Recovery via Iterative Non-local Filtering
Existing depth sensing techniques have many shortcomings in terms of resolution, completeness, and accuracy. The performance of 3-D broadcasting systems is therefore limited by the challenges of capturing high-resolution depth data. In this paper, we present a novel framework for obtaining high-quality depth images and multi-view depth videos from simple acquisition systems. We first propose a single depth image recovery algorithm based on auto-regressive (AR) correlations. A fixed-point iteration algorithm under the global AR model is derived to efficiently solve the large-scale quadratic programming. Each iteration is equivalent to a nonlocal filtering process with a residue feedback. Then, we extend our framework to an AR-based multi-view depth video recovery framework, where each depth map is recovered from low-quality measurements with the help of the corresponding color image, depth maps from neighboring views, and depth maps of temporally adjacent frames. AR coefficients on nonlocal spatiotemporal neighborhoods in the algorithm are designed to improve the recovery performance. We further discuss the connections between our model and other methods such as graph-based tools, and demonstrate that our algorithms enjoy the advantages of both global and local methods. Experimental results on both the Middlebury datasets and other captured datasets show that our method improves the quality of recovered depth images and multi-view depth videos compared with state-of-the-art approaches.
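The fixed-point scheme described above, where each iteration adds a filtered residue back to the current estimate, follows the general pattern of a Richardson iteration for a quadratic objective. A minimal generic sketch (not the authors' AR-specific algorithm; the matrix A is a hypothetical stand-in for the global AR system):

```python
import numpy as np

def richardson_solve(A, b, tau, iters=500):
    """Fixed-point iteration x <- x + tau * (b - A @ x) for a symmetric
    positive-definite A; converges when 0 < tau < 2 / lambda_max(A).
    Each step feeds the residue b - A @ x back into the estimate."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + tau * (b - A @ x)   # residue feedback step
    return x
```

In the recovery setting, applying A corresponds to a (nonlocal) filtering pass over the depth map, so each iteration can be implemented as filtering plus residue feedback without ever forming the large system matrix explicitly.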
A Deep Primal-Dual Network for Guided Depth Super-Resolution
In this paper we present a novel method to increase the spatial resolution of
depth images. We combine a deep fully convolutional network with a non-local
variational method in a deep primal-dual network. The joint network computes a
noise-free, high-resolution estimate from a noisy, low-resolution input depth
map. Additionally, a high-resolution intensity image is used to guide the
reconstruction in the network. By unrolling the optimization steps of a
first-order primal-dual algorithm and formulating it as a network, we can train
our joint method end-to-end. This not only enables us to learn the weights of
the fully convolutional network, but also to optimize all parameters of the
variational method and its optimization procedure. The training of such a deep
network requires a large dataset for supervision. Therefore, we generate
high-quality depth maps and corresponding color images with a physically based
renderer. In an exhaustive evaluation we show that our method outperforms the
state-of-the-art on multiple benchmarks.
Comment: BMVC 201
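Unrolling a first-order primal-dual algorithm for a fixed number of iterations can be illustrated with a plain, non-learned Chambolle-Pock loop for 1-D total-variation denoising; in an unrolled network, the step sizes and the regularizer would become learnable parameters. A generic sketch, not the paper's actual network:

```python
import numpy as np

def grad(x):
    """Forward difference with a zero gradient at the right boundary."""
    return np.diff(x, append=x[-1])

def gradT(y):
    """Adjoint of grad (negative divergence) for the boundary handling above."""
    out = np.empty_like(y)
    out[0] = -y[0]
    out[1:-1] = y[:-2] - y[1:-1]
    out[-1] = y[-2]
    return out

def tv_denoise_unrolled(f, lam, iters=50, tau=0.25, sigma=0.5):
    """Fixed number of Chambolle-Pock iterations for
    min_x 0.5 * ||x - f||^2 + lam * ||grad(x)||_1.
    Convergence needs tau * sigma * ||grad||^2 <= 1 (here 0.5 <= 1)."""
    x = f.copy()
    x_bar = f.copy()
    y = np.zeros_like(f)
    for _ in range(iters):
        y = np.clip(y + sigma * grad(x_bar), -lam, lam)  # dual prox
        x_new = (x - tau * gradT(y) + tau * f) / (1.0 + tau)  # primal prox
        x_bar = 2.0 * x_new - x  # over-relaxation
        x = x_new
    return x
```

Because the iteration count is fixed, the loop is a feed-forward computation graph: replacing the hand-set `tau`, `sigma`, and the TV penalty with learned components is the basic move behind unrolled primal-dual networks.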