Recent Progress in Image Deblurring
This paper comprehensively reviews recent developments in image
deblurring, including non-blind/blind and spatially invariant/variant deblurring
techniques. These techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while
blind deblurring techniques must also derive an accurate blur kernel.
Given the critical role of image restoration in modern imaging systems,
which must provide high-quality images under complex conditions such as
motion, undesirable lighting, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how they handle ill-posedness, a crucial issue in deblurring tasks,
existing methods can be grouped into five categories: Bayesian inference
frameworks, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite a certain
level of progress, image deblurring, especially the blind case, remains
limited by complex application conditions that make the blur kernel hard
to obtain and often spatially variant. We provide a holistic understanding
of and deep insight into image deblurring in this review. An analysis of
the empirical evidence for representative methods and practical issues,
as well as a discussion of promising future directions, are also
presented.
Comment: 53 pages, 17 figures
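The shared objective described above — recovering a latent sharp image x from a blurry observation modeled as b = k ⊛ x + n — is easiest to see in the non-blind, spatially invariant case. Below is a minimal NumPy sketch using classical Wiener deconvolution, one illustrative choice among the many methods the review covers; the function name and the `noise_to_signal` regularization constant are assumptions for this example, not from the paper.

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_to_signal=1e-3):
    """Non-blind deblurring via Wiener filtering in the Fourier domain.

    Assumes the standard forward model b = k * x + n with circular
    convolution. `noise_to_signal` regularizes the inversion where the
    kernel's frequency response is near zero -- the source of the
    ill-posedness discussed in the review.
    """
    # Pad the kernel to the image size and shift it so its centre sits
    # at the origin, making FFT multiplication match spatial convolution.
    k = np.zeros_like(blurred, dtype=float)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    K = np.fft.fft2(k)
    B = np.fft.fft2(blurred)
    # Wiener filter: conj(K) / (|K|^2 + nsr) instead of a bare 1 / K.
    X = np.conj(K) * B / (np.abs(K) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(X))
```

On a noiseless, circularly blurred image this recovers most of the lost detail; blind deblurring is harder precisely because `kernel` is unknown.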
Non-parametric PSF estimation from celestial transit solar images using blind deconvolution
Context: Characterization of instrumental effects in astronomical imaging is
important in order to extract accurate physical information from the
observations. The measured image in a real optical instrument is usually
represented by the convolution of an ideal image with a Point Spread Function
(PSF). Additionally, the image acquisition process is also contaminated by
other sources of noise (read-out, photon-counting). The problem of estimating
both the PSF and a denoised image is called blind deconvolution and is
ill-posed.
Aims: We propose a blind deconvolution scheme that relies on image
regularization. Contrary to most methods presented in the literature, our
method does not assume a parametric model of the PSF and can thus be applied to
any telescope.
Methods: Our scheme uses a wavelet analysis prior model on the image and weak
assumptions on the PSF. We use observations from a celestial transit, where the
occulting body can be assumed to be a black disk. These constraints allow us to
retain meaningful solutions for the filter and the image, eliminating trivial,
translated and interchanged solutions. Under an additive Gaussian noise
assumption, they also enforce noise canceling and avoid reconstruction
artifacts by promoting the whiteness of the residual between the blurred
observations and the cleaned data.
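The whiteness constraint mentioned above can be checked numerically: a white (uncorrelated) residual has an autocorrelation close to a delta function, with all off-peak lags near zero. The following helper is an illustrative sketch of such a check; the function name and scoring rule are assumptions for this example and are not taken from the paper.

```python
import numpy as np

def residual_whiteness(residual):
    """Score how 'white' a residual image is via its autocorrelation.

    Returns the largest off-peak autocorrelation magnitude after
    normalizing the zero-lag peak to 1; a perfectly white residual
    scores close to 0, a correlated one noticeably higher.
    """
    r = residual - residual.mean()
    # Wiener-Khinchin: the inverse FFT of the power spectrum is the
    # circular autocorrelation of the signal.
    power = np.abs(np.fft.fft2(r)) ** 2
    ac = np.real(np.fft.ifft2(power))
    ac /= ac[0, 0]        # normalize the zero-lag peak to 1
    ac[0, 0] = 0.0        # ignore the peak itself
    return np.max(np.abs(ac))
```

A residual that still contains blurred structure scores higher than pure noise, which is why promoting residual whiteness discourages reconstruction artifacts.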
Results: Our method is applied to synthetic and experimental data. The PSF is
estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for
SDO/AIA using the 2012 Venus transit. Results show that the proposed
non-parametric blind deconvolution method estimates the core of the
PSF with quality similar to that of parametric methods proposed in the
literature. We also show that, when these parametric estimates are
incorporated in the acquisition model, the resulting PSF estimate
outperforms both the parametric and non-parametric methods.
Comment: 31 pages, 47 figures
Understanding Kernel Size in Blind Deconvolution
Most blind deconvolution methods pre-define a large kernel size to
guarantee the support domain. This is likely to introduce blur kernel
estimation error, yielding severe artifacts in the deblurred results. In this
paper, we first analyze, theoretically and experimentally, the mechanism
behind estimation error with oversized kernels, and show that it persists
even on noise-free blurry images. To suppress this adverse effect, we
propose a low-rank regularization on the blur kernel that exploits the
structural information in degraded kernels, by which the oversized-kernel
effect can be effectively suppressed, and we propose an efficient
optimization algorithm to solve the resulting problem. Experimental results
on benchmark datasets show that the proposed method is comparable with the
state of the art when a proper kernel size is set, and performs much better,
both quantitatively and qualitatively, when handling larger kernel sizes.
Deblurring results on real-world blurry images further validate the
effectiveness of the proposed method.
Comment: Accepted by WACV 201
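A low-rank regularization on the kernel can be illustrated with a truncated-SVD projection, as might appear as one step inside an alternating estimation loop. This is a sketch under assumed blur-kernel constraints (nonnegativity, unit sum); the paper's actual regularizer and optimization algorithm may differ.

```python
import numpy as np

def low_rank_project(kernel, rank=3):
    """Project a blur kernel toward a low-rank, nonnegative,
    sum-to-one approximation.

    Truncating the SVD keeps the dominant structure of an oversized
    kernel and discards small, noise-like singular components -- the
    kind of structural prior a low-rank regularization exploits.
    """
    u, s, vt = np.linalg.svd(kernel, full_matrices=False)
    s[rank:] = 0.0                      # keep only the top `rank` modes
    k = u @ np.diag(s) @ vt
    # Re-impose blur-kernel constraints: nonnegativity and unit mass.
    k = np.clip(k, 0.0, None)
    total = k.sum()
    return k / total if total > 0 else k
```

Note that the clipping step can raise the rank slightly; in a full solver the low-rank and simplex constraints would be handled by alternating or proximal updates rather than a single projection.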