Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. All of these techniques share the objective of inferring a
latent sharp image from one or several blurry observations; blind
deblurring techniques must additionally estimate an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how they handle ill-posedness, a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: the Bayesian
inference framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite a certain level
of progress, image deblurring, especially the blind case, remains limited by
complex application conditions that make the blur kernel spatially variant
and hard to estimate. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
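As a concrete illustration of the non-blind, spatially invariant case, here is a minimal 1D Richardson-Lucy sketch, a classic method in the Bayesian inference category (the kernel, signal sizes, and all names are illustrative, not taken from the paper):

```python
import numpy as np

def richardson_lucy(y, kernel, n_iter=100, eps=1e-12):
    """Non-blind Richardson-Lucy deconvolution for a known, shift-invariant kernel."""
    x = np.full_like(y, y.mean())            # flat, positive initial estimate
    k_flip = kernel[::-1]                    # adjoint of the convolution
    for _ in range(n_iter):
        blurred = np.convolve(x, kernel, mode="same")
        ratio = y / np.maximum(blurred, eps) # data / model, guarded against zero
        x = x * np.convolve(ratio, k_flip, mode="same")
    return x

# Toy example: a single spike blurred by a normalized box kernel.
kernel = np.ones(5) / 5.0
sharp = np.zeros(64)
sharp[32] = 1.0
blurry = np.convolve(sharp, kernel, mode="same")
restored = richardson_lucy(blurry, kernel, n_iter=100)
```

The multiplicative update keeps the estimate nonnegative and progressively re-concentrates the blurred energy toward the original spike.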
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
Image Reconstruction in Optical Interferometry
This tutorial paper describes the problem of image reconstruction from
interferometric data with a particular focus on the specific problems
encountered at optical (visible/IR) wavelengths. The challenging issues in
image reconstruction from interferometric data are introduced in the general
framework of inverse problem approach. This framework is then used to describe
existing image reconstruction algorithms in radio interferometry and the new
methods specifically developed for optical interferometry.
Comment: accepted for publication in IEEE Signal Processing Magazine
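Interferometers sample the Fourier transform of the sky (the "visibilities") on an incomplete set of spatial frequencies, which is what makes reconstruction an ill-posed inverse problem. A toy 1D sketch of the resulting "dirty image" (all parameters are illustrative):

```python
import numpy as np

# Visibilities: samples of the sky's Fourier transform on incomplete frequencies.
rng = np.random.default_rng(1)
n = 128
sky = np.zeros(n)
sky[40], sky[90] = 1.0, 0.5          # two point sources
vis = np.fft.fft(sky)                # full (ideal) visibility set

mask = rng.random(n) < 0.3           # only ~30% of frequencies are observed
mask[0] = True                       # keep the zero frequency (total flux)
dirty = np.real(np.fft.ifft(np.where(mask, vis, 0.0)))

# The dirty image is the true sky convolved with the "dirty beam",
# the inverse FFT of the sampling pattern, so ringing sidelobes appear.
beam = np.real(np.fft.ifft(mask.astype(float)))
```

Reconstruction algorithms then try to undo the convolution with the dirty beam under a prior, which is the inverse-problem framework the tutorial develops.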
A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal
Unveiling meaningful geophysical information from seismic data requires
dealing with both random and structured "noises". As their amplitude may be
greater than signals of interest (primaries), additional prior information is
especially important in performing efficient signal separation. We address here
the problem of multiple reflections, caused by wave-field bouncing between
layers. Since only approximate models of these phenomena are available, we
propose a flexible framework for time-varying adaptive filtering of seismic
signals that uses sparse representations and builds on inaccurate templates. We recast
the joint estimation of adaptive filters and primaries in a new convex
variational formulation. This approach allows us to incorporate plausible
knowledge about noise statistics, data sparsity and slow filter variation in
parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a
constrained minimization problem that alleviates standard regularization issues
in finding hyperparameters. The approach performs well in low
signal-to-noise-ratio conditions, on both simulated and real field seismic
data.
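The convex primal-dual formulation is beyond a short snippet, but the underlying idea of template-based adaptive subtraction can be sketched with a plain least-squares FIR fit (a drastically simplified stand-in: no sparsity prior, no time-varying filters; all signals and names are illustrative):

```python
import numpy as np

def adaptive_subtract(d, t, flen=5):
    """Fit a short FIR filter h so the filtered template h * t best matches the
    data d in the least-squares sense, then subtract the filtered template.
    A simplified stand-in for the sparsity-constrained primal-dual scheme."""
    # Shifted copies of the template form the columns of a convolution matrix.
    # (np.roll wraps around; fine here since the template sits in the interior.)
    T = np.column_stack([np.roll(t, k) for k in range(flen)])
    h, *_ = np.linalg.lstsq(T, d, rcond=None)
    return d - T @ h, h

# Toy trace: two primaries plus a multiple generated from a distorted template.
n = 256
primaries = np.zeros(n)
primaries[60], primaries[180] = 1.0, -0.8
template = np.zeros(n)
template[100] = 1.0                              # inaccurate model of the multiple
true_filter = np.array([0.0, 0.6, 0.3])          # unknown distortion of the template
multiples = np.convolve(template, true_filter)[:n]
d = primaries + multiples
est_primaries, h = adaptive_subtract(d, template, flen=5)
```

Because the primaries and the filtered template do not overlap in this toy trace, the least-squares fit recovers the distortion filter and the subtraction leaves the primaries intact; the paper's sparsity and slow-variation constraints handle the realistic overlapping case.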
Extended object reconstruction in adaptive-optics imaging: the multiresolution approach
We propose the application of multiresolution transforms, such as wavelets
(WT) and curvelets (CT), to the reconstruction of images of extended objects
that have been acquired with adaptive optics (AO) systems. Such multichannel
approaches normally make use of probabilistic tools in order to distinguish
significant structures from noise and reconstruction residuals. Furthermore, we
aim to check the historical assumption that image-reconstruction algorithms
using static PSFs are not suitable for AO imaging. We convolve an image of
Saturn taken with the Hubble Space Telescope (HST) with AO PSFs from the 5-m
Hale telescope at the Palomar Observatory and add both shot and readout noise.
Subsequently, we apply different approaches to the blurred and noisy data in
order to recover the original object. The approaches include multi-frame blind
deconvolution (with the algorithm IDAC), myopic deconvolution with
regularization (with MISTRAL) and wavelets- or curvelets-based static PSF
deconvolution (AWMLE and ACMLE algorithms). We used the mean squared error
(MSE) and the structural similarity index (SSIM) to compare the results. We
discuss the strengths and weaknesses of the two metrics. We found that CT
produces better results than WT, as measured in terms of MSE and SSIM.
Multichannel deconvolution with a static PSF produces results which are
generally better than the results obtained with the myopic/blind approaches
(for the images we tested) thus showing that the ability of a method to
suppress the noise and to track the underlying iterative process is just as
critical as the capability of the myopic/blind approaches to update the PSF.
Comment: In revision in Astronomy & Astrophysics. 19 pages, 13 figures
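As a reference point for the two comparison metrics, here is a minimal sketch of MSE and a single-window SSIM computed from global statistics (the standard SSIM averages the same expression over local sliding windows; the constants follow the usual K1 = 0.01, K2 = 0.03 choice, and the test images are illustrative):

```python
import numpy as np

def mse(x, y):
    """Mean squared error: lower is better, zero for identical images."""
    return np.mean((x - y) ** 2)

def global_ssim(x, y, dynamic_range=1.0):
    """SSIM computed once from global image statistics; the standard index
    averages this expression over local sliding windows."""
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(3)
img = rng.random((32, 32))
noisy = img + 0.1 * rng.standard_normal((32, 32))
```

SSIM compares luminance, contrast, and structure rather than pointwise differences, which is why the two metrics can rank reconstructions differently, a point the paper discusses.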