Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed l1/l2 Regularization
The l1/l2 ratio regularization function has shown good performance for
retrieving sparse signals in a number of recent works on blind deconvolution.
Indeed, it benefits from a scale-invariance property that is highly desirable
in the blind context. However, the l1/l2 function raises difficulties when
solving the nonconvex and nonsmooth minimization problems that result from
using such a penalty term in current restoration methods.
In this paper, we propose a new penalty based on a smooth approximation to the
l1/l2 function. In addition, we develop a proximal-based algorithm to solve
variational problems involving this function and we derive theoretical
convergence results. We demonstrate the effectiveness of our method through a
comparison with a recent alternating optimization strategy dealing with the
exact l1/l2 term, on an application to seismic data blind deconvolution.
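For intuition about the penalty itself, here is a minimal numerical sketch of one way a smooth surrogate for the l1/l2 ratio can be built; the hyperbolic smoothing of the numerator, the eta-perturbed denominator, and the constants alpha and eta are illustrative assumptions, not necessarily the exact penalty proposed in the paper.

```python
import numpy as np

def smoothed_l1_l2(x, alpha=1e-3, eta=1e-3):
    """One plausible smooth surrogate for ||x||_1 / ||x||_2:
    a differentiable hyperbolic smoothing of the l1 numerator
    over an eta-perturbed (hence nonzero) l2 denominator."""
    l1_smooth = np.sum(np.sqrt(x**2 + alpha**2) - alpha)
    l2_smooth = np.sqrt(np.sum(x**2) + eta**2)
    return l1_smooth / l2_smooth

rng = np.random.default_rng(0)
x = np.zeros(1000)
x[rng.choice(1000, size=20, replace=False)] = rng.standard_normal(20)

# Scale invariance: for small alpha and eta, the ratio is nearly unchanged
# when the signal is rescaled -- the property that makes l1/l2 attractive in
# blind deconvolution, where the scale of signal and filter is ambiguous.
print(smoothed_l1_l2(x), smoothed_l1_l2(10.0 * x))
```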
A Noise-Robust Method with Smoothed l1/l2 Regularization for Sparse Moving-Source Mapping
The method described here performs blind deconvolution of the beamforming
output in the frequency domain. To provide accurate blind deconvolution,
sparsity priors are introduced through a smoothed l1/l2 regularization term.
Because the mean of the noise in the power-spectrum domain depends on its
variance in the time domain, the proposed method includes a variance-estimation
step, which makes the blind deconvolution more robust. The method is validated
on both simulated and real data, and its performance is compared with two
well-known methods from the literature: the deconvolution approach for the
mapping of acoustic sources, and sound density modeling.
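As a rough illustration of the debiasing idea behind the variance-estimation step, the sketch below estimates the time-domain noise variance from a segment assumed to be source-free and subtracts the corresponding mean noise power from a periodogram; the segment choice, the simulated scenario, and the two-sided spectral scaling are assumptions for illustration, not the paper's beamforming pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1024.0, 4096
t = np.arange(n) / fs
sigma = 0.5                                      # true time-domain noise std
source = np.sin(2 * np.pi * 50 * t) * (t < 2.0)  # source active early only
y = source + sigma * rng.standard_normal(n)

# Variance estimation from a segment assumed source-free (last quarter).
sigma2_hat = np.var(y[3 * n // 4:])

# Periodogram of the full record (two-sided scaling). Additive white noise
# of variance sigma^2 raises the mean of every bin by sigma^2 / fs, so
# subtracting the estimated bias gives a less biased spectrum for any
# subsequent deconvolution step.
pxx = np.abs(np.fft.rfft(y))**2 / (fs * n)
pxx_debiased = np.maximum(pxx - sigma2_hat / fs, 0.0)
```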
Convexity in source separation: Models, geometry, and algorithms
Source separation or demixing is the process of extracting multiple
components entangled within a signal. Contemporary signal processing presents a
host of difficult source separation problems, from interference cancellation to
background subtraction, blind deconvolution, and even dictionary learning.
Despite the recent progress in each of these applications, advances in
high-throughput sensor technology place demixing algorithms under pressure to
accommodate extremely high-dimensional signals, separate an ever larger number
of sources, and cope with more sophisticated signal and mixing models. These
difficulties are exacerbated by the need for real-time action in automated
decision-making systems.
Recent advances in convex optimization provide a simple framework for
efficiently solving numerous difficult demixing problems. This article provides
an overview of the emerging field, explains the theory that governs the
underlying procedures, and surveys algorithms that solve them efficiently. We
aim to equip practitioners with a toolkit for constructing their own demixing
algorithms that work, as well as concrete intuition for why they work.
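As a toy instance of convex demixing, the sketch below separates a sparse spike train from a smooth oscillation by exploiting their incoherent sparse representations (the identity basis versus an orthonormal DCT) and solving the resulting l1-regularized least-squares problem with proximal-gradient (ISTA) steps; the dictionaries, regularization weight, and iteration count are illustrative choices, not taken from the article.

```python
import numpy as np
from scipy.fft import dct, idct

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
n = 512
spikes = np.zeros(n)
spikes[rng.choice(n, 8, replace=False)] = 3.0 * rng.standard_normal(8)
smooth = np.cos(2 * np.pi * 3 * np.arange(n) / n)
y = spikes + smooth + 0.01 * rng.standard_normal(n)

# ISTA on  min_{s,c}  0.5*||y - s - idct(c)||^2 + lam*(||s||_1 + ||c||_1).
# Both dictionaries (identity, orthonormal DCT) have unit spectral norm, so
# the stacked operator has Lipschitz constant 2 and step 1/2 is safe.
s, c, lam, step = np.zeros(n), np.zeros(n), 0.1, 0.5
for _ in range(500):
    r = y - s - idct(c, norm='ortho')
    s = soft(s + step * r, step * lam)
    c = soft(c + step * dct(r, norm='ortho'), step * lam)

print(np.linalg.norm(spikes - s) / np.linalg.norm(spikes))  # relative error
```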
Image Reconstruction in Optical Interferometry
This tutorial paper describes the problem of image reconstruction from
interferometric data with a particular focus on the specific problems
encountered at optical (visible/IR) wavelengths. The challenging issues in
image reconstruction from interferometric data are introduced within the
general framework of the inverse-problem approach. This framework is then used
to describe existing image-reconstruction algorithms in radio interferometry
and the new methods developed specifically for optical interferometry.
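To make the inverse-problem framing concrete, here is a minimal sketch in which "visibilities" are Fourier samples of a toy image on a random (u, v) coverage, and the image is recovered by gradient descent on a regularized least-squares criterion; the random coverage, quadratic regularizer, and step size are assumptions for illustration, far simpler than real optical-interferometry pipelines (which must also contend with missing or corrupted phases).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
x_true = np.zeros((n, n))
x_true[20:24, 30:34] = 1.0          # a toy "source" on the sky

# Forward model: the interferometer samples the Fourier transform of the
# image at a sparse set of (u, v) spatial frequencies (here random).
mask = rng.random((n, n)) < 0.15
vis = np.fft.fft2(x_true)[mask]     # noiseless complex "visibilities"

# Gradient descent on  min_x 0.5*||M F x - vis||^2 + mu*||x||^2,
# with mu and the step size as illustrative choices; the Lipschitz
# constant of the data term is at most n*n for the unnormalized FFT.
mu, step, x = 1e-3, 0.5 / (n * n), np.zeros((n, n))
for _ in range(200):
    r = np.zeros((n, n), dtype=complex)
    r[mask] = np.fft.fft2(x)[mask] - vis
    grad = np.real(np.fft.ifft2(r)) * (n * n) + 2 * mu * x  # F^H via scaled ifft2
    x -= step * grad

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```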