Robust compressive sensing of sparse signals: A review
Compressive sensing generally relies on the L2-norm for data fidelity, whereas many applications call for robust estimators. Among the scenarios in which robust performance is required, applications where the sampling process takes place in the presence of impulsive noise, i.e., where measurements are corrupted by outliers, are of particular importance. This article reviews robust nonlinear reconstruction strategies for sparse signals based on replacing the commonly used L2-norm with M-estimators as data-fidelity functions. The derived methods outperform existing compressed sensing techniques in impulsive environments while achieving good performance in light-tailed environments, thus offering a robust framework for CS.
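As a minimal illustration of the idea (our own sketch, not a specific method from the review): replace the L2 data-fidelity term with a Huber M-estimator and solve the resulting L1-penalized problem by proximal gradient descent. All function names and parameter values below are assumptions.

    import numpy as np

    def huber_psi(r, delta):
        # Influence function (derivative) of the Huber loss: linear for small
        # residuals, clipped for outliers, which bounds their effect.
        return np.clip(r, -delta, delta)

    def robust_cs(A, y, lam=0.1, delta=1.0, n_iter=500):
        # Minimize sum(huber(y - A x)) + lam * ||x||_1 by proximal gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L; |psi'| <= 1 so L <= ||A||^2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = -A.T @ huber_psi(y - A @ x, delta)   # gradient of fidelity term
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        return x

With the ordinary L2 fidelity, a single large outlier in y can dominate the gradient; clipping the residuals through the Huber influence function is what gives the reconstruction its robustness.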
A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal
Unveiling meaningful geophysical information from seismic data requires dealing with both random and structured "noises". As their amplitude may be greater than that of the signals of interest (primaries), additional prior information is especially important for efficient signal separation. We address here the problem of multiple reflections, caused by wave-field bouncing between layers. Since only approximate models of these phenomena are available, we propose a flexible framework for time-varying adaptive filtering of seismic signals, using sparse representations based on inaccurate templates. We recast the joint estimation of adaptive filters and primaries in a new convex variational formulation. This approach allows us to incorporate plausible knowledge about noise statistics, data sparsity, and slow filter variation in parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a constrained minimization problem that alleviates the standard regularization issues in finding hyperparameters. The approach demonstrates good performance in low signal-to-noise ratio conditions, on both simulated and real field seismic data.
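As a hedged illustration of the kind of primal-dual proximal scheme involved (not the authors' algorithm; the generic constrained problem min ||x||_1 subject to ||Ax - y||_2 <= eps stands in for their seismic formulation), the Chambolle-Pock iterations look like this:

    import numpy as np

    def project_ball(z, y, eps):
        # Euclidean projection onto {z : ||z - y||_2 <= eps}.
        d = z - y
        n = np.linalg.norm(d)
        return z if n <= eps else y + d * (eps / n)

    def primal_dual_l1_ball(A, y, eps, n_iter=1000):
        # Chambolle-Pock for min ||x||_1 s.t. ||A x - y||_2 <= eps.
        L = np.linalg.norm(A, 2)              # operator norm of A
        tau = sigma = 0.99 / L                # valid: tau * sigma * L**2 < 1
        x = np.zeros(A.shape[1]); x_bar = x.copy(); z = np.zeros(A.shape[0])
        for _ in range(n_iter):
            u = z + sigma * (A @ x_bar)
            z = u - sigma * project_ball(u / sigma, y, eps)   # prox of conjugate (Moreau)
            v = x - tau * (A.T @ z)
            x_new = np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)  # prox of tau*||.||_1
            x_bar = 2 * x_new - x
            x = x_new
        return x

Note how the constraint enters only through a projection, so the noise level eps replaces a hard-to-tune regularization weight; this is the sense in which a constrained formulation alleviates hyperparameter issues.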
Composite Minimization: Proximity Algorithms and Their Applications
ABSTRACT
Image and signal processing problems of practical importance, such as incomplete data recovery and compressed sensing, are often modeled as nonsmooth optimization problems whose objective functions are the sum of two terms, each of which is the composition of a prox-friendly function with a matrix. There is therefore a practical need to solve such optimization problems. Besides the nondifferentiability of the objective functions and the large dimension of the underlying images and signals, the sum of the two composed terms is not, in general, prox-friendly, which makes solving these problems challenging. Many algorithms have been proposed in the literature to attack these problems by making use of the prox-friendly functions they contain. However, the efficiency of these algorithms relies heavily on the underlying structures of the matrices, particularly for large-scale optimization problems. In this dissertation, we propose a novel algorithmic framework that exploits the availability of the prox-friendly functions without requiring any structural information about the matrices. This makes our algorithms suitable for the large-scale optimization problems of interest. We also prove the convergence of the developed algorithms.
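To fix ideas, the generic model described above can be written as follows (a standard form; the symbol names f_1, f_2, B_1, B_2 are our own, not the dissertation's):

\[
  \min_{x \in \mathbb{R}^n} \; f_1(B_1 x) + f_2(B_2 x),
  \qquad
  \operatorname{prox}_{\lambda f}(v) = \operatorname*{arg\,min}_{u} \Big\{ \tfrac{1}{2}\|u - v\|_2^2 + \lambda f(u) \Big\},
\]

where f_1 and f_2 are prox-friendly, meaning their proximity operators have closed forms or can be evaluated cheaply, while the compositions f_i(B_i x) generally are not.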
This dissertation has three main parts. In part 1, we consider the minimization of functions that are the sum of compositions of prox-friendly functions with matrices. We characterize the solutions of the associated optimization problems as the solutions of fixed-point equations formulated in terms of the proximity operators of the duals of the prox-friendly functions. Making use of the flexibility provided by this characterization, we develop a block Gauss-Seidel iterative scheme for finding a solution to the optimization problem and prove its convergence. We discuss the connection of our algorithms with some existing ones and point out the advantages of our proposed scheme.
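The dual proximity operators in such characterizations are tied to the primal ones by the Moreau decomposition prox_f(v) + prox_{f*}(v) = v. A small numerical check of that identity in Python (our own example, not code from the dissertation), using f = ||.||_1, whose conjugate f* is the indicator of the l-infinity unit ball:

    import numpy as np

    def prox_l1(v, lam=1.0):
        # Soft thresholding: proximity operator of lam * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def prox_l1_conjugate(v):
        # Projection onto {u : ||u||_inf <= 1}, the prox of the indicator
        # of that ball (the conjugate of ||.||_1).
        return np.clip(v, -1.0, 1.0)

    v = np.random.default_rng(1).standard_normal(6)
    assert np.allclose(prox_l1(v) + prox_l1_conjugate(v), v)

This identity is what lets a fixed-point equation be posed entirely in terms of the dual proximity operators even when only the primal functions are given.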
In part 2, we give a comprehensive study of the computation of the proximity operator of the ℓp-norm with 0 ≤ p < 1. Nonconvexity and nonsmoothness have been recognized as important features of many optimization problems in image and signal processing. The nonconvex, nonsmooth ℓp-regularization has been recognized as an efficient tool to identify the sparsity of the wavelet coefficients of an image or signal under investigation. To solve an ℓp-regularized optimization problem, the proximity operator of the ℓp-norm needs to be computed in an accurate and computationally efficient way. We first study the general properties of the proximity operator of the ℓp-norm. Then, we derive the explicit forms of the proximity operators of the ℓp-norm for p ∈ {0, 1/2, 2/3, 1}. Using these explicit forms and the properties of the proximity operator of the ℓp-norm, we develop an efficient algorithm to compute the proximity operator of the ℓp-norm for any p between 0 and 1.
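As a hedged sketch of what such a computation involves (our own illustration, not the dissertation's algorithm): since the prox is separable, it suffices to solve the scalar problem, where for 0 < p < 1 the minimizer is either 0 or the larger root of the stationarity equation, found here by bisection.

    import numpy as np

    def prox_lp_scalar(v, lam, p, tol=1e-12):
        # argmin_u 0.5*(u - v)**2 + lam*|u|**p for 0 < p < 1.
        s, a = np.sign(v), abs(v)
        # Positive stationary points satisfy g(u) := u + lam*p*u**(p-1) = a;
        # g decreases then increases, with its minimum at u_min below.
        u_min = (lam * p * (1 - p)) ** (1.0 / (2.0 - p))
        if u_min + lam * p * u_min ** (p - 1) > a:
            return 0.0                        # no positive stationary point
        lo, hi = u_min, a                     # larger root of g(u) = a lies here
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if mid + lam * p * mid ** (p - 1) > a:
                hi = mid
            else:
                lo = mid
        u = 0.5 * (lo + hi)
        # Keep the root only if it beats u = 0 in objective value.
        return s * u if 0.5 * (u - a) ** 2 + lam * u ** p < 0.5 * a ** 2 else 0.0

At the endpoints the operator reduces to the familiar cases: hard thresholding for p = 0 and soft thresholding for p = 1.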
In part 3, the usefulness of the research results developed in the previous two parts is demonstrated in two types of applications, namely image restoration and compressed sensing. A comparison with the results from some existing algorithms is also presented. For image restoration, the results developed in part 1 are applied to solve the ℓ2-TV and ℓ1-TV models. The resulting restored images have higher peak signal-to-noise ratios, and the developed algorithms require less CPU time, than state-of-the-art algorithms. In addition, for compressed sensing applications, our algorithm has smaller ℓ2- and ℓ∞-errors and shorter computation times than state-of-the-art algorithms. For compressed sensing with ℓp-regularization, our numerical simulations show smaller ℓ2- and ℓ∞-errors than those from ℓ0-regularization and ℓ1-regularization. In summary, our numerical simulations indicate that not only can our developed algorithms be applied to a wide variety of important optimization problems, but they are also more accurate and computationally efficient than state-of-the-art algorithms.