Sparse recovery in bounded Riesz systems with applications to numerical methods for PDEs
We study sparse recovery with structured random measurement matrices having
independent, identically distributed, and uniformly bounded rows and with a
nontrivial covariance structure. This class of matrices arises from random
sampling of bounded Riesz systems and generalizes random partial Fourier
matrices. Our main result improves the currently available results for the null
space and restricted isometry properties of such random matrices. The main
novelty of our analysis is a new upper bound for the expectation of the
supremum of a Bernoulli process associated with a restricted isometry constant.
We apply our result to prove new performance guarantees for the CORSING method,
a recently introduced numerical approximation technique for partial
differential equations (PDEs) based on compressive sensing.
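
For readers unfamiliar with the setup, here is a minimal sketch (not the paper's CORSING method) of sparse recovery from a random partial Fourier matrix, the model case that bounded Riesz systems generalize. The dimensions, sparsity level, and the choice of orthogonal matching pursuit as the decoder are assumptions made purely for illustration.

```python
# Illustrative sketch: sparse recovery from a random partial Fourier matrix.
import numpy as np

rng = np.random.default_rng(0)
N, m, s = 256, 64, 5                       # ambient dim, measurements, sparsity

# Random partial Fourier matrix: m rows of the DFT drawn uniformly at random,
# scaled so that the columns of A have unit Euclidean norm.
rows = rng.choice(N, size=m, replace=False)
dft = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
A = dft[rows, :] / np.sqrt(m)

# s-sparse ground truth and noiseless measurements y = A x.
x = np.zeros(N, dtype=complex)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x

# Orthogonal matching pursuit: greedily add the column most correlated with
# the residual, then refit by least squares on the active support.
support, residual = [], y.copy()
for _ in range(s):
    support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(N, dtype=complex)
x_hat[support] = coef
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

The restricted isometry property studied in the paper is exactly what guarantees that decoders of this kind succeed with few measurements for such matrices.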
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
Suppose we are given a vector $f \in \mathbb{R}^N$. How many linear measurements do
we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$
in the Euclidean ($\ell_2$) metric? Or more exactly, suppose we are
interested in a class $\mathcal{F}$ of such objects--discrete digital signals,
images, etc.; how many linear measurements do we need to recover objects from
this class to within accuracy $\epsilon$? This paper shows that if the objects
of interest are sparse or compressible in the sense that the reordered entries
of a signal $f \in \mathcal{F}$ decay like a power-law (or if the coefficient
sequence of $f$ in a fixed basis decays like a power-law), then it is possible
to reconstruct $f$ to within very high accuracy from a small number of random
measurements.
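
For orientation, the main estimate has roughly the following shape (paraphrased, with constants, failure probabilities, and exact conditions omitted): here $R$ and $p$ describe the power-law decay, $K$ is the number of random measurements, and $f^\sharp$ is the minimum-$\ell_1$ reconstruction.

```latex
% Paraphrased shape of the main result: if the reordered entries of f obey
% |f|_(k) <= R k^{-1/p} for some 0 < p < 1, then with overwhelming probability
\[
  \| f - f^\sharp \|_{\ell_2}
    \;\le\; C_p \, R \left( \frac{K}{\log N} \right)^{-r},
  \qquad r = \frac{1}{p} - \frac{1}{2}.
\]
```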
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell_2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
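
As a concrete instance of item (iii), here is a minimal sketch of the forward-backward (proximal gradient) scheme applied to the sparsity prior, i.e., ISTA for the model problem $\min_x \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda \|x\|_1$. The problem sizes, step-size rule, and value of $\lambda$ are assumptions made for the example, not taken from the chapter.

```python
# Illustrative sketch of forward-backward splitting (ISTA) for the l1 prior.
import numpy as np

rng = np.random.default_rng(1)
m, n, lam = 50, 200, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)

step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))                         # forward (gradient) step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # backward step: prox of step*lam*||.||_1

print("recovered support:", np.flatnonzero(np.abs(x) > 0.05))
print("true support:     ", np.flatnonzero(x_true))
```

The scheme scales well because each iteration needs only matrix-vector products and a cheap, separable proximal map, which is what makes it attractive for the large-scale regularized problems the chapter discusses.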