Exact Recovery Conditions for Sparse Representations with Partial Support Information
We address the exact recovery of a k-sparse vector in the noiseless setting
when some partial information on the support is available. This partial
information takes the form of either a subset of the true support or an
approximate subset including wrong atoms as well. We derive a new sufficient
and worst-case necessary (in some sense) condition for the success of some
procedures based on lp-relaxation, Orthogonal Matching Pursuit (OMP) and
Orthogonal Least Squares (OLS). Our result is based on the coherence "mu" of
the dictionary and relaxes the well-known condition mu<1/(2k-1) ensuring the
recovery of any k-sparse vector in the non-informed setup. It reads
mu<1/(2k-g+b-1) when the informed support is composed of g good atoms and b
wrong atoms. We emphasize that our condition is complementary to some restricted-isometry-based conditions by showing that neither implies the other.
Because this mutual coherence condition is common to all procedures, we carry
out a finer analysis based on the Null Space Property (NSP) and the Exact
Recovery Condition (ERC). Connections are established regarding the
characterization of lp-relaxation procedures and OMP in the informed setup.
First, we emphasize that the truncated NSP enjoys an ordering property when p
is decreased. Second, the partial ERC for OMP (ERC-OMP) implies the truncated NSP for the informed l1 problem and, in turn, the truncated NSP for p<1.
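As a rough illustration of how the informed bound compares to the uniform one, here is a minimal Python sketch (the dictionary, sizes, and function names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct normalized columns."""
    Dn = D / np.linalg.norm(D, axis=0)   # unit-norm atoms
    G = np.abs(Dn.T @ Dn)                # absolute Gram matrix
    np.fill_diagonal(G, 0.0)             # ignore self-correlations
    return G.max()

def informed_bound(k, g, b):
    """Threshold mu < 1/(2k - g + b - 1) from the abstract above."""
    return 1.0 / (2 * k - g + b - 1)

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))       # illustrative random dictionary
k = 5
print("mu                      =", mutual_coherence(D))
print("uniform bound 1/(2k-1)  =", 1.0 / (2 * k - 1))
print("informed bound, g=3 b=1 =", informed_bound(k, g=3, b=1))
```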
Coherence-based Partial Exact Recovery Condition for OMP/OLS
We address the exact recovery of the support of a k-sparse vector with
Orthogonal Matching Pursuit (OMP) and Orthogonal Least Squares (OLS) in a
noiseless setting. We consider the scenario where OMP/OLS have selected good
atoms during the first l iterations (l<k) and derive a new sufficient and
worst-case necessary condition for their success in k steps. Our result is
based on the coherence mu of the dictionary and relaxes Tropp's well-known condition mu<1/(2k-1) to the case where OMP/OLS have partial knowledge of the support.
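A minimal sketch of OMP warm-started with a partial support, matching the scenario above (generic textbook OMP with least-squares updates; the function name and setup are assumptions, not the authors' implementation):

```python
import numpy as np

def omp_partial(D, y, k, init_support=()):
    """OMP run until k atoms are selected, warm-started from a partial support."""
    support = list(init_support)
    coef = np.zeros(0)
    residual = y.copy()
    if support:                                    # project y off the known atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    while len(support) < k:
        corr = np.abs(D.T @ residual)
        corr[support] = -np.inf                    # never reselect an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef
```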
Relaxed Recovery Conditions for OMP/OLS by Exploiting both Coherence and Decay
We propose extended coherence-based conditions for exact sparse support
recovery using orthogonal matching pursuit (OMP) and orthogonal least squares
(OLS). Unlike standard uniform guarantees, we embed some information about the
decay of the sparse vector coefficients in our conditions. As a result, the standard condition mu<1/(2k-1) (where mu denotes the mutual coherence and k the sparsity level) can be weakened as soon as the non-zero coefficients obey some decay, both in the noiseless and the bounded-noise scenarios. Furthermore, the resulting condition becomes substantially weaker for strongly decaying sparse signals. Finally, in the noiseless setting, we prove that the proposed conditions are the tightest achievable guarantees based on mutual coherence.
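For intuition, a small self-contained experiment (an assumed generic setup using scikit-learn's OMP, not the authors' code) showing exact support recovery of a strongly decaying sparse vector:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 128, 512, 8
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

support = rng.choice(m, size=k, replace=False)
x = np.zeros(m)
x[support] = 0.5 ** np.arange(k)            # geometrically decaying coefficients
y = D @ x                                   # noiseless observation

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(D, y)
print("exact support recovery:", set(np.flatnonzero(omp.coef_)) == set(support))
```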
Inferring Rankings Using Constrained Sensing
We consider the problem of recovering a function over the space of
permutations (or, the symmetric group) over n elements from given partial
information; the partial information we consider is related to the group
theoretic Fourier Transform of the function. This problem naturally arises in
several settings such as ranked elections, multi-object tracking, ranking
systems, and recommendation systems. Inspired by the work of Donoho and Stark
in the context of discrete-time functions, we focus on non-negative functions
with a sparse support (support size much smaller than the domain size). Our recovery method is based on finding the sparsest solution (through l0 optimization) that is
consistent with the available information. As the main result, we derive
sufficient conditions for functions that can be recovered exactly from partial information through l0 optimization. Under a natural random model for the generation of functions, we quantify the recoverability conditions by deriving bounds on the sparsity (support size) for which the function satisfies the sufficient conditions with a high probability as n tends to infinity. l0 optimization is computationally hard. Therefore, the popular compressive sensing literature considers solving the convex relaxation, l1 optimization, to find the sparsest solution. However, we show that l1 optimization fails to recover a function (even with constant sparsity) generated using the random model with a high probability as n tends to infinity. In
order to overcome this problem, we propose a novel iterative algorithm for the
recovery of functions that satisfy the sufficient conditions. Finally, using an
Information Theoretic framework, we study necessary conditions for exact
recovery to be possible.
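Since the functions considered are non-negative, the l1 relaxation reduces to a linear program; a minimal sketch with a generic sensing matrix standing in for the partial Fourier information (an illustrative assumption, not the paper's construction):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover_nonneg(A, b):
    """min sum(x) s.t. A x = b, x >= 0  (the l1 norm of a non-negative x)."""
    res = linprog(np.ones(A.shape[1]), A_eq=A, b_eq=b, bounds=(0, None))
    return res.x

# toy example: sparse non-negative x, generic partial linear measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.random(5) + 0.5
x_hat = l1_recover_nonneg(A, A @ x_true)
print("max abs recovery error:", np.abs(x_hat - x_true).max())
```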
A* Orthogonal Matching Pursuit: Best-First Search for Compressed Sensing Signal Recovery
Compressed sensing is a developing field aiming at reconstruction of sparse
signals acquired in reduced dimensions, which make the recovery process
under-determined. The required solution is the one with minimum norm
due to sparsity, however it is not practical to solve the minimization
problem. Commonly used techniques include minimization, such as Basis
Pursuit (BP) and greedy pursuit algorithms such as Orthogonal Matching Pursuit
(OMP) and Subspace Pursuit (SP). This manuscript proposes a novel semi-greedy
recovery approach, namely A* Orthogonal Matching Pursuit (A*OMP). A*OMP
performs A* search to look for the sparsest solution on a tree whose paths grow
as in OMP. Paths on the tree
are evaluated according to a cost function, which should compensate for
different path lengths. For this purpose, three different auxiliary structures
are defined, including novel dynamic ones. A*OMP also incorporates pruning
techniques which enable practical applications of the algorithm. Moreover, the
adjustable search parameters provide means for a complexity-accuracy trade-off.
We demonstrate the reconstruction ability of the proposed scheme on both
synthetically generated data and images using Gaussian and Bernoulli
observation matrices, where A*OMP yields less reconstruction error and higher
exact recovery frequency than BP, OMP and SP. Results also indicate that novel
dynamic cost functions provide improved results as compared to a conventional
choice.
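A heavily simplified sketch of the best-first idea: a priority queue over partial supports, OMP-style branching, and a naive additive path-length compensation (all of which are assumptions standing in for the paper's auxiliary cost structures and pruning rules):

```python
import heapq
import numpy as np

def astar_omp(D, y, k, branch=3, budget=200):
    """Best-first search over partial supports. Each expansion appends one of
    the 'branch' most correlated atoms, as in OMP; the cost is the residual
    norm minus a naive per-atom credit for unfinished paths."""
    step = np.linalg.norm(y) / k                  # crude path-length compensation
    heap = [(0.0, ())]                            # (cost, support tuple)
    while heap and budget > 0:
        cost, support = heapq.heappop(heap)
        budget -= 1
        if len(support) == k:                     # cheapest complete path wins
            return list(support)
        cols = list(support)
        residual = y.copy()
        if cols:
            c, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
            residual = y - D[:, cols] @ c
        corr = np.abs(D.T @ residual)
        corr[cols] = -np.inf                      # never reselect an atom
        for atom in np.argsort(corr)[-branch:]:   # expand the best branches
            s = tuple(sorted(support + (int(atom),)))
            c, *_ = np.linalg.lstsq(D[:, list(s)], y, rcond=None)
            r = float(np.linalg.norm(y - D[:, list(s)] @ c))
            heapq.heappush(heap, (r - (k - len(s)) * step, s))
    return None                                   # budget exhausted (pruning)
```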
Identification of Matrices Having a Sparse Representation
We consider the problem of recovering a matrix from its action on a known vector in the setting where the matrix can be represented efficiently in a known matrix dictionary. Connections with sparse signal recovery allow the use of efficient reconstruction techniques such as Basis Pursuit (BP). Of particular interest is the dictionary of time-frequency shift matrices and its role in channel estimation and identification in communications engineering. We present recovery results for BP with the time-frequency shift dictionary and various dictionaries of random matrices.
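To make the time-frequency shift dictionary concrete, here is a sketch of the standard reduction from matrix identification to sparse recovery: the action of the unknown matrix on a known probe vector g is linear in the (sparse) dictionary coefficients (sizes and the probe are illustrative assumptions):

```python
import numpy as np

def tf_shift_dictionary(g):
    """Columns are pi(tau, nu) g: cyclic time shift by tau, modulation by nu."""
    n = len(g)
    atoms = []
    for tau in range(n):
        shifted = np.roll(g, tau)
        for nu in range(n):
            mod = np.exp(2j * np.pi * nu * np.arange(n) / n)
            atoms.append(mod * shifted)
    return np.column_stack(atoms)                 # n x n^2 sensing matrix

rng = np.random.default_rng(3)
n = 16
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # known probe vector
Phi = tf_shift_dictionary(g)

c = np.zeros(n * n, dtype=complex)                # channel: 3-sparse in the dictionary
c[rng.choice(n * n, size=3, replace=False)] = rng.standard_normal(3)
y = Phi @ c                                       # observed action of the matrix on g
# Basis Pursuit (min ||c||_1 subject to Phi c = y) then recovers c from y.
```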
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low rank (as a natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of l2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
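As a concrete instance, the forward-backward scheme for an l1 prior with a least-squares data term is the classical ISTA iteration; a minimal sketch (generic textbook version, with the step size set from the gradient's Lipschitz constant):

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L             # forward (explicit gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # backward (prox) step
    return x
```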