Recovery of Sparse Signals Using Multiple Orthogonal Least Squares
We study the problem of recovering sparse signals from compressed linear
measurements. This problem, often referred to as sparse recovery or sparse
reconstruction, has generated a great deal of interest in recent years. To
recover the sparse signals, we propose a new method called multiple orthogonal
least squares (MOLS), which extends the well-known orthogonal least squares
(OLS) algorithm by allowing multiple indices to be chosen per iteration.
Owing to the inclusion of multiple support indices in each selection, the MOLS
algorithm converges in far fewer iterations and improves computational
efficiency over the conventional OLS algorithm. Theoretical analysis shows that
MOLS, which selects L indices per iteration, performs exact recovery of all
K-sparse signals within K iterations if the measurement matrix satisfies the
restricted isometry property (RIP) with a sufficiently small isometry
constant. The recovery performance of MOLS in the noisy scenario is also
studied. It is shown that stable recovery of sparse signals can be achieved
with the MOLS algorithm when the signal-to-noise ratio (SNR) scales linearly
with the sparsity level of input signals.
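A minimal sketch of the selection-and-refit loop described above, using plain least squares for the per-candidate OLS criterion (variable names and sizes are illustrative, not the paper's code):

```python
import numpy as np

def mols(A, y, K, L=2, tol=1e-10):
    """Illustrative sketch of Multiple Orthogonal Least Squares (MOLS).

    Each iteration scores every unselected column by the OLS criterion
    (residual reduction after orthogonalizing against the current span),
    adds the L best columns, and re-fits by least squares.
    """
    m, n = A.shape
    support, r = [], y.astype(float).copy()
    for _ in range(K):                      # at most K iterations
        if np.linalg.norm(r) <= tol:
            break
        scores = np.zeros(n)
        for j in range(n):
            if j in support:
                continue
            aj = A[:, j].astype(float)
            if support:
                As = A[:, support]
                # component of a_j orthogonal to the span of chosen columns
                aj = aj - As @ np.linalg.lstsq(As, aj, rcond=None)[0]
            nrm2 = aj @ aj
            if nrm2 > 1e-12:
                scores[j] = (aj @ r) ** 2 / nrm2
        best = np.argsort(scores)[::-1][:L]
        support.extend(int(j) for j in best if scores[j] > 0)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(n)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x[support] = coef
    return x, sorted(support)
```

With L = 1 this reduces to conventional OLS; larger L trades a slightly larger final support for fewer iterations.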
Subspace Methods for Joint Sparse Recovery
We propose robust and efficient algorithms for the joint sparse recovery
problem in compressed sensing, which simultaneously recover the supports of
jointly sparse signals from their multiple measurement vectors obtained through
a common sensing matrix. In a favorable situation, the unknown matrix, which
consists of the jointly sparse signals, has linearly independent nonzero rows.
In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally
proposed by Schmidt for the direction-of-arrival problem in sensor array
processing and later proposed and analyzed for joint sparse recovery by Feng
and Bresler, provides a guarantee with the minimum number of measurements. We
focus instead on the unfavorable but practically significant case of
rank deficiency or ill-conditioning. This situation arises with a limited
number of measurement vectors, or with highly correlated signal components. In
this case, MUSIC fails, and in practice none of the existing methods can consistently
approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC),
which improves on MUSIC so that the support is reliably recovered under such
unfavorable conditions. Combined with subspace-based greedy algorithms also
proposed and analyzed in this paper, SA-MUSIC provides a computationally
efficient algorithm with a performance guarantee. The performance guarantees
are given in terms of a version of the restricted isometry property. In particular,
we also present a non-asymptotic perturbation analysis of the signal subspace
estimation that has been missing in previous studies of MUSIC.
Comment: submitted to IEEE Transactions on Information Theory, revised version.
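In the favorable full-row-rank case, the MUSIC step is simple to sketch: estimate the signal subspace from the measurements and keep the columns of the sensing matrix that lie in it. A minimal illustration (shapes and names are assumptions, not the paper's code):

```python
import numpy as np

def music_support(A, Y, k):
    """MUSIC for joint sparse recovery, favorable case: the k nonzero
    rows of the unknown X are linearly independent, so range(Y) equals
    the span of the k active columns of A, where Y = A @ X."""
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    U = U[:, :k]                 # orthonormal basis of the signal subspace
    # fraction of each column's energy lying inside the subspace (1.0 = inside)
    score = np.linalg.norm(U.T @ A, axis=0) / np.linalg.norm(A, axis=0)
    return np.sort(np.argsort(score)[::-1][:k])
```

When the subspace estimate is rank-deficient, this criterion breaks down; SA-MUSIC augments the estimated subspace using a partial support estimate before applying the same test.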
Topics in Compressed Sensing
Compressed sensing has a wide range of applications that include error correction, imaging, radar and many more. Given a sparse signal in a high-dimensional space, one wishes to reconstruct that signal accurately and efficiently from a number of linear measurements much smaller than its actual dimension. Although in theory it is clear that this is possible, the difficulty lies in constructing algorithms that perform the recovery efficiently, as well as in determining which kinds of linear measurements allow for the reconstruction. There have been two distinct major approaches to sparse recovery, each presenting different benefits and shortcomings. The first, L1-minimization methods such as Basis Pursuit, use a linear optimization problem to recover the signal. This method provides strong guarantees and stability, but relies on Linear Programming, whose methods do not yet have strongly polynomial runtime bounds. The second approach uses greedy methods that compute the support of the signal iteratively. These methods are usually much faster than Basis Pursuit, but until recently had not been able to provide the same guarantees. This gap between the two approaches was bridged when we developed and analyzed the greedy algorithm Regularized Orthogonal Matching Pursuit (ROMP). ROMP provides guarantees similar to those of Basis Pursuit as well as the speed of a greedy algorithm. Our more recent algorithm, Compressive Sampling Matching Pursuit (CoSaMP), improves upon these guarantees and is optimal in every important aspect.
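The L1-minimization route mentioned above can be written as an ordinary linear program by splitting the signal into positive and negative parts. A small sketch using SciPy's LP solver (problem sizes and names are arbitrary choices for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Basis Pursuit: min ||x||_1  s.t.  Ax = b, as a linear program.

    Write x = u - v with u, v >= 0; then ||x||_1 = sum(u) + sum(v)
    and the equality constraint becomes [A, -A] [u; v] = b.
    """
    n = A.shape[1]
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]
```

The greedy alternatives (ROMP, CoSaMP) avoid this LP entirely by identifying the support iteratively and solving small least-squares problems.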
Submodular meets Spectral: Greedy Algorithms for Subset Selection, Sparse Approximation and Dictionary Selection
We study the problem of selecting a subset of k random variables from a large
set, in order to obtain the best linear prediction of another variable of
interest. This problem can be viewed in the context of both feature selection
and sparse approximation. We analyze the performance of widely used greedy
heuristics, using insights from the maximization of submodular functions and
spectral analysis. We introduce the submodularity ratio as a key quantity to
help understand why greedy algorithms perform well even when the variables are
highly correlated. Using our techniques, we obtain the strongest known
approximation guarantees for this problem, both in terms of the submodularity
ratio and the smallest k-sparse eigenvalue of the covariance matrix. We further
demonstrate the wide applicability of our techniques by analyzing greedy
algorithms for the dictionary selection problem, and significantly improve the
previously known guarantees. Our theoretical analysis is complemented by
experiments on real-world and synthetic data sets; the experiments show that
the submodularity ratio is a stronger predictor of the performance of greedy
algorithms than other spectral parameters.
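The greedy heuristic analyzed here is easy to state concretely for linear prediction: repeatedly add the variable giving the largest gain in R^2. A small sketch (array shapes and names are assumptions for illustration):

```python
import numpy as np

def greedy_subset_selection(X, z, k):
    """Forward greedy selection of k columns of X for predicting z:
    at each step, add the column that most increases the squared
    multiple correlation R^2 of the least-squares fit."""
    n, d = X.shape
    S, best_r2 = [], 0.0
    for _ in range(k):
        best_j, best_r2 = None, -np.inf
        for j in range(d):
            if j in S:
                continue
            Xs = X[:, S + [j]]
            coef, *_ = np.linalg.lstsq(Xs, z, rcond=None)
            resid = z - Xs @ coef
            r2 = 1.0 - (resid @ resid) / (z @ z)
            if r2 > best_r2:
                best_j, best_r2 = j, r2
        S.append(best_j)
    return S, best_r2
```

The submodularity ratio quantifies how far the marginal R^2 gains of this procedure can fall below those of a truly submodular objective, which is what drives the approximation guarantees.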
Relaxed Recovery Conditions for OMP/OLS by Exploiting both Coherence and Decay
We propose extended coherence-based conditions for exact sparse support
recovery using orthogonal matching pursuit (OMP) and orthogonal least squares
(OLS). Unlike standard uniform guarantees, we embed some information about the
decay of the sparse vector coefficients in our conditions. As a result, the
standard condition mu < 1/(2k-1) (where mu denotes the mutual coherence and
k the sparsity level) can be weakened as soon as the non-zero coefficients
obey some decay, both in the noiseless and the bounded-noise scenarios.
Furthermore, the resulting condition becomes markedly weaker for strongly
decaying sparse signals. Finally, in the noiseless setting, we prove that the
proposed conditions are the tightest achievable guarantees based on mutual
coherence.
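For reference, the classical uniform guarantee being relaxed here is the coherence bound mu < 1/(2k-1) for exact recovery of k-sparse signals by OMP, and computing mu is a one-liner:

```python
import numpy as np

def mutual_coherence(A):
    """Mutual coherence mu(A): largest absolute inner product
    between distinct l2-normalized columns of A."""
    G = A / np.linalg.norm(A, axis=0)   # normalize columns
    G = np.abs(G.T @ G)                 # Gram matrix of unit columns
    np.fill_diagonal(G, 0.0)            # ignore self inner products
    return G.max()

def classical_omp_guarantee(A, k):
    """Classical uniform condition mu(A) < 1/(2k-1) for exact
    k-sparse recovery by OMP; the abstract above weakens it when
    the nonzero coefficients decay."""
    return mutual_coherence(A) < 1.0 / (2 * k - 1)
```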
Optimal approximate matrix product in terms of stable rank
We prove, using the subspace embedding guarantee in a black-box way, that one
can achieve the spectral norm guarantee for approximate matrix multiplication
with a dimensionality-reducing map whose number of rows depends only on the
accuracy parameter and on the maximum stable rank, i.e. the squared ratio of
Frobenius and operator norms, of the two matrices being multiplied. This is a
quantitative improvement over previous work of [MZ11, KVZ14], and is also
optimal for any oblivious dimensionality-reducing map. Furthermore, due to the
black box reliance on the subspace embedding property in our proofs, our
theorem can be applied to a much more general class of sketching matrices than
what was known before, in addition to achieving better bounds. For example, one
can apply our theorem to efficient subspace embeddings such as the Subsampled
Randomized Hadamard Transform or sparse subspace embeddings, or even with
subspace embedding constructions that may be developed in the future.
Our main theorem, via connections with spectral error matrix multiplication
shown in prior work, implies quantitative improvements for approximate least
squares regression and low rank approximation. Our main result has also already
been applied to improve dimensionality reduction guarantees for k-means
clustering [CEMMP14], and implies new results for nonparametric regression
[YPW15].
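A minimal sketch of the black-box recipe: draw a dimensionality-reducing map S (here a dense Gaussian subspace embedding, one of the many sketches the theorem covers) and return (SA)^T (SB). The row count m is a user-chosen parameter in this illustration, not the paper's exact bound:

```python
import numpy as np

def stable_rank(M):
    """Squared ratio of Frobenius to operator norm, as defined above."""
    return (np.linalg.norm(M, "fro") / np.linalg.norm(M, 2)) ** 2

def sketched_product(A, B, m, seed=0):
    """Approximate A^T B by (S A)^T (S B) for a random m-row
    Gaussian sketch S, scaled so that E[S^T S] = I."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    return (S @ A).T @ (S @ B)
```

Because the proof only uses the subspace embedding property, S could equally be a Subsampled Randomized Hadamard Transform or a sparse embedding; the Gaussian choice just keeps the sketch short.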
We also separately point out that the proof of the "BSS" deterministic
row-sampling result of [BSS12] can be modified to show that for any matrices
of bounded stable rank, one can achieve the spectral norm guarantee for
approximate matrix multiplication by deterministically sampling a subset of
rows that can be found in polynomial
time. The original result of [BSS12] was for rank instead of stable rank. Our
observation leads to a stronger version of a main theorem of [KMST10].
Comment: v3: minor edits; v2: fixed one step in the proof of Theorem 9, which
was wrong by a constant factor (see the new Lemma 5 and its use); final
theorem unaffected.