Subspace Methods for Joint Sparse Recovery
We propose robust and efficient algorithms for the joint sparse recovery
problem in compressed sensing, which simultaneously recover the supports of
jointly sparse signals from their multiple measurement vectors obtained through
a common sensing matrix. In a favorable situation, the unknown matrix, which
consists of the jointly sparse signals, has linearly independent nonzero rows.
In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally
proposed by Schmidt for the direction-of-arrival problem in sensor array
processing and later adapted and analyzed for joint sparse recovery by Feng
and Bresler, provides a guarantee with the minimum number of measurements. We
focus instead on the unfavorable but practically significant case of
rank defect or ill-conditioning. This situation arises with a limited number of
measurement vectors, or with highly correlated signal components. In this case,
MUSIC fails, and in practice none of the existing methods can consistently
approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC),
which improves on MUSIC so that the support is reliably recovered under such
unfavorable conditions. Combined with subspace-based greedy algorithms also
proposed and analyzed in this paper, SA-MUSIC provides a computationally
efficient algorithm with a performance guarantee. The performance guarantees
are given in terms of a version of the restricted isometry property. In particular,
we also present a non-asymptotic perturbation analysis of the signal subspace
estimation that has been missing in previous studies of MUSIC.
Comment: submitted to IEEE Transactions on Information Theory, revised version
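A minimal NumPy sketch of the MUSIC support test in the favorable (full row rank) case may help fix ideas; it is illustrative only, not the authors' SA-MUSIC implementation, and the function name, toy dimensions, and random data are assumptions. SA-MUSIC differs in that a rank-deficient subspace estimate is first augmented with columns of the sensing matrix found by a greedy partial-support step before the same test is applied.

```python
import numpy as np

def music_support(A, Y, s, r=None):
    """MUSIC-style support estimate for the MMV problem Y = A @ X with row-sparse X.

    Favorable case sketched above: when X has s linearly independent nonzero rows,
    the top-s left singular vectors of Y span range(A[:, S]), so the support
    columns of A have (near-)zero residual outside that subspace.
    """
    r = s if r is None else r
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :r]   # signal-subspace estimate
    An = A / np.linalg.norm(A, axis=0, keepdims=True)     # normalize columns of A
    resid = np.linalg.norm(An - U @ (U.T @ An), axis=0)   # distance to the subspace
    return np.sort(np.argsort(resid)[:s])                 # s best-aligned columns

# Toy example (dimensions illustrative): 20 x 60 sensing matrix, support size 4,
# 5 measurement vectors, full row rank -- the regime where plain MUSIC succeeds.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))
S = np.array([3, 17, 28, 55])
X = np.zeros((60, 5))
X[S] = rng.standard_normal((4, 5))
print(music_support(A, A @ X, s=4))   # expected: [ 3 17 28 55] (the true support)
```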
Greedy-Like Algorithms for the Cosparse Analysis Model
The cosparse analysis model has been introduced recently as an interesting
alternative to the standard sparse synthesis approach. A prominent question
brought up by this new construction is the analysis pursuit problem -- the need
to find a signal belonging to this model, given a set of corrupted measurements
of it. Several pursuit methods have already been proposed based on $\ell_1$
relaxation and a greedy approach. In this work we pursue this question further,
and propose a new family of pursuit algorithms for the cosparse analysis model,
mimicking the greedy-like methods -- compressive sampling matching pursuit
(CoSaMP), subspace pursuit (SP), iterative hard thresholding (IHT) and hard
thresholding pursuit (HTP). Assuming the availability of a near optimal
projection scheme that finds the nearest cosparse subspace to any vector, we
provide performance guarantees for these algorithms. Our theoretical study
relies on a restricted isometry property adapted to the context of the cosparse
analysis model. We explore empirically the performance of these algorithms by
adopting a plain thresholding projection, demonstrating their good performance.
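To make the plain thresholding projection and the greedy-like iterations concrete, here is an illustrative NumPy sketch (not the paper's implementation) of an IHT-style update for the cosparse analysis model: the cosupport is estimated as the rows of the analysis operator least correlated with the current iterate, and the pseudoinverse step is the orthogonal projection onto the null space of those rows. The operator, dimensions, step size, and iteration count in the toy run are assumptions.

```python
import numpy as np

def cosparse_project(x, Omega, ell):
    """Plain thresholding cosparse projection: take the ell rows of Omega with
    smallest |Omega @ x| as the estimated cosupport, then orthogonally project
    x onto the null space of those rows."""
    cosupport = np.argsort(np.abs(Omega @ x))[:ell]
    O = Omega[cosupport]
    return x - np.linalg.pinv(O) @ (O @ x), cosupport

def analysis_iht(M, y, Omega, ell, step=None, n_iter=500):
    """IHT-style iteration adapted to the cosparse analysis model (sketch only):
    a gradient step on ||y - M x||^2 followed by the cosparse projection."""
    step = 1.0 / np.linalg.norm(M, 2) ** 2 if step is None else step
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        x = x + step * M.T @ (y - M @ x)          # data-fidelity gradient step
        x, _ = cosparse_project(x, Omega, ell)    # enforce the cosparse model
    return x

# Toy run: a signal that is exactly cosparse w.r.t. a random analysis operator.
rng = np.random.default_rng(1)
d, p, m, ell = 50, 70, 35, 45
Omega = rng.standard_normal((p, d))
Lam = rng.choice(p, ell, replace=False)
x0 = rng.standard_normal(d)
x0 -= np.linalg.pinv(Omega[Lam]) @ (Omega[Lam] @ x0)   # now Omega[Lam] @ x0 is ~0
M = rng.standard_normal((m, d)) / np.sqrt(m)
x_hat = analysis_iht(M, M @ x0, Omega, ell)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))  # relative reconstruction error
```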
On the Power of Preconditioning in Sparse Linear Regression
Sparse linear regression is a fundamental problem in high-dimensional
statistics, but strikingly little is known about how to efficiently solve it
without restrictive conditions on the design matrix. We consider the
(correlated) random design setting, where the covariates are independently
drawn from a multivariate Gaussian $N(0, \Sigma)$ with $\Sigma \in \mathbb{R}^{n \times n}$,
and seek estimators $\hat{w}$ minimizing $(\hat{w} - w^*)^\top \Sigma (\hat{w} - w^*)$,
where $w^*$ is the $k$-sparse ground truth. Information-theoretically, one can
achieve strong error bounds with $O(k \log n)$ samples for arbitrary $\Sigma$
and $w^*$; however, no efficient algorithms are known to match these guarantees
even with $o(n)$ samples, without further assumptions on $\Sigma$ or $w^*$. As
far as hardness, computational lower bounds are only known with worst-case
design matrices. Random-design instances are known which are hard for the
Lasso, but these instances can generally be solved by the Lasso after a simple
change of basis (i.e., preconditioning).
In this work, we give upper and lower bounds clarifying the power of
preconditioning in sparse linear regression. First, we show that the
preconditioned Lasso can solve a large class of sparse linear regression
problems nearly optimally: it succeeds whenever the dependency structure of the
covariates, in the sense of the Markov property, has low treewidth -- even if
$\Sigma$ is highly ill-conditioned. Second, we construct (for the first time)
random-design instances which are provably hard for an optimally preconditioned
Lasso. In fact, we complete our treewidth classification by proving that for
any treewidth-$t$ graph, there exists a Gaussian Markov Random Field on this
graph such that the preconditioned Lasso, with any choice of preconditioner,
requires a number of samples growing with the treewidth $t$ to recover sparse signals when
covariates are drawn from this model.
Comment: 73 pages, 5 figures
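Operationally, preconditioning the Lasso means running it after an invertible change of basis of the design and mapping the estimate back to the original coordinates; the question above is which covariance structures admit some basis in which this succeeds with few samples. The NumPy/scikit-learn sketch below shows that mechanics under one common formalization; the identity preconditioner, regularization level, and toy data are illustrative assumptions, not the paper's low-treewidth construction.

```python
import numpy as np
from sklearn.linear_model import Lasso

def preconditioned_lasso(X, y, S, alpha=0.1):
    """Lasso after a change of basis ("preconditioning"): fit the Lasso on the
    re-mixed design X @ S and map the solution back to the original coordinates.
    One standard formalization, not the paper's construction; with S = I it
    reduces to the plain Lasso."""
    z_hat = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000).fit(X @ S, y).coef_
    return S @ z_hat                                # estimate of w* in the original basis

def prediction_error(w_hat, w_star, Sigma):
    """The risk discussed above: (w_hat - w*)^T Sigma (w_hat - w*)."""
    d = w_hat - w_star
    return float(d @ Sigma @ d)

# Toy usage (all values illustrative): well-conditioned Sigma, identity preconditioner.
rng = np.random.default_rng(0)
n_samples, n, k = 200, 50, 5
Sigma = np.eye(n)
X = rng.multivariate_normal(np.zeros(n), Sigma, size=n_samples)
w_star = np.zeros(n)
w_star[:k] = 1.0
y = X @ w_star + 0.1 * rng.standard_normal(n_samples)
w_hat = preconditioned_lasso(X, y, S=np.eye(n), alpha=0.05)
print(prediction_error(w_hat, w_star, Sigma))
```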