Joint Sparsity with Different Measurement Matrices
We consider a generalization of the multiple measurement vector (MMV)
problem, where the measurement matrices are allowed to differ across
measurements. This problem arises naturally when, e.g., multiple
measurements are taken over time and the measurement modality (matrix) is
time-varying.
We derive probabilistic recovery guarantees showing that, under certain
(mild) conditions on the measurement matrices, l2/l1-norm minimization and a variant
of orthogonal matching pursuit fail with a probability that decays
exponentially in the number of measurements. This allows us to conclude that,
perhaps surprisingly, recovery performance does not suffer from the individual
measurements being taken through different measurement matrices. What is more,
recovery performance typically benefits (significantly) from diversity in the
measurement matrices; we specify conditions under which such improvements are
obtained. These results continue to hold when the measurements are subject to
(bounded) noise.
Comment: Allerton 201
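As a concrete illustration of the setting, here is a minimal sketch of an OMP-style greedy recovery in which a common support is estimated from measurements taken through different matrices, by aggregating column correlations across all of them. This is an illustrative sketch, not the paper's exact algorithm or guarantees; the function name and the toy experiment are assumptions made for the example.

    import numpy as np

    def mmv_omp(As, ys, k):
        """Greedy common-support recovery through different matrices.

        As : list of (m, n) measurement matrices A_j
        ys : list of (m,) measurements y_j = A_j x_j, common support
        k  : assumed common sparsity level
        """
        n = As[0].shape[1]
        residuals = [y.copy() for y in ys]
        support = []
        for _ in range(k):
            # Aggregate each column's correlation with the residuals
            # across all measurements.
            scores = np.zeros(n)
            for A, r in zip(As, residuals):
                scores += (A.T @ r) ** 2
            scores[support] = -np.inf      # never re-select an index
            support.append(int(np.argmax(scores)))
            # Re-fit every measurement on the current support.
            for j, (A, y) in enumerate(zip(As, ys)):
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residuals[j] = y - A[:, support] @ coef
        return sorted(support)

    # Toy experiment: J jointly sparse vectors observed through J
    # different Gaussian measurement matrices.
    rng = np.random.default_rng(0)
    m, n, k, J = 30, 100, 5, 8
    true_support = rng.choice(n, size=k, replace=False)
    As, ys = [], []
    for _ in range(J):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[true_support] = rng.standard_normal(k)
        As.append(A)
        ys.append(A @ x)
    print("true:", sorted(true_support.tolist()))
    print("est :", mmv_omp(As, ys, k))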
The Sparsity Gap: Uncertainty Principles Proportional to Dimension
In an incoherent dictionary, most signals that admit a sparse representation
admit a unique sparse representation. In other words, there is no way to
express the signal without using strictly more atoms. This work demonstrates
that sparse signals typically enjoy a higher privilege: each nonoptimal
representation of the signal requires far more atoms than the sparsest
representation, unless it contains many of the same atoms as the sparsest
representation. One impact of this finding is to confer a certain degree of
legitimacy on the particular atoms that appear in a sparse representation. This
result can also be viewed as an uncertainty principle for random sparse signals
over an incoherent dictionary.
Comment: 6 pages. To appear in the Proceedings of the 44th Ann. IEEE Conf. on
Information Sciences and Systems
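For context, the classical worst-case guarantee behind results of this kind states that in a dictionary with mutual coherence mu, any representation using fewer than (1 + 1/mu)/2 atoms is the unique sparsest one. The snippet below, a minimal sketch over an arbitrary random dictionary, computes the coherence and the implied uniqueness threshold; the sparsity gap of the paper is a stronger, typical-case refinement of such worst-case bounds.

    import numpy as np

    # Mutual coherence of a dictionary and the classical uniqueness
    # threshold: any representation with fewer than (1 + 1/mu)/2 atoms
    # is the unique sparsest one.
    rng = np.random.default_rng(1)
    m, n = 64, 128
    D = rng.standard_normal((m, n))
    D /= np.linalg.norm(D, axis=0)       # unit-norm atoms

    G = np.abs(D.T @ D)                  # inner products between atoms
    np.fill_diagonal(G, 0.0)
    mu = G.max()                         # mutual coherence
    print(f"coherence mu = {mu:.3f}")
    print(f"uniqueness guaranteed below sparsity {(1 + 1 / mu) / 2:.2f}")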
Conditioning of Random Block Subdictionaries with Applications to Block-Sparse Recovery and Regression
The linear model, in which a set of observations is assumed to be given by a
linear combination of columns of a matrix, has long been the mainstay of the
statistics and signal processing literature. One particular challenge for
inference under linear models is understanding the conditions on the dictionary
under which reliable inference is possible. This challenge has attracted
renewed attention in recent years since many modern inference problems deal
with the "underdetermined" setting, in which the number of observations is much
smaller than the number of columns in the dictionary. This paper makes several
contributions for this setting when the set of observations is given by a
linear combination of a small number of groups of columns of the dictionary,
termed the "block-sparse" case. First, it specifies conditions on the
dictionary under which most block subdictionaries are well conditioned. This
result is fundamentally different from prior work on block-sparse inference
because (i) it provides conditions that can be explicitly computed in
polynomial time, (ii) the given conditions translate into near-optimal scaling
of the number of columns of the block subdictionaries as a function of the
number of observations for a large class of dictionaries, and (iii) it suggests
that the spectral norm and the quadratic-mean block coherence of the dictionary
(rather than the worst-case coherences) fundamentally limit the scaling of
dimensions of the well-conditioned block subdictionaries. Second, this paper
investigates the problems of block-sparse recovery and block-sparse regression
in underdetermined settings. Near-optimal block-sparse recovery and regression
are possible for certain dictionaries as long as the dictionary satisfies
easily computable conditions and the coefficients describing the linear
combination of groups of columns can be modeled through a mild statistical
prior.
Comment: 39 pages, 3 figures. A revised and expanded version of the paper
published in IEEE Transactions on Information Theory (DOI:
10.1109/TIT.2015.2429632); this revision includes corrections in the proofs
of some of the results.
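The quantities the paper identifies as fundamental are simple to compute. The sketch below builds a random block-structured dictionary, evaluates its spectral norm and its worst-case and quadratic-mean block coherences (here taken as spectral norms of the off-diagonal cross-Gram blocks; the exact normalization in the paper may differ), and checks the conditioning of a randomly drawn block subdictionary through its singular values.

    import numpy as np

    rng = np.random.default_rng(2)
    m, d, nblocks = 64, 4, 50            # 50 blocks of d = 4 columns each
    D = rng.standard_normal((m, d * nblocks))
    D /= np.linalg.norm(D, axis=0)       # unit-norm columns
    blocks = [D[:, i * d:(i + 1) * d] for i in range(nblocks)]

    # Spectral norms of the off-diagonal cross-Gram blocks D_i^T D_j.
    cross = np.array([[np.linalg.norm(blocks[i].T @ blocks[j], 2)
                       for j in range(nblocks)] for i in range(nblocks)])
    off = cross[~np.eye(nblocks, dtype=bool)]
    print(f"worst-case block coherence    : {off.max():.3f}")
    print(f"quadratic-mean block coherence: {np.sqrt((off ** 2).mean()):.3f}")
    print(f"spectral norm of D            : {np.linalg.norm(D, 2):.3f}")

    # A random block subdictionary is well conditioned when all of its
    # singular values are close to 1.
    s = 8
    chosen = rng.choice(nblocks, size=s, replace=False)
    sub = np.hstack([blocks[i] for i in chosen])
    sv = np.linalg.svd(sub, compute_uv=False)
    print(f"subdictionary singular values in [{sv.min():.3f}, {sv.max():.3f}]")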
A unified approach to model selection and sparse recovery using regularized least squares
Model selection and sparse recovery are two important problems for which many
regularization methods have been proposed. We study the properties of
regularization methods in both problems under the unified framework of
regularized least squares with concave penalties. For model selection, we
establish conditions under which a regularized least squares estimator enjoys a
nonasymptotic property, called the weak oracle property, where the
dimensionality can grow exponentially with sample size. For sparse recovery, we
present a sufficient condition that ensures the recoverability of the sparsest
solution. In particular, we approach both problems by considering a family of
penalties that give a smooth homotopy between the l0 and l1 penalties. We
also propose the sequentially and iteratively reweighted squares (SIRS)
algorithm for sparse recovery. Numerical studies support our theoretical
results and demonstrate the advantage of our new methods for model selection
and sparse recovery.
Comment: Published in at http://dx.doi.org/10.1214/09-AOS683 the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
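To make the reweighting idea concrete, here is a minimal sketch of a generic iteratively reweighted least-squares loop for underdetermined sparse recovery. It is a hypothetical stand-in in the spirit of SIRS-type schemes, not the paper's actual algorithm or penalty family; the function name, parameter choices, and toy experiment are assumptions for the example.

    import numpy as np

    def irls_sparse(A, y, p=1.0, n_iter=50, eps=1e-2):
        """Iteratively reweighted least squares for underdetermined A x = y.

        A generic reweighting sketch (not the paper's SIRS algorithm).
        Each step solves a weighted minimum-norm problem; the weights
        shrink small coefficients, mimicking an l_p penalty with p <= 1.
        """
        m, n = A.shape
        x, *_ = np.linalg.lstsq(A, y, rcond=None)   # minimum-l2 start
        for _ in range(n_iter):
            w = (x ** 2 + eps) ** (1.0 - p / 2.0)   # weights ~ |x|^(2-p)
            AW = A * w                              # = A @ diag(w)
            x = w * (A.T @ np.linalg.solve(AW @ A.T, y))
            eps = max(eps * 0.5, 1e-10)             # anneal the smoothing
        return x

    rng = np.random.default_rng(3)
    m, n, k = 40, 100, 6
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x0 = np.zeros(n)
    x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    x_hat = irls_sparse(A, A @ x0)
    err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
    print(f"relative recovery error: {err:.2e}")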