Modulated Unit-Norm Tight Frames for Compressed Sensing
In this paper, we propose a compressed sensing (CS) framework that consists
of three parts: a unit-norm tight frame (UTF), a random diagonal matrix and a
column-wise orthonormal matrix. We prove that this structure satisfies the
restricted isometry property (RIP) with high probability for s-sparse
signals of length n, provided the number of measurements is sufficiently
large and the column-wise orthonormal matrix is bounded. Some existing structured
sensing models can be studied under this framework, which then gives tighter
bounds on the required number of measurements to satisfy the RIP. More
importantly, we propose several structured sensing models by appealing to this
unified framework, such as a general sensing model with arbitrary/deterministic
subsamplers, a fast and efficient block compressed sensing scheme, and
structured sensing matrices with deterministic phase modulations, all of which
can lead to improvements on practical applications. In particular, one of the
constructions is applied to simplify the transceiver design of CS-based channel
estimation for orthogonal frequency division multiplexing (OFDM) systems.
Comment: submitted to IEEE Transactions on Signal Processing
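The three-part structure described above can be sketched numerically. This is a minimal toy construction, not the paper's exact model: a unitary DFT matrix stands in for the unit-norm tight frame, a Rademacher diagonal provides the random modulation, and a row subsampler plays the role of the bounded orthonormal factor.

```python
import numpy as np

# Toy version of the three-part sensing model Phi = P * D * U (assumed
# arrangement for illustration):
#   U : unit-norm tight frame (here the unitary DFT, whose columns have norm 1),
#   D : random diagonal matrix with independent +/-1 (Rademacher) entries,
#   P : row subsampler standing in for the bounded orthonormal factor.
rng = np.random.default_rng(0)
n, m, s = 64, 24, 3

U = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT as a simple UTF
D = np.diag(rng.choice([-1.0, 1.0], size=n))    # random diagonal modulation
rows = rng.choice(n, size=m, replace=False)
P = np.eye(n)[rows]                             # select m of the n rows

Phi = P @ D @ U                                 # m x n structured sensing matrix

# Measure an s-sparse signal of length n.
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x
print(y.shape)                                  # (24,)
```

Because each factor admits a fast transform (FFT, diagonal scaling, subsampling), the full m x n matrix never needs to be formed in practice; the product above is only for illustration.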
Conditioning of Random Block Subdictionaries with Applications to Block-Sparse Recovery and Regression
The linear model, in which a set of observations is assumed to be given by a
linear combination of columns of a matrix, has long been the mainstay of the
statistics and signal processing literature. One particular challenge for
inference under linear models is understanding the conditions on the dictionary
under which reliable inference is possible. This challenge has attracted
renewed attention in recent years since many modern inference problems deal
with the "underdetermined" setting, in which the number of observations is much
smaller than the number of columns in the dictionary. This paper makes several
contributions for this setting when the set of observations is given by a
linear combination of a small number of groups of columns of the dictionary,
termed the "block-sparse" case. First, it specifies conditions on the
dictionary under which most block subdictionaries are well conditioned. This
result is fundamentally different from prior work on block-sparse inference
because (i) it provides conditions that can be explicitly computed in
polynomial time, (ii) the given conditions translate into near-optimal scaling
of the number of columns of the block subdictionaries as a function of the
number of observations for a large class of dictionaries, and (iii) it suggests
that the spectral norm and the quadratic-mean block coherence of the dictionary
(rather than the worst-case coherences) fundamentally limit the scaling of
dimensions of the well-conditioned block subdictionaries. Second, this paper
investigates the problems of block-sparse recovery and block-sparse regression
in underdetermined settings. Near-optimal block-sparse recovery and regression
are possible for certain dictionaries as long as the dictionary satisfies
easily computable conditions and the coefficients describing the linear
combination of groups of columns can be modeled through a mild statistical
prior.
Comment: 39 pages, 3 figures. A revised and expanded version of the paper
published in IEEE Transactions on Information Theory (DOI:
10.1109/TIT.2015.2429632); this revision includes corrections in the proofs
of some of the results
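The quantities this abstract emphasizes can be illustrated on a random dictionary. The block partitioning and the averaged coherence below are illustrative stand-ins, not the paper's formal definitions: the sketch computes the spectral norm, a root-mean-square block coherence over cross-Gram blocks, and the condition number of one randomly drawn block subdictionary.

```python
import numpy as np

# Hedged sketch of the dictionary-level quantities: spectral norm, an
# average (quadratic-mean) block coherence, and the conditioning of a
# random block subdictionary. Definitions here are illustrative assumptions.
rng = np.random.default_rng(1)
n, p, blk = 64, 256, 4                  # rows, columns, block size
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)          # unit-norm columns
blocks = [D[:, i:i + blk] for i in range(0, p, blk)]

spec = np.linalg.norm(D, 2)             # spectral norm of the dictionary

# RMS of spectral norms of cross-Gram blocks D_i^T D_j over distinct pairs
# (an assumed form of "quadratic-mean block coherence").
cross = [np.linalg.norm(blocks[i].T @ blocks[j], 2)
         for i in range(len(blocks)) for j in range(len(blocks)) if i != j]
rms_coh = np.sqrt(np.mean(np.square(cross)))

# Draw a random block subdictionary and inspect its conditioning.
k = 4                                   # number of active blocks
idx = rng.choice(len(blocks), k, replace=False)
S = np.hstack([blocks[i] for i in idx])
cond = np.linalg.cond(S)
print(spec, rms_coh, cond)
```

Repeating the last step over many random draws of the block support is one way to check empirically that "most" block subdictionaries of a given dictionary are well conditioned.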
The Computational Complexity of the Restricted Isometry Property, the Nullspace Property, and Related Concepts in Compressed Sensing
This paper deals with the computational complexity of conditions which
guarantee that the NP-hard problem of finding the sparsest solution to an
underdetermined linear system can be solved by efficient algorithms. In the
literature, several such conditions have been introduced. The most well-known
ones are the mutual coherence, the restricted isometry property (RIP), and the
nullspace property (NSP). While evaluating the mutual coherence of a given
matrix is easy, it has been suspected for some time that evaluating RIP and NSP
is computationally intractable in general. We confirm these conjectures by
showing that for a given matrix A and positive integer k, computing the best
constants for which the RIP or NSP hold is, in general, NP-hard. These results
are based on the fact that determining the spark of a matrix is NP-hard, which
is also established in this paper. Furthermore, we also give several complexity
statements about problems related to the above concepts.
Comment: 13 pages; accepted for publication in IEEE Trans. Inf. Theory
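The contrast the abstract draws can be made concrete in a few lines: mutual coherence is a cheap polynomial-time computation, whereas the spark (the smallest number of linearly dependent columns) is computed below by brute-force enumeration over column subsets, reflecting the exponential blow-up one expects from an NP-hard quantity. The small example matrix is illustrative only.

```python
import itertools

import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    G = A / np.linalg.norm(A, axis=0)
    M = np.abs(G.T @ G)
    np.fill_diagonal(M, 0.0)
    return M.max()

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns, by exhaustive search
    (exponential in the worst case, matching the NP-hardness result)."""
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1                        # all columns linearly independent

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
print(mutual_coherence(A))              # ~0.7071 (columns 1 and 3, 2 and 3)
print(spark(A))                         # 3: any 2 columns independent
```

For a 2 x 3 matrix this search is trivial, but the subset enumeration grows combinatorially with the number of columns, which is exactly why certifying RIP or NSP via such quantities is intractable in general.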
Compressed sensing with combinatorial designs: theory and simulations
In 'An asymptotic result on compressed sensing matrices', a new construction
for compressed sensing matrices using combinatorial design theory was
introduced. In this paper, we use deterministic and probabilistic methods to
analyse the performance of matrices obtained from this construction. We provide
new theoretical results and detailed simulations. These simulations indicate
that the construction is competitive with Gaussian random matrices, and that
recovery is tolerant to noise. A new recovery algorithm tailored to the
construction is also given.
Comment: 18 pages, 3 figures
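The kind of simulation the abstract reports can be sketched against the Gaussian baseline it compares with (the combinatorial-design construction itself is not reproduced here). The sketch below runs a plain orthogonal matching pursuit, an assumed stand-in for the paper's tailored recovery algorithm, on noisy measurements from a Gaussian sensing matrix.

```python
import numpy as np

# Hedged baseline simulation: sparse recovery from noisy Gaussian
# measurements via orthogonal matching pursuit (OMP). This is a generic
# stand-in, not the recovery algorithm tailored to the design construction.
rng = np.random.default_rng(2)
n, m, s = 128, 40, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian sensing matrix

support = rng.choice(n, s, replace=False)
x = np.zeros(n)
x[support] = rng.standard_normal(s) + np.sign(rng.standard_normal(s))
y = A @ x + 0.01 * rng.standard_normal(m)       # noisy measurements

def omp(A, y, s):
    """Greedy OMP: pick s atoms by correlation, least-squares refit each step."""
    r, idx = y.copy(), []
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    xhat = np.zeros(A.shape[1])
    xhat[idx] = coef
    return xhat

xhat = omp(A, y, s)
err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
print(round(err, 4))
```

Sweeping the noise level and the measurement count m, and swapping in the design-based matrix for A, reproduces the kind of noise-tolerance comparison the simulations in the paper are making.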