The road to deterministic matrices with the restricted isometry property
The restricted isometry property (RIP) is a well-known matrix condition that
provides state-of-the-art reconstruction guarantees for compressed sensing.
While random matrices are known to satisfy this property with high probability,
deterministic constructions have found less success. In this paper, we consider
various techniques for demonstrating RIP deterministically, some popular and
some novel, and we evaluate their performance. In evaluating some techniques,
we apply random matrix theory and inadvertently find a simple alternative proof
that certain random matrices are RIP. Later, we propose a particular class of
matrices as candidates for being RIP, namely, equiangular tight frames (ETFs).
Using the known correspondence between real ETFs and strongly regular graphs,
we investigate certain combinatorial implications of a real ETF being RIP.
Specifically, we give probabilistic intuition for a new bound on the clique
number of Paley graphs of prime order, and we conjecture that the corresponding
ETFs are RIP in a manner similar to random matrices.
Comment: 24 pages
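As a small illustration of the RIP condition discussed in this abstract (a sketch, not taken from the paper): for a matrix with unit-norm columns, the order-2 restricted isometry constant δ_2 coincides with the coherence, the largest absolute inner product between distinct columns, because the 2x2 Gram matrix of columns i, j has eigenvalues 1 ± |⟨a_i, a_j⟩|. This makes δ_2 exactly computable for small examples such as a random sign matrix:

```python
import math
import random

def coherence(A):
    """Largest |inner product| between distinct columns of A, after
    normalizing each column to unit length. A is a list of rows."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    cols = [[x / math.sqrt(sum(v * v for v in c)) for x in c] for c in cols]
    return max(abs(sum(u * v for u, v in zip(cols[i], cols[j])))
               for i in range(n) for j in range(i + 1, n))

def delta2(A):
    # For unit-norm columns, the Gram matrix of any column pair (i, j) is
    # [[1, c], [c, 1]] with c = <a_i, a_j>, whose eigenvalues are 1 +/- |c|.
    # Hence the order-2 RIP constant equals the coherence.
    return coherence(A)

# Random +/-1/sqrt(m) (Bernoulli) matrix: one of the random ensembles known
# to satisfy RIP with high probability when m is large enough.
random.seed(0)
m, n = 16, 32
A = [[random.choice((-1, 1)) / math.sqrt(m) for _ in range(n)] for _ in range(m)]
d2 = delta2(A)
```

For s > 2 no such closed form exists, which is one reason verifying RIP deterministically is hard: a brute-force check ranges over all column subsets of size s.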
Sparse Recovery from Combined Fusion Frame Measurements
Sparse representations have emerged as a powerful tool in signal and
information processing, culminating in the success of new acquisition and
processing techniques such as Compressed Sensing (CS). Fusion frames are very
rich new signal representation methods that use collections of subspaces
instead of vectors to represent signals. This work combines these exciting
fields to introduce a new sparsity model for fusion frames. Signals that are
sparse under the new model can be compressively sampled and uniquely
reconstructed in ways similar to sparse signals using standard CS. The
combination provides a promising new set of mathematical tools and signal
models useful in a variety of applications. With the new model, a sparse signal
has energy in very few of the subspaces of the fusion frame, although it does
not need to be sparse within each of the subspaces it occupies. This sparsity
model is captured using a mixed l1/l2 norm for fusion frames.
A signal sparse in a fusion frame can be sampled using very few random
projections and exactly reconstructed using a convex optimization that
minimizes this mixed l1/l2 norm. The provided sampling conditions generalize
coherence and RIP conditions used in standard CS theory. It is demonstrated
that they are sufficient to guarantee sparse recovery of any signal sparse in
our model. Moreover, a probabilistic analysis is provided using a stochastic
model on the sparse signal that shows that under very mild conditions the
probability of recovery failure decays exponentially with increasing dimension
of the subspaces.
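The mixed l1/l2 norm that captures this sparsity model can be illustrated with a simplified block version (a sketch, not from the paper: the fusion-frame setting uses general subspaces, while here each "subspace" is just a block of consecutive coordinates). The norm takes an l2 norm within each block and an l1 sum across blocks, so it penalizes the number of active blocks rather than individual nonzero entries:

```python
import math

def mixed_l1_l2_norm(x, d):
    """Mixed l1/l2 norm of a signal x split into consecutive blocks of
    size d: the sum (l1 across blocks) of the Euclidean norms (l2 within
    a block)."""
    assert len(x) % d == 0, "signal length must be a multiple of the block size"
    return sum(math.sqrt(sum(v * v for v in x[i:i + d]))
               for i in range(0, len(x), d))

# Three blocks of size 2: (3,4), (0,0), (5,12) -> norms 5 + 0 + 13 = 18
x = [3.0, 4.0, 0.0, 0.0, 5.0, 12.0]
val = mixed_l1_l2_norm(x, 2)
```

Minimizing this norm subject to the measurement constraints (a convex program) favors solutions whose energy is concentrated in few blocks, mirroring the recovery guarantee described above.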
Deterministic Constructions of Binary Measurement Matrices from Finite Geometry
Deterministic constructions of measurement matrices in compressed sensing
(CS) are considered in this paper. The constructions are inspired by the recent
result of Dimakis, Smarandache, and Vontobel, which shows that parity-check
matrices of good low-density parity-check (LDPC) codes can be used as
provably good measurement matrices for compressed sensing under
l1-minimization. The performance of the proposed binary measurement
matrices is analyzed mainly theoretically, using methods and results from
(finite geometry) LDPC codes. In particular, several
lower bounds on the spark (i.e., the smallest number of columns that are
linearly dependent, which completely characterizes the recovery performance of
l0-minimization) of general binary matrices and finite geometry matrices
are obtained, and they improve the previously known results in most cases.
Simulation results show that the proposed matrices perform comparably to,
sometimes even better than, the corresponding Gaussian random matrices.
Moreover, the proposed matrices are sparse and binary, and most of them have a
cyclic or quasi-cyclic structure, which makes hardware realization
convenient.
Comment: 12 pages, 11 figures
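The spark defined in this abstract can be computed by brute force for tiny matrices (a sketch, not from the paper, and exponential in the number of columns, so only illustrative): find the smallest k for which some k columns are linearly dependent, i.e., some k-column submatrix has rank below k. Exact rational arithmetic avoids floating-point rank errors:

```python
from fractions import Fraction
from itertools import combinations

def rank(mat):
    """Rank of a matrix (list of rows) via Gaussian elimination over the
    rationals, so the result is exact for integer/binary entries."""
    mat = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(mat[0]) if mat else 0):
        piv = next((i for i in range(r, len(mat)) if mat[i][c] != 0), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][c] != 0:
                f = mat[i][c] / mat[r][c]
                mat[i] = [a - f * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

def spark(A):
    """Smallest number of linearly dependent columns of A (list of rows);
    infinity if the columns are linearly independent."""
    m, n = len(A), len(A[0])
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            sub = [[A[i][j] for j in S] for i in range(m)]
            if rank(sub) < k:
                return k
    return float('inf')
```

A sparsity level s is exactly recoverable by l0-minimization whenever 2s < spark(A), which is why lower-bounding the spark of a deterministic matrix directly yields recovery guarantees.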
Explicit measurements with almost optimal thresholds for compressed sensing
We consider the deterministic construction of a measurement
matrix and a recovery method for signals that are block
sparse. A signal of dimension N = nd, which consists
of n blocks of size d, is called (s, d)-block sparse if
only s blocks out of n are nonzero. We construct an explicit
linear mapping Φ that maps an (s, d)-block sparse signal
to a measurement vector of dimension M, where s·d < N(1 - (1 - M/N)^(d/(d+1)) - o(1)).
We show that if the (s, d)-block sparse signal is chosen
uniformly at random, then the signal can almost surely be
reconstructed from the measurement vector in O(N^3)
computations.
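The (s, d)-block sparsity model defined in this abstract is easy to state in code (a sketch, not from the paper): split the length-N = nd signal into n consecutive blocks of size d and count how many blocks contain any nonzero entry.

```python
def block_support(x, d):
    """Indices of the size-d blocks of x that contain a nonzero entry."""
    assert len(x) % d == 0, "signal length must be a multiple of the block size"
    return [i for i in range(len(x) // d) if any(x[i * d:(i + 1) * d])]

def is_block_sparse(x, s, d):
    """True if x, of dimension N = n*d, is (s, d)-block sparse: at most s
    of its n blocks are nonzero. A block need not be sparse internally."""
    return len(block_support(x, d)) <= s

# N = 12 split into n = 4 blocks of d = 3; only blocks 0 and 2 are nonzero,
# so the signal is (2, 3)-block sparse but has 3 nonzero scalar entries.
x = [1, 0, 2, 0, 0, 0, 0, 5, 0, 0, 0, 0]
```

Note the contrast with plain sparsity: recovery guarantees here depend on the number of active blocks s, not on how many entries are nonzero within each active block.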
Task-Driven Adaptive Statistical Compressive Sensing of Gaussian Mixture Models
A framework for adaptive and non-adaptive statistical compressive sensing is
developed, where a statistical model replaces the standard sparsity model of
classical compressive sensing. We propose within this framework optimal
task-specific sensing protocols specifically and jointly designed for
classification and reconstruction. A two-step adaptive sensing paradigm is
developed, where online sensing is applied to detect the signal class in the
first step, followed by a reconstruction step adapted to the detected class and
the observed samples. The approach is based on information theory, here
tailored for Gaussian mixture models (GMMs), where an information-theoretic
objective relationship between the sensed signals and a representation of the
specific task of interest is maximized. Experimental results using synthetic
signals, Landsat satellite attributes, and natural images of different sizes
and with different noise levels show the improvements achieved using the
proposed framework when compared to more standard sensing protocols. The
underlying formulation can be applied beyond GMMs, at the price of higher
mathematical and computational complexity.
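The two-step adaptive paradigm described above can be caricatured in a scalar toy model (a heavily simplified sketch, not the paper's information-theoretic sensing design: the signal is one-dimensional, the "sensing" is a single noisy sample, and all parameters below are hypothetical). Step 1 detects the mixture component from the posterior; step 2 applies the MMSE (Wiener) estimator adapted to the detected component:

```python
import math

def gauss_pdf(y, mean, var):
    """Density of a scalar Gaussian N(mean, var) at y."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def detect_class(y, priors, means, vars_, noise_var):
    """Step 1: pick the class k maximizing p(k | y) for y = x + w,
    x ~ N(means[k], vars_[k]) and w ~ N(0, noise_var)."""
    post = [p * gauss_pdf(y, m, v + noise_var)
            for p, m, v in zip(priors, means, vars_)]
    return max(range(len(post)), key=lambda k: post[k])

def reconstruct(y, k, means, vars_, noise_var):
    """Step 2: MMSE estimate of x given y, conditioned on class k
    (the scalar Wiener filter for that Gaussian component)."""
    gain = vars_[k] / (vars_[k] + noise_var)
    return means[k] + gain * (y - means[k])

# Hypothetical two-component mixture: well-separated means, unit variances.
priors, means, vars_ = [0.5, 0.5], [0.0, 10.0], [1.0, 1.0]
noise_var = 1.0
y = 1.0
k = detect_class(y, priors, means, vars_, noise_var)
x_hat = reconstruct(y, k, means, vars_, noise_var)
```

In the vector GMM setting of the paper the same structure holds with covariance matrices in place of scalar variances, and the sensing matrix itself is optimized per step rather than fixed.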