Universal Compressed Sensing
In this paper, the problem of developing universal algorithms for compressed
sensing of stochastic processes is studied. First, R\'enyi's notion of
information dimension (ID) is generalized to analog stationary processes. This
provides a measure of complexity for such processes and is connected to the
number of measurements required for their accurate recovery. Then a minimum
entropy pursuit (MEP) optimization approach is proposed, and it is proven that
it can reliably recover any stationary process satisfying certain mixing
constraints from a sufficient number of randomized linear measurements, without
any prior information about the distribution of the process. It is
proved that a Lagrangian-type approximation of the MEP optimization problem,
referred to as the Lagrangian-MEP problem, is identical to a heuristic,
implementable algorithm proposed by Baron et al. It is shown that for the right
choice of parameters the Lagrangian-MEP algorithm, in addition to having the
same asymptotic performance as MEP optimization, is also robust to
measurement noise. For memoryless sources with a discrete-continuous mixture
distribution, the fundamental limits on the minimum number of measurements
required by a non-universal compressed sensing decoder have been characterized
by Wu et al. For such sources, it is proved that there is no loss in universal
coding: both MEP and Lagrangian-MEP asymptotically achieve the optimal
performance.
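As a rough illustration of the Lagrangian-MEP objective (an empirical-entropy term plus a weighted measurement misfit), the sketch below runs a brute-force search over a toy binary source. The zero-order entropy estimator, the signal length, and the Lagrange weight are illustrative assumptions, not the construction analyzed in the paper.

```python
import itertools

import numpy as np

def empirical_entropy(x):
    # zero-order empirical entropy of a binary sequence, in bits per symbol
    p = np.mean(x)
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def lagrangian_mep(A, y, lam=100.0):
    # Lagrangian-MEP, brute force: over all binary candidates, minimize
    # (empirical entropy) + lam * (measurement misfit)
    n = A.shape[1]
    best, best_cost = None, np.inf
    for bits in itertools.product([0.0, 1.0], repeat=n):
        u = np.array(bits)
        cost = empirical_entropy(u) + lam * np.sum((A @ u - y) ** 2)
        if cost < best_cost:
            best, best_cost = u, cost
    return best

rng = np.random.default_rng(0)
n, m = 8, 5
x = np.zeros(n)
x[[1, 4]] = 1.0                                # low-entropy binary signal
A = rng.standard_normal((m, n)) / np.sqrt(m)   # randomized linear measurements
y = A @ x                                      # noiseless measurements
x_hat = lagrangian_mep(A, y)
```

The exhaustive search is exponential in the signal length and is only meant to make the objective concrete; the Baron et al. algorithm that the paper identifies with Lagrangian-MEP is an efficient heuristic for an objective of this form.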
Compression-Based Compressed Sensing
Modern compression algorithms exploit complex structures that are present in
signals to describe them very efficiently. On the other hand, the field of
compressed sensing is built upon the observation that "structured" signals can
be recovered from their under-determined set of linear projections. Currently,
there is a large gap between the complexity of the structures studied in the
area of compressed sensing and those employed by the state-of-the-art
compression codes. Recent results in the literature on deterministic signals
aim at bridging this gap through devising compressed sensing decoders that
employ compression codes. This paper focuses on structured stochastic processes
and studies the application of rate-distortion codes to compressed sensing of
such signals. The performance of the previously proposed compressible signal
pursuit (CSP) algorithm is studied in this stochastic setting. It is proved
that in the very low distortion regime, as the blocklength grows to infinity,
the CSP algorithm reliably and robustly recovers instances of a stationary
process from random linear projections, as long as their count is slightly more
than the blocklength times the rate-distortion dimension (RDD) of the source. It is also
shown that under some regularity conditions, the RDD of a stationary process is
equal to its information dimension (ID). This connection establishes the
optimality of the CSP algorithm at least for memoryless stationary sources, for
which the fundamental limits are known. Finally, it is shown that the CSP
algorithm combined with a family of universal variable-length fixed-distortion
compression codes yields a family of universal compressed sensing recovery
algorithms.
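The core of CSP can be sketched in a few lines: among all codewords of a lossy compression code, choose the one most consistent with the linear measurements. The toy codebook of one-sparse, amplitude-quantized signals below is only a stand-in for a real rate-distortion code, and all dimensions are illustrative.

```python
import numpy as np

def csp_decode(A, y, codebook):
    # compressible signal pursuit: pick the codeword of the compression
    # code whose projections best match the observed measurements
    costs = [np.sum((A @ c - y) ** 2) for c in codebook]
    return codebook[int(np.argmin(costs))]

# toy "codebook": every 1-sparse signal with an amplitude on a small grid
n = 10
codebook = []
for i in range(n):
    for a in np.linspace(-1.0, 1.0, 9):
        c = np.zeros(n)
        c[i] = a
        codebook.append(c)

rng = np.random.default_rng(1)
m = 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random linear projections
x = np.zeros(n)
x[3] = 0.75                                    # a signal the code describes exactly
y = A @ x
x_hat = csp_decode(A, y, codebook)
```

The point of the stochastic analysis is that the number of rows m needed for this kind of decoder is governed by the rate-distortion dimension of the source, not by the naive size of the codebook.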
Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation
We study the compressed sensing (CS) signal estimation problem where an input
signal is measured via a linear matrix multiplication under additive noise.
While this setup usually assumes sparsity or compressibility in the input
signal during recovery, the signal structure that can be leveraged is often not
known a priori. In this paper, we consider universal CS recovery, where the
statistics of a stationary ergodic signal source are estimated simultaneously
with the signal itself. Inspired by Kolmogorov complexity and minimum
description length, we focus on a maximum a posteriori (MAP) estimation
framework that leverages universal priors to match the complexity of the
source. Our framework can also be applied to general linear inverse problems
where more measurements than in CS might be needed. We provide theoretical
results that support the algorithmic feasibility of universal MAP estimation
using a Markov chain Monte Carlo implementation, which is computationally
challenging. We incorporate some techniques to accelerate the algorithm while
providing comparable and in many cases better reconstruction quality than
existing algorithms. Experimental results show the promise of universality in
CS, particularly for low-complexity sources that do not exhibit standard
sparsity or compressibility.
Comment: 29 pages, 8 figures.
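A caricature of the MCMC implementation: a Metropolis sampler over quantized (here binary) signals whose energy adds a complexity estimate to the measurement misfit. The zero-order empirical entropy used as the complexity proxy, the fixed inverse temperature, and all sizes are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def entropy_bits(x):
    # zero-order empirical entropy, a crude stand-in for a universal
    # complexity measure (bits per symbol)
    p = np.mean(x)
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mcmc_map(A, y, n_iters=3000, beta=30.0, lam=10.0, seed=0):
    # Metropolis sampling over binary signals targeting
    # exp(-beta * [complexity + lam * misfit]), with single-bit flips
    rng = np.random.default_rng(seed)
    n = A.shape[1]

    def energy(u):
        return entropy_bits(u) + lam * np.sum((A @ u - y) ** 2)

    x = rng.integers(0, 2, n).astype(float)
    e = energy(x)
    for _ in range(n_iters):
        i = rng.integers(n)
        prop = x.copy()
        prop[i] = 1.0 - prop[i]                 # single-bit-flip proposal
        e_prop = energy(prop)
        if e_prop <= e or rng.random() < np.exp(-beta * (e_prop - e)):
            x, e = prop, e_prop
    return x

rng = np.random.default_rng(3)
n, m = 12, 7
x_true = np.zeros(n)
x_true[[2, 9]] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = mcmc_map(A, y)
```

In the paper the sampler runs over a richer quantized alphabet with a genuinely universal complexity estimate, and the acceleration techniques matter in practice; this sketch only shows the energy and proposal structure.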
Sequential Compressed Sensing
Compressed sensing allows perfect recovery of sparse signals (or signals
sparse in some basis) using only a small number of random measurements.
Existing results in compressed sensing literature have focused on
characterizing the achievable performance by bounding the number of samples
required for a given level of signal sparsity. However, using these bounds to
minimize the number of samples requires a priori knowledge of the sparsity of
the unknown signal, or the decay structure for near-sparse signals.
Furthermore, there are some popular recovery methods for which no such bounds
are known.
In this paper, we investigate an alternative scenario where observations are
available in sequence. For any recovery method, this means that there is now a
sequence of candidate reconstructions. We propose a method to estimate the
reconstruction error directly from the samples themselves, for every candidate
in this sequence. This estimate is universal in the sense that it is based only
on the measurement ensemble, and not on the recovery method or any assumed
level of sparsity of the unknown signal. With these estimates, one can now stop
observations as soon as there is reasonable certainty of either exact or
sufficiently accurate reconstruction. They also provide a way to obtain
"run-time" guarantees for recovery methods that otherwise lack a priori
performance bounds.
We investigate both continuous (e.g. Gaussian) and discrete (e.g. Bernoulli)
random measurement ensembles, both for exactly sparse and general near-sparse
signals, and with both noisy and noiseless measurements.
Comment: To appear in IEEE Transactions on Special Topics in Signal Processing.
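The error estimate rests on a simple identity: if a fresh measurement vector a has i.i.d. N(0,1) entries, then E[(a^T x - a^T x_hat)^2] = ||x - x_hat||^2, so averaging squared residuals over held-out measurements estimates the reconstruction error of any candidate without knowing x. A sketch under that Gaussian-ensemble assumption (the sizes and the synthetic candidate reconstruction are illustrative):

```python
import numpy as np

def estimate_sq_error(A_new, y_new, x_hat):
    # for a Gaussian ensemble, each squared residual (a^T x - a^T x_hat)^2
    # is an unbiased estimate of ||x - x_hat||^2; averaging over the
    # held-out measurements reduces the variance
    resid = y_new - A_new @ x_hat
    return float(np.mean(resid ** 2))

rng = np.random.default_rng(7)
n = 50
x = rng.standard_normal(n)                     # unknown signal
x_hat = x + 0.1 * rng.standard_normal(n)       # some candidate reconstruction
A_new = rng.standard_normal((2000, n))         # fresh N(0,1) measurement rows
y_new = A_new @ x                              # newly observed samples
est = estimate_sq_error(A_new, y_new, x_hat)
true_err = float(np.sum((x - x_hat) ** 2))
```

Because the estimate uses only the measurement ensemble, it applies to any recovery method and any candidate in the sequence, which is what enables the stopping rule described in the abstract.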
Deterministic Construction of Binary, Bipolar and Ternary Compressed Sensing Matrices
In this paper we establish a connection between Orthogonal Optical Codes
(OOC) and binary compressed sensing matrices. We also introduce deterministic
bipolar matrices that fulfill the restricted isometry property (RIP); the
columns of these matrices are binary BCH code vectors with the zeros replaced
by -1. Since the RIP is established by means of coherence,
the simple greedy algorithms such as Matching Pursuit are able to recover the
sparse solution from the noiseless samples. Due to the cyclic property of the
BCH codes, we show that the FFT algorithm can be employed in the reconstruction
methods to considerably reduce the computational complexity. In addition, we
combine the binary and bipolar matrices to form ternary sensing matrices
(0, ±1 elements) that satisfy the RIP condition.
Comment: The paper is accepted for publication in IEEE Transactions on
Information Theory.
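The computational point about cyclic codes can be sketched as follows: if every column of the sensing matrix is a cyclic shift of one ±1 sequence, then the correlations A^T r needed by greedy decoders such as Matching Pursuit form a circular correlation, which the FFT evaluates in O(m log m) instead of O(m^2). The random ±1 base sequence below is only a stand-in for an actual BCH codeword.

```python
import numpy as np

def cyclic_bipolar_matrix(base):
    # columns are all cyclic shifts of one +/-1 sequence
    m = len(base)
    return np.stack([np.roll(base, k) for k in range(m)], axis=1)

def correlate_fft(base, r):
    # A^T r for the cyclic matrix above is a circular correlation,
    # computable via the FFT instead of an explicit matrix product
    return np.real(np.fft.ifft(np.conj(np.fft.fft(base)) * np.fft.fft(r)))

rng = np.random.default_rng(2)
m = 16
base = rng.choice([-1.0, 1.0], size=m)   # stand-in for a BCH-derived column
A = cyclic_bipolar_matrix(base)
r = rng.standard_normal(m)               # e.g. a residual inside Matching Pursuit
fast = correlate_fft(base, r)
slow = A.T @ r                           # direct O(m^2) computation
```

Inside a Matching Pursuit iteration one would take the index of the largest entry of this correlation vector; only the correlation step changes, so the FFT speedup composes with any coherence-based greedy recovery.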