Optimal Phase Transitions in Compressed Sensing
Compressed sensing deals with efficient recovery of analog signals from
linear encodings. This paper presents a statistical study of compressed sensing
by modeling the input signal as an i.i.d. process with known distribution.
Three classes of encoders are considered, namely optimal nonlinear, optimal
linear and random linear encoders. Focusing on optimal decoders, we investigate
the fundamental tradeoff between measurement rate and reconstruction fidelity
gauged by error probability and noise sensitivity in the absence and presence
of measurement noise, respectively. The optimal phase transition threshold is
determined as a functional of the input distribution and compared to suboptimal
thresholds achieved by popular reconstruction algorithms. In particular, we
show that Gaussian sensing matrices incur no penalty on the phase transition
threshold with respect to optimal nonlinear encoding. Our results also provide
a rigorous justification of previous results based on replica heuristics in the
weak-noise regime. Comment: to appear in IEEE Transactions on Information Theory
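For a concrete instance of how the threshold depends on the input distribution, here is a small
numerical sketch (not taken from the paper; the Bernoulli-Gaussian prior, grid range and
quantization levels are our choices). For a discrete-continuous mixture with a fraction gamma of
Gaussian components, the quantity H(floor(mX))/log2(m) tends to the sparsity gamma as the
quantizer is refined; this Renyi information dimension is closely tied to the phase-transition
thresholds the paper characterizes.

```python
# Sketch (not from the paper): for X ~ (1-gamma)*delta_0 + gamma*N(0,1), the ratio
# H(floor(m*X)) / log2(m) slowly approaches gamma as the quantizer is refined.
import numpy as np
from scipy.stats import norm

def quantized_entropy(gamma, m, lo=-8.0, hi=8.0):
    """Entropy (bits) of floor(m*X) for X ~ (1-gamma)*delta_0 + gamma*N(0,1)."""
    edges = np.arange(lo, hi + 1.0 / m, 1.0 / m)      # bin edges of width 1/m
    p = gamma * np.diff(norm.cdf(edges))              # Gaussian mass per bin
    zero_bin = np.searchsorted(edges, 0.0, side="right") - 1
    p[zero_bin] += 1.0 - gamma                        # point mass at 0 falls in one bin
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

gamma = 0.1
for m in [2**4, 2**8, 2**12, 2**16]:
    d_hat = quantized_entropy(gamma, m) / np.log2(m)
    print(f"m = {m:6d}   H(floor(mX))/log2(m) = {d_hat:.4f}   (sparsity gamma = {gamma})")
```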
Phase Diagram and Approximate Message Passing for Blind Calibration and Dictionary Learning
We consider dictionary learning and blind calibration for signals and
matrices created from a random ensemble. We study the mean-squared error in the
limit of large signal dimension using the replica method and unveil the
appearance of phase transitions delimiting impossible, possible-but-hard and
possible inference regions. We also introduce an approximate message passing
algorithm that asymptotically matches the theoretical performance, and show
through numerical tests that, for the calibration problem, it performs very
well at tractable system sizes. Comment: 5 pages
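The paper's AMP algorithm is specific to blind calibration and dictionary learning; as background,
here is a minimal sketch (ours, not the paper's algorithm; the soft-threshold denoiser, threshold
multiplier alpha and toy sizes are our choices) of the standard AMP iteration for the plain linear
model y = Ax, showing the structure such algorithms share: a componentwise denoiser plus the
Onsager correction on the residual.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp_l1(A, y, iters=50, alpha=1.5):
    """Basic AMP for y = A x with an i.i.d. Gaussian A and a soft-threshold denoiser."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        tau = np.sqrt(np.mean(z ** 2))      # empirical effective-noise estimate
        r = x + A.T @ z                     # pseudo-data
        x_new = soft(r, alpha * tau)        # componentwise denoising
        b = np.count_nonzero(x_new) / n     # Onsager coefficient (average denoiser derivative)
        z = y - A @ x_new + b * z           # residual with Onsager correction
        x = x_new
    return x

# toy example
rng = np.random.default_rng(0)
N, n, k = 1000, 400, 40
x0 = np.zeros(N); x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ x0
print("relative error:", np.linalg.norm(amp_l1(A, y) - x0) / np.linalg.norm(x0))
```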
Lossless Linear Analog Compression
We establish the fundamental limits of lossless linear analog compression by
considering the recovery of random vectors x in R^m from the noiseless linear
measurements y = Ax with measurement matrix A in R^(n x m). Specifically, for a
random vector x of arbitrary distribution we show that x can be recovered with
zero error probability from n > inf_U dim_MB(U) linear measurements, where
dim_MB denotes the lower modified Minkowski dimension and the infimum is over
all sets U in R^m with P[x in U] = 1. This achievability statement holds for
Lebesgue almost all measurement matrices A. We then show that s-rectifiable
random vectors---a stochastic generalization of s-sparse vectors---can be
recovered with zero error probability from n > s linear measurements. From
classical compressed sensing theory we would expect n >= s to be necessary for
successful recovery of x. Surprisingly, certain classes of s-rectifiable random
vectors can be recovered from fewer than s measurements. Imposing an additional
regularity condition on the distribution of s-rectifiable random vectors x, we
do get the expected converse result of s measurements being necessary. The
resulting class of random vectors appears to be new and will be referred to as
s-analytic random vectors.
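As a sanity check on the role of the Minkowski (box-counting) dimension, the following sketch
(ours, not from the paper) counts boxes for the set of vectors in [0,1]^m with at most s nonzero
coordinates: a grid-aligned box of side 1/k meets that set iff at most s of its coordinate
intervals lie away from 0, so N(1/k) = sum_{j<=s} C(m,j)(k-1)^j and log N / log k tends to s,
consistent with the n > s achievability threshold quoted above.

```python
# Sketch (not from the paper): the set of at-most-s-sparse vectors in [0,1]^m
# has box-counting dimension s.
from math import comb, log

def box_count(m, s, k):
    """Number of grid-aligned boxes of side 1/k in [0,1]^m meeting the set of
    vectors with at most s nonzero coordinates."""
    return sum(comb(m, j) * (k - 1) ** j for j in range(s + 1))

m, s = 10, 3
for k in [2**10, 2**20, 2**40, 2**80]:
    N = box_count(m, s, k)
    print(f"1/eps = 2^{k.bit_length() - 1:3d}   log N(eps) / log(1/eps) = {log(N) / log(k):.4f}")
```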
Replica Symmetry Breaking in Compressive Sensing
For noisy compressive sensing systems, the asymptotic distortion with respect
to an arbitrary distortion function is determined when a general class of
least-squares-based reconstruction schemes is employed. The sampling matrix is
considered to belong to a large ensemble of random matrices including i.i.d.
and projector matrices, and the source vector is assumed to be i.i.d. with a
desired distribution. We take a statistical mechanical approach by representing
the asymptotic distortion as a macroscopic parameter of a spin glass and
employing the replica method for the large-system analysis. In contrast to
earlier studies, we evaluate the general replica ansatz which includes the RS
ansatz as well as RSB. The generality of the solution enables us to study the
impact of symmetry breaking. Our numerical investigations show that for the
reconstruction scheme with the "zero-norm" penalty function, the RS ansatz
fails to predict the asymptotic distortion for relatively large compression
rates; however, the one-step RSB ansatz gives a valid prediction of the
performance over a wider range of compression rates. Comment: 7 pages, 3 figures, presented at ITA 201
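For context on the class of schemes being analyzed, here is a small sketch (ours; iterative hard
thresholding is used as a standard practical surrogate for the "zero-norm"-penalized least-squares
reconstruction, and the toy dimensions, step size and noise level are our choices). The asymptotic
per-entry distortion of schemes of this kind is the quantity the RS/RSB replica analysis predicts.

```python
import numpy as np

def iht(A, y, k, iters=200, step=None):
    """Iterative hard thresholding for min ||y - A x||^2 subject to ||x||_0 <= k,
    a practical surrogate for the 'zero-norm'-penalized least-squares scheme."""
    n, N = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # gradient step below the Lipschitz bound
    x = np.zeros(N)
    for _ in range(iters):
        g = x + step * A.T @ (y - A @ x)         # gradient step on the quadratic term
        idx = np.argpartition(np.abs(g), -k)[-k:]
        x = np.zeros(N)
        x[idx] = g[idx]                          # keep the k largest entries
    return x

rng = np.random.default_rng(1)
N, n, k = 500, 250, 25
x0 = np.zeros(N); x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ x0 + 0.01 * rng.standard_normal(n)       # noisy measurements
x_hat = iht(A, y, k)
print("per-entry distortion:", np.mean((x_hat - x0) ** 2))
```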
Efficient high-dimensional entanglement imaging with a compressive sensing, double-pixel camera
We implement a double-pixel, compressive sensing camera to efficiently
characterize, at high resolution, the spatially entangled fields produced by
spontaneous parametric downconversion. This technique leverages sparsity in
spatial correlations between entangled photons to improve acquisition times
over raster-scanning by a scaling factor up to n^2/log(n) for n-dimensional
images. We image at resolutions up to 1024 dimensions per detector and
demonstrate a channel capacity of 8.4 bits per photon. By comparing the
classical mutual information in conjugate bases, we violate an entropic
Einstein-Podolsky-Rosen separability criterion for all measured resolutions.
More broadly, our result indicates compressive sensing can be especially
effective for higher-order measurements on correlated systems. Comment: 10 pages, 7 figures
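The entropic comparison in the two conjugate bases reduces to classical mutual-information
estimates from coincidence histograms. Below is a schematic sketch (ours, with synthetic toy
counts standing in for measured coincidence data; it does not reproduce the paper's criterion or
its 8.4 bits/photon figure) of how mutual information is computed from a joint histogram in each
basis.

```python
import numpy as np

def mutual_information_bits(counts):
    """Classical mutual information (bits) of a joint coincidence histogram."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# toy stand-in for measured data: correlated position-basis counts and
# anti-correlated momentum-basis counts on a d-dimensional grid
d = 32
rng = np.random.default_rng(2)
pos = np.eye(d) * 1000 + rng.poisson(1.0, (d, d))
mom = np.fliplr(np.eye(d)) * 1000 + rng.poisson(1.0, (d, d))
print("I_position =", round(mutual_information_bits(pos), 2), "bits")
print("I_momentum =", round(mutual_information_bits(mom), 2), "bits")
print("log2(d)    =", np.log2(d), "bits")
```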
Measure What Should be Measured: Progress and Challenges in Compressive Sensing
Is compressive sensing overrated? Or can it live up to our expectations? What
will come after compressive sensing and sparsity? And what has Galileo Galilei
got to do with it? Compressive sensing has taken the signal processing
community by storm. A large corpus of research devoted to the theory and
numerics of compressive sensing has been published in the last few years.
Moreover, compressive sensing has inspired and initiated intriguing new
research directions, such as matrix completion. Potential new applications
emerge at a dazzling rate. Yet some important theoretical questions remain
open, and seemingly obvious applications keep escaping the grip of compressive
sensing. In this paper I discuss some of the recent progress in compressive
sensing and point out key challenges and opportunities as the area of
compressive sensing and sparse representations keeps evolving. I also attempt
to assess the long-term impact of compressive sensing.
Compressed Sensing of Approximately-Sparse Signals: Phase Transitions and Optimal Reconstruction
Compressed sensing is designed to measure sparse signals directly in a
compressed form. However, most signals of interest are only "approximately
sparse", i.e. even though the signal contains only a small fraction of relevant
(large) components the other components are not strictly equal to zero, but are
only close to zero. In this paper we model the approximately sparse signal with
a Gaussian distribution of small components, and we study its compressed
sensing with dense random matrices. We use replica calculations to determine
the mean-squared error of the Bayes-optimal reconstruction for such signals, as
a function of the variance of the small components, the density of large
components and the measurement rate. We then use the G-AMP algorithm and
quantify the region of parameters for which this algorithm achieves optimality
(for large systems). Finally, we show that in the region where G-AMP with
homogeneous measurement matrices is not optimal, a special "seeding" design of
a spatially-coupled measurement matrix allows optimality to be restored. Comment: 8 pages, 10 figures
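For concreteness, here is a small sketch (ours; the variable names, unit large-component variance
and noise level are our choices) of the approximately-sparse signal model described above, a
two-Gaussian mixture with density rho of large components and variance eps for the small ones,
together with its Bayes-optimal scalar (MMSE) denoiser, the building block that a G-AMP-style
algorithm iterates.

```python
import numpy as np

def mmse_denoiser(r, tau2, rho=0.1, eps=0.01, v_big=1.0):
    """Posterior mean of x from r = x + N(0, tau2), under the approximately-sparse
    prior x ~ rho*N(0, v_big) + (1-rho)*N(0, eps)."""
    def gauss(r, v):                        # density N(r; 0, v)
        return np.exp(-r**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    w_big = rho * gauss(r, v_big + tau2)
    w_sml = (1 - rho) * gauss(r, eps + tau2)
    post_big = r * v_big / (v_big + tau2)   # per-component posterior means
    post_sml = r * eps / (eps + tau2)
    return (w_big * post_big + w_sml * post_sml) / (w_big + w_sml)

# draw an approximately sparse signal and denoise a noisy scalar observation of it
rng = np.random.default_rng(3)
N, rho, eps = 10000, 0.1, 0.01
big = rng.random(N) < rho
x = np.where(big, rng.normal(0, 1.0, N), rng.normal(0, np.sqrt(eps), N))
r = x + rng.normal(0, 0.2, N)               # pseudo-measurement with noise std 0.2
print("MSE before denoising:", np.mean((r - x) ** 2))
print("MSE after denoising: ", np.mean((mmse_denoiser(r, 0.04, rho, eps) - x) ** 2))
```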
Multi Terminal Probabilistic Compressed Sensing
In this paper, the `Approximate Message Passing' (AMP) algorithm, initially
developed for compressed sensing of signals under i.i.d. Gaussian measurement
matrices, has been extended to a multi-terminal setting (MAMP algorithm). It
has been shown that, similar to its single-terminal counterpart, the behavior of
the MAMP algorithm is fully characterized by a `State Evolution' (SE) equation
for large block lengths. This equation has been used to obtain the rate-distortion
curve of a multi-terminal memoryless source. It is observed that by spatially
coupling the measurement matrices, the rate-distortion curve of the MAMP algorithm
undergoes a phase transition, where the measurement rate region corresponding
to a low distortion (approximately zero distortion) regime is fully
characterized by the joint and conditional Renyi information dimension (RID) of
the multi-terminal source. This measurement rate region is very similar to the
rate region of the Slepian-Wolf distributed source coding problem where the RID
plays a role similar to the discrete entropy.
Simulations have been carried out to investigate the empirical behavior of the
MAMP algorithm. It is observed that the simulation results match the predictions
of the SE equation very well for reasonably large block lengths. Comment: 11 pages, 13 figures. arXiv admin note: text overlap with
arXiv:1112.0708 by other author
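The multi-terminal SE equation of the paper is not reproduced here; as background, the following
sketch (ours; the Bernoulli-Gaussian prior, soft-threshold denoiser and Monte Carlo evaluation are
our choices) runs the standard single-terminal state-evolution recursion and prints the
per-iteration MSE it predicts, which is the kind of scalar recursion an SE analysis tracks.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def state_evolution(delta, k_over_N, sigma=0.0, alpha=1.5, iters=20, n_mc=200_000):
    """Scalar SE recursion tau_{t+1}^2 = sigma^2 + (1/delta)*E[(eta(X + tau*Z) - X)^2]
    for a Bernoulli-Gaussian X and soft-threshold denoiser eta, evaluated by Monte Carlo."""
    rng = np.random.default_rng(4)
    mask = rng.random(n_mc) < k_over_N
    X = np.where(mask, rng.standard_normal(n_mc), 0.0)
    Z = rng.standard_normal(n_mc)
    tau2 = sigma**2 + np.mean(X**2) / delta            # initialization (estimate x_hat = 0)
    for t in range(iters):
        mse = np.mean((soft(X + np.sqrt(tau2) * Z, alpha * np.sqrt(tau2)) - X) ** 2)
        tau2 = sigma**2 + mse / delta
        print(f"iter {t:2d}:  predicted per-entry MSE = {mse:.6f}")
    return tau2

state_evolution(delta=0.4, k_over_N=0.04)
```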
Isotropically Random Orthogonal Matrices: Performance of LASSO and Minimum Conic Singular Values
Recently, the precise performance of the Generalized LASSO algorithm for
recovering structured signals from compressed noisy measurements, obtained via
i.i.d. Gaussian matrices, has been characterized. The analysis is based on a
framework introduced by Stojnic and heavily relies on the use of Gordon's
Gaussian min-max theorem (GMT), a comparison principle on Gaussian processes.
As a result, corresponding characterizations for other ensembles of measurement
matrices have not been developed. In this work, we analyze the corresponding
performance of the ensemble of isotropically random orthogonal (i.r.o.)
measurements. We consider the constrained version of the Generalized LASSO and
derive a sharp characterization of its normalized squared error in the
large-system limit. When compared to its Gaussian counterpart, our result
analytically confirms the superiority in performance of the i.r.o. ensemble.
Our second result derives an asymptotic lower bound on the minimum conic
singular values of i.r.o. matrices. This bound is larger than the corresponding
bound for Gaussian matrices. To prove our results we express i.r.o. matrices in
terms of Gaussians and show that, with some modifications, the GMT framework is
still applicable.
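To make the comparison concrete, here is a rough sketch (ours, not the paper's constrained
Generalized LASSO analysis; the ISTA solver, regularization level and toy dimensions are our
choices) that draws an isotropically random orthogonal measurement matrix via QR of a Gaussian
matrix and compares the LASSO reconstruction error against an i.i.d. Gaussian matrix of the same
size; on typical draws the i.r.o. error comes out somewhat lower, in line with the comparison the
paper makes precise.

```python
import numpy as np

def iro_matrix(n, N, rng):
    """Isotropically random (Haar) n x N matrix with orthonormal rows,
    drawn via QR of a Gaussian matrix with the standard sign correction."""
    G = rng.standard_normal((N, n))
    Q, R = np.linalg.qr(G)
    Q = Q * np.sign(np.diag(R))              # sign fix so the frame is Haar distributed
    return Q.T                               # n x N, rows orthonormal

def lasso_ista(A, y, lam, iters=500):
    """Plain ISTA for min 0.5*||y - A x||^2 + lam*||x||_1 (solver choice is ours)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + (A.T @ (y - A @ x)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

rng = np.random.default_rng(5)
N, n, k, sigma = 400, 200, 20, 0.05
x0 = np.zeros(N); x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
for name, A in [("gaussian", rng.standard_normal((n, N)) / np.sqrt(N)),
                ("i.r.o.  ", iro_matrix(n, N, rng))]:
    y = A @ x0 + sigma * rng.standard_normal(n)
    x_hat = lasso_ista(A, y, lam=2 * sigma)
    print(name, "relative squared error:",
          np.linalg.norm(x_hat - x0) ** 2 / np.linalg.norm(x0) ** 2)
```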