Optimal Phase Transitions in Compressed Sensing
Compressed sensing deals with efficient recovery of analog signals from
linear encodings. This paper presents a statistical study of compressed sensing
by modeling the input signal as an i.i.d. process with known distribution.
Three classes of encoders are considered, namely optimal nonlinear, optimal
linear and random linear encoders. Focusing on optimal decoders, we investigate
the fundamental tradeoff between measurement rate and reconstruction fidelity
gauged by error probability and noise sensitivity in the absence and presence
of measurement noise, respectively. The optimal phase transition threshold is
determined as a functional of the input distribution and compared to suboptimal
thresholds achieved by popular reconstruction algorithms. In particular, we
show that Gaussian sensing matrices incur no penalty on the phase transition
threshold with respect to optimal nonlinear encoding. Our results also provide
a rigorous justification of previous results based on replica heuristics in the
weak-noise regime.

Comment: to appear in IEEE Transactions on Information Theory
Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions
In this paper, we study the problem of compressed sensing using binary
measurement matrices and $\ell_1$-norm minimization (basis pursuit) as the
recovery algorithm. We derive new upper and lower bounds on the number of
measurements needed to achieve robust sparse recovery with binary matrices. We
establish sufficient conditions for a column-regular binary matrix to satisfy
the robust null space property (RNSP) and show that the associated sufficient
conditions for robust sparse recovery obtained using the RNSP
are better by a constant factor compared to the
sufficient conditions obtained using the restricted isometry property (RIP).
Next we derive universal \textit{lower} bounds on the number of measurements
that any binary matrix needs to have in order to satisfy the weaker sufficient
condition based on the RNSP and show that bipartite graphs of girth six are
optimal. Then we display two classes of binary matrices, namely parity check
matrices of array codes and Euler squares, which have girth six and are nearly
optimal in the sense of almost satisfying the lower bound. In principle,
randomly generated Gaussian measurement matrices are "order-optimal". So we
compare the phase transition behavior of the basis pursuit formulation using
binary array codes and Gaussian matrices and show that (i) there is essentially
no difference between the phase transition boundaries in the two cases and (ii)
the CPU time of basis pursuit with binary matrices is hundreds of times smaller
than with Gaussian matrices, and the storage requirements are lower. It is
therefore suggested that binary matrices are a viable alternative to Gaussian
matrices for compressed sensing using basis pursuit.

Comment: 28 pages, 3 figures, 5 tables
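Basis pursuit, the recovery algorithm studied above, recasts directly as a linear program: minimize the $\ell_1$ norm subject to the measurement constraints. A minimal sketch in Python; the random 0/1 matrix below is purely illustrative, not an array-code or Euler-square construction from the paper, and all problem sizes are arbitrary choices:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||x||_1 subject to A x = y, written as an LP
# over the split x = u - v with u, v >= 0.
rng = np.random.default_rng(0)

n, m, k = 30, 15, 2                            # signal length, measurements, sparsity
A = (rng.random((m, n)) < 0.2).astype(float)   # sparse 0/1 matrix (illustrative only)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true

c = np.ones(2 * n)            # objective: sum(u) + sum(v) equals ||u - v||_1 at optimum
A_eq = np.hstack([A, -A])     # A (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]  # recovered signal
```

Because the binary matrix is sparse, the matrix-vector products inside the solver touch far fewer entries than with a dense Gaussian matrix, which is the source of the speed and storage advantages reported above.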
Compressed Sensing of Approximately-Sparse Signals: Phase Transitions and Optimal Reconstruction
Compressed sensing is designed to measure sparse signals directly in a
compressed form. However, most signals of interest are only "approximately
sparse": even though the signal contains only a small fraction of relevant
(large) components, the other components are not strictly equal to zero, but
are only close to zero. In this paper we model the approximately sparse signal with
a Gaussian distribution of small components, and we study its compressed
sensing with dense random matrices. We use replica calculations to determine
the mean-squared error of the Bayes-optimal reconstruction for such signals, as
a function of the variance of the small components, the density of large
components and the measurement rate. We then use the G-AMP algorithm and we
quantify the region of parameters for which this algorithm achieves optimality
(for large systems). Finally, we show that in the region where G-AMP with
homogeneous measurement matrices is not optimal, a special "seeding" design of
a spatially coupled measurement matrix restores optimality.

Comment: 8 pages, 10 figures
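The signal model described above is easy to sample: each component is drawn from a zero-mean Gaussian whose variance is 1 with some probability (the large components) and a small value otherwise. A short sketch; the parameter names rho and eps are our own labels, not the paper's notation, and the values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def approx_sparse(n, rho=0.1, eps=1e-3):
    """Approximately sparse signal: a fraction rho of large Gaussian
    components, the rest Gaussian with small variance eps."""
    large = rng.random(n) < rho
    x = np.where(large,
                 rng.normal(0.0, 1.0, n),           # relevant (large) components
                 rng.normal(0.0, np.sqrt(eps), n))  # near-zero components
    return x, large

x, large = approx_sparse(10_000)

# Dense random measurements at rate alpha = m / n:
alpha = 0.3
m = int(alpha * x.size)
F = rng.normal(0.0, 1.0 / np.sqrt(x.size), size=(m, x.size))
y = F @ x
```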
Expander $\ell_0$-Decoding
We introduce two new algorithms, Serial-$\ell_0$ and Parallel-$\ell_0$, for
solving a large underdetermined linear system of equations $y = Ax$ when it is
known that $x$ has at most $k$ nonzero entries and that $A$ is the adjacency
matrix of an unbalanced left $d$-regular expander graph. The matrices in this
class are sparse and allow a highly efficient implementation. A number of
algorithms have been designed to work exclusively in this setting, composing
the branch of combinatorial compressed sensing (CCS).
Serial-$\ell_0$ and Parallel-$\ell_0$ iteratively minimise
$\|y - A\hat{x}\|_0$ by successfully combining two desirable features of
previous CCS algorithms: the information-preserving strategy of ER, and the
parallel updating mechanism of SMP. We are able to link these elements and
guarantee convergence in $\mathcal{O}(dn \log k)$ operations by assuming that
the signal $x$ is dissociated, meaning that all of the subset sums of the
support of $x$ are pairwise different. However, we observe empirically that the signal need
not be exactly dissociated in practice. Moreover, we observe Serial-$\ell_0$
and Parallel-$\ell_0$ to be able to solve large-scale problems with a larger
fraction of nonzeros than other algorithms when the number of measurements is
substantially less than the signal length; in particular, they are able to
reliably solve for a $k$-sparse vector from expander measurements at a
sparsity up to four times greater than what is
achievable by $\ell_1$-regularization from dense Gaussian measurements.
Additionally, Serial-$\ell_0$ and Parallel-$\ell_0$ are observed to be able to
solve large problem sizes in substantially less time than other algorithms for
compressed sensing. In particular, Parallel-$\ell_0$ is structured to take
advantage of massively parallel architectures.

Comment: 14 pages, 10 figures
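The dissociated condition from the abstract (all subset sums of the nonzero values pairwise different) can be checked by brute force for small supports. A sketch; the function name is ours, and the exponential cost makes this a sanity check only:

```python
from itertools import combinations

def is_dissociated(values):
    """True iff all subset sums of `values` are pairwise different.
    Brute force over 2^len(values) subsets, so small inputs only."""
    sums = set()
    total = 0
    for r in range(len(values) + 1):
        for subset in combinations(values, r):
            sums.add(sum(subset))
            total += 1
    return len(sums) == total

print(is_dissociated([1, 2, 4, 8]))  # powers of two: every subset sum is distinct
print(is_dissociated([1, 2, 3]))     # fails, since 1 + 2 == 3
```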
Probabilistic Reconstruction in Compressed Sensing: Algorithms, Phase Diagrams, and Threshold Achieving Matrices
Compressed sensing is a signal processing method that acquires data directly
in a compressed form. This allows one to make fewer measurements than was
considered necessary to record a signal, enabling faster or more precise
measurement protocols in a wide range of applications. Using an
interdisciplinary approach, we have recently proposed in [arXiv:1109.4424] a
strategy that allows compressed sensing to be performed at acquisition rates
approaching the theoretical optimal limits. In this paper, we give a more
thorough presentation of our approach, and introduce many new results. We
present the probabilistic approach to reconstruction and discuss its optimality
and robustness. We detail the derivation of the message passing algorithm for
reconstruction and expectation-maximization learning of signal-model
parameters. We further develop the asymptotic analysis of the corresponding
phase diagrams with and without measurement noise, for different distributions
of signals, and discuss the best possible reconstruction performances
regardless of the algorithm. We also present new efficient seeding matrices,
test them on synthetic data and analyze their performance asymptotically.

Comment: 42 pages, 37 figures, 3 appendices
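The seeding matrices mentioned above follow a spatially coupled block structure: the signal is cut into blocks, block (r, c) of the matrix carries Gaussian entries only inside a coupling window around the diagonal, and the first ("seed") block row has a higher measurement rate. A hedged sketch; the block counts, window width, and rates below are illustrative guesses, not the tuned values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def seeded_matrix(n, L=8, alpha_seed=0.6, alpha_bulk=0.3, w=1):
    """Block measurement matrix: Gaussian blocks only within a coupling
    window |r - c| <= w, with a denser seed block row at r = 0."""
    nb = n // L                        # columns per block
    block_rows = []
    for r in range(L):
        mb = int((alpha_seed if r == 0 else alpha_bulk) * nb)
        row = []
        for c in range(L):
            if abs(r - c) <= w:        # inside the coupling window
                row.append(rng.normal(0.0, 1.0 / np.sqrt(nb), (mb, nb)))
            else:                      # outside: exact zeros
                row.append(np.zeros((mb, nb)))
        block_rows.append(np.hstack(row))
    return np.vstack(block_rows)

F = seeded_matrix(800)
```

The seed block is measured at a higher rate so that reconstruction succeeds there first, and the coupling window lets that solution propagate block by block through the rest of the signal.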
Improving A*OMP: Theoretical and Empirical Analyses With a Novel Dynamic Cost Model
Best-first search has been recently utilized for compressed sensing (CS) by
the A* orthogonal matching pursuit (A*OMP) algorithm. In this work, we
concentrate on theoretical and empirical analyses of A*OMP. We present a
restricted isometry property (RIP) based general condition for exact recovery
of sparse signals via A*OMP. In addition, we develop online guarantees which
promise improved recovery performance with the residue-based termination
instead of the sparsity-based one. We demonstrate the recovery capabilities of
A*OMP with extensive simulations using the adaptive-multiplicative
(AMul) cost model, which effectively compensates for the path length
differences in the search tree. The presented results, involving phase
transitions for different nonzero element distributions as well as recovery
rates and average error, reveal not only the superior recovery accuracy of
A*OMP, but also the improvements with the residue-based termination and the
AMul cost model. Comparison of the run times indicates the speed-up achieved
by the AMul cost model. We also demonstrate a hybrid of OMP and A*OMP to
accelerate the search further. Finally, we run A*OMP on a sparse image to
illustrate its recovery performance for more realistic coefficient distributions.
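For context, the plain OMP baseline that A*OMP extends with best-first tree search fits in a few lines: greedily pick the column most correlated with the residual, then re-fit by least squares on the current support. A minimal sketch with the residue-based termination mentioned above; the tolerance and problem sizes are our choices:

```python
import numpy as np

def omp(A, y, max_iter, tol=1e-8):
    """Orthogonal matching pursuit with residue-based termination."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        if np.linalg.norm(residual) <= tol:         # residue-based stop
            break
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef                           # least-squares re-fit
        residual = y - A @ x
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 80))
A /= np.linalg.norm(A, axis=0)      # unit-norm columns
x_true = np.zeros(80)
x_true[[5, 17, 60]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, max_iter=10)
```

A*OMP replaces the single greedy choice per iteration with a best-first search over several candidate supports, which is where the cost model discussed above comes in.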