Robust phase retrieval with the swept approximate message passing (prSAMP) algorithm
In phase retrieval, the goal is to recover a complex signal from the
magnitude of its linear measurements. While many well-known algorithms
guarantee deterministic recovery of the unknown signal using i.i.d. random
measurement matrices, they suffer from serious convergence issues with some
ill-conditioned matrices. This happens, for example, in optical imagers using
binary intensity-only spatial light modulators to shape the input wavefront.
The problem of ill-conditioned measurement matrices has also been a topic of
interest for compressed sensing researchers during the past decade. In this
paper, building on recent advances in generic compressed sensing, we propose a
new phase retrieval algorithm that adapts well to both i.i.d. Gaussian and
binary matrices, for both sparse and dense input signals. The algorithm is also
robust to the strong noise levels found in some imaging applications.
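As a minimal sketch of the magnitude-only measurement model described above (assuming a random 0/1 matrix as a stand-in for a binary intensity-only modulator pattern; this is not the prSAMP algorithm), one can simulate measurements and run a naive alternating-projection baseline:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 256                          # signal length, number of measurements

# Binary 0/1 measurement matrix, mimicking an intensity-only modulator
A = rng.integers(0, 2, size=(m, n)).astype(float)

# Unknown complex signal and its magnitude-only measurements
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = np.abs(A @ x)

# Naive alternating-projection baseline (not prSAMP): keep the measured
# magnitudes, re-estimate the phases, project back by least squares
Apinv = np.linalg.pinv(A)
x_est = rng.normal(size=n) + 1j * rng.normal(size=n)
for _ in range(200):
    z = A @ x_est
    x_est = Apinv @ (y * np.exp(1j * np.angle(z)))
```

Baselines of this kind are precisely the methods that struggle on ill-conditioned matrices, which motivates the message-passing approach described in the abstract.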
Deterministic Constructions of Binary Measurement Matrices from Finite Geometry
Deterministic constructions of measurement matrices in compressed sensing
(CS) are considered in this paper. The constructions are inspired by the recent
discovery of Dimakis, Smarandache and Vontobel which says that parity-check
matrices of good low-density parity-check (LDPC) codes can be used as
provably good measurement matrices for compressed sensing under
$\ell_1$-minimization. The performance of the proposed binary measurement
matrices is analyzed mainly theoretically, using methods and results from
(finite geometry) LDPC codes. In particular, several lower bounds on the spark
(i.e., the smallest number of columns that are linearly dependent, a quantity
that completely characterizes the recovery performance of
$\ell_0$-minimization) are obtained for general binary matrices and finite
geometry matrices, improving the previously known results in most cases.
Simulation results show that the proposed matrices perform comparably to,
sometimes even better than, the corresponding Gaussian random matrices.
Moreover, the proposed matrices are sparse, binary, and most of them have a
cyclic or quasi-cyclic structure, which makes hardware realization convenient.
Comment: 12 pages, 11 figures
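Since the spark is central to the guarantees above, the following brute-force sketch makes the definition concrete (illustrative only: the search is combinatorial, so it is feasible only for tiny matrices; the example matrix is arbitrary, not a finite-geometry construction):

```python
import itertools
import numpy as np

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (brute force)."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            # A set of k columns is dependent iff its rank is below k
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1  # full column rank: spark conventionally n + 1 (infinite)

# Arbitrary small binary example
A = np.array([[1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 1, 0, 0]], dtype=float)
s = spark(A)
```

A spark of s guarantees that every vector with fewer than s/2 nonzeros is the unique sparsest solution consistent with its measurements.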
Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions
In this paper, we study the problem of compressed sensing using binary
measurement matrices and $\ell_1$-norm minimization (basis pursuit) as the
recovery algorithm. We derive new upper and lower bounds on the number of
measurements to achieve robust sparse recovery with binary matrices. We
establish sufficient conditions for a column-regular binary matrix to satisfy
the robust null space property (RNSP) and show that the associated sufficient
conditions for robust sparse recovery obtained using the RNSP
are sharper, by a constant factor, than the
sufficient conditions obtained using the restricted isometry property (RIP).
Next we derive universal \textit{lower} bounds on the number of measurements
that any binary matrix needs to have in order to satisfy the weaker sufficient
condition based on the RNSP and show that bipartite graphs of girth six are
optimal. Then we display two classes of binary matrices, namely parity check
matrices of array codes and Euler squares, which have girth six and are nearly
optimal in the sense of almost satisfying the lower bound. In principle,
randomly generated Gaussian measurement matrices are "order-optimal". So we
compare the phase transition behavior of the basis pursuit formulation using
binary array codes and Gaussian matrices and show that (i) there is essentially
no difference between the phase transition boundaries in the two cases and (ii)
basis pursuit runs hundreds of times faster with binary matrices than with
Gaussian matrices, and the storage requirements are lower. It is therefore
suggested that binary matrices are a viable alternative to Gaussian matrices
for compressed sensing using basis pursuit.
Comment: 28 pages, 3 figures, 5 tables
Deterministic Construction of Binary, Bipolar and Ternary Compressed Sensing Matrices
In this paper we establish a connection between Orthogonal Optical Codes
(OOC) and binary compressed sensing matrices. We also introduce deterministic
bipolar matrices that satisfy the RIP. The columns of these matrices are binary
BCH code vectors in which the zeros are replaced by -1. Since the RIP is
established by means of coherence,
the simple greedy algorithms such as Matching Pursuit are able to recover the
sparse solution from the noiseless samples. Due to the cyclic property of the
BCH codes, we show that the FFT algorithm can be employed in the reconstruction
methods to considerably reduce the computational complexity. In addition, we
combine the binary and bipolar matrices to form ternary sensing matrices (with
entries in $\{0, \pm 1\}$) that satisfy the RIP condition.
Comment: The paper has been accepted for publication in IEEE Transactions on
Information Theory
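The FFT speedup mentioned above comes from the fact that any circulant (cyclic) matrix is diagonalized by the DFT, so a matrix-vector product costs O(n log n) instead of O(n^2). The sketch below uses a random bipolar first column rather than an actual BCH codeword, purely to illustrate the mechanism:

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by x via the FFT:
    C x = IFFT(FFT(c) * FFT(x)), i.e. circular convolution of c and x."""
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

rng = np.random.default_rng(2)
n = 8
c = rng.choice([-1.0, 1.0], size=n)                 # bipolar first column
C = np.array([np.roll(c, j) for j in range(n)]).T   # explicit circulant matrix
x = rng.normal(size=n)
```

The FFT route and the explicit product C @ x agree, which is what reconstruction methods exploit for cyclic BCH-based matrices.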
Composition of Binary Compressed Sensing Matrices
In the recent past, various methods have been proposed to construct deterministic compressed sensing (CS) matrices. Of particular interest has been the construction of binary sensing matrices, as they are useful for multiplierless and faster dimensionality reduction. In most of these binary constructions, the matrix size depends on primes or their powers. In this study, we propose a composition rule which exploits the sparsity and block structure of existing binary CS matrices to construct matrices of general size. We also show that these matrices satisfy optimal theoretical guarantees and have density similar to that of matrices obtained using the Kronecker product. Simulation results show that the synthesized matrices perform comparably to Gaussian random matrices.
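For context, the Kronecker-product baseline that the abstract compares against composes two binary matrices into a larger binary matrix whose density is exactly the product of the input densities; the paper's own composition rule is different, so this is only the reference construction:

```python
import numpy as np

rng = np.random.default_rng(3)
A = (rng.random((6, 9)) < 0.3).astype(int)   # small binary CS matrix
B = (rng.random((4, 8)) < 0.3).astype(int)   # another binary matrix

K = np.kron(A, B)      # (6*4) x (9*8) = 24 x 72, entries still in {0, 1}
density = K.mean()     # equals A.mean() * B.mean() exactly
```

Note that Kronecker products can only reach sizes that factor as products of the input dimensions, which is one motivation for a composition rule yielding matrices of general size.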
Polarization of the Rényi Information Dimension with Applications to Compressed Sensing
In this paper, we show that the Hadamard matrix acts as an extractor over the
reals of the Rényi information dimension (RID), analogous to the way it acts
as an extractor of discrete entropy over finite fields. More
precisely, we prove that the RID of an i.i.d. sequence of mixture random
variables polarizes to the extremal values of 0 and 1 (corresponding to
discrete and continuous distributions) when transformed by a Hadamard matrix.
Further, we prove that the polarization pattern of the RID admits a closed form
expression and follows exactly the Binary Erasure Channel (BEC) polarization
pattern in the discrete setting. We also extend the results from the single- to
the multi-terminal setting, obtaining a Slepian-Wolf counterpart of the RID
polarization. We discuss applications of the RID polarization to Compressed
Sensing of i.i.d. sources. In particular, we use the RID polarization to
construct a family of deterministic $\pm 1$-valued sensing matrices for
Compressed Sensing. We run numerical simulations to compare the performance of
the resulting matrices with that of random Gaussian and random Hadamard
matrices. The results indicate that the proposed matrices afford competitive
performance while being explicitly constructed.
Comment: 12 pages, 2 figures
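Since the construction above is built from rows of a Hadamard matrix, a fast Walsh-Hadamard transform gives the matching O(n log n) measurement operator. This sketch shows only the transform (in Sylvester ordering), not the RID polarization analysis:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform in O(n log n); len(x) must be a power
    of 2. Equivalent to multiplying by the unnormalized +/-1 Sylvester
    Hadamard matrix."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):       # butterflies over blocks of size 2h
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x
```

Selecting a subset of output coordinates of this transform corresponds to sensing with a deterministic submatrix of the Hadamard matrix.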