    Universal Compressed Sensing

    In this paper, the problem of developing universal algorithms for compressed sensing of stochastic processes is studied. First, Rényi's notion of information dimension (ID) is generalized to analog stationary processes. This provides a measure of complexity for such processes and is connected to the number of measurements required for their accurate recovery. Then a minimum entropy pursuit (MEP) optimization approach is proposed, and it is proven that it can reliably recover any stationary process satisfying certain mixing conditions from a sufficient number of randomized linear measurements, without any prior information about the distribution of the process. It is proved that a Lagrangian-type approximation of the MEP optimization problem, referred to as the Lagrangian-MEP problem, is identical to an implementable heuristic algorithm proposed by Baron et al. It is shown that, for the right choice of parameters, the Lagrangian-MEP algorithm, in addition to having the same asymptotic performance as MEP optimization, is also robust to measurement noise. For memoryless sources with a discrete-continuous mixture distribution, the fundamental limit on the minimum number of measurements required by a non-universal compressed sensing decoder was characterized by Wu et al. For such sources, it is proved that there is no loss due to universality: both MEP and Lagrangian-MEP asymptotically achieve the optimal performance.
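
    For intuition, Rényi's information dimension of a scalar source X can be computed as the limit of H(⌊mX⌋)/log m as the quantization level m grows, and for a discrete-continuous mixture with continuous weight γ it equals γ. The following minimal numerical sketch (not taken from the paper; all names and parameter values are illustrative) estimates this ratio for a spike-and-slab mixture using NumPy.

        import numpy as np

        def empirical_entropy(samples):
            # Plug-in Shannon entropy (in nats) of a discrete sample.
            _, counts = np.unique(samples, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log(p))

        def estimated_id(x, m):
            # Estimate H(floor(m X)) / log(m); as m grows this ratio approaches
            # the information dimension of the source that generated x.
            return empirical_entropy(np.floor(m * x)) / np.log(m)

        rng = np.random.default_rng(0)
        gamma, n = 0.3, 500_000                                  # weight of the continuous part
        is_continuous = rng.random(n) < gamma
        x = np.where(is_continuous, rng.uniform(size=n), 0.0)    # spike-and-slab mixture

        for m in (2**6, 2**10, 2**14):
            print(m, estimated_id(x, m))                         # slowly approaches gamma = 0.3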

    New Uniform Bounds for Almost Lossless Analog Compression

    Wu and Verdú developed a theory of almost lossless analog compression, where one imposes various regularity conditions on the compressor and the decompressor, with the input signal modelled by a (typically infinite-entropy) stationary stochastic process. In this work we consider all stationary stochastic processes with trajectories in a prescribed set $\mathcal{S} \subset [0,1]^{\mathbb{Z}}$ of (bi-)infinite sequences and find uniform lower and upper bounds for certain compression rates in terms of metric mean dimension and mean box dimension. An essential tool is the recent Lindenstrauss-Tsukamoto variational principle expressing metric mean dimension in terms of rate-distortion functions.
    Comment: This paper is to be presented at the 2019 IEEE International Symposium on Information Theory. It is a short version of arXiv:1812.0045
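
    For intuition about the dimension quantities appearing in these bounds, the sketch below (purely illustrative, not part of the paper) estimates the ordinary box-counting dimension of a finite sample of points by fitting log N(ε) against log(1/ε); the mean box dimension referred to above is, roughly, a per-coordinate normalization of this kind of quantity for sets of sequences. The Cantor-like example and all names are hypothetical.

        import numpy as np

        def box_dimension_estimate(points, scales):
            # Crude box-counting estimate: count occupied boxes N(eps) at each
            # scale eps and fit log N(eps) against log(1/eps); the slope
            # approximates the (upper) box dimension of the underlying set.
            log_inv, log_n = [], []
            for eps in scales:
                boxes = {tuple(b) for b in np.floor(points / eps).astype(int)}
                log_inv.append(np.log(1.0 / eps))
                log_n.append(np.log(len(boxes)))
            slope, _ = np.polyfit(log_inv, log_n, 1)
            return slope

        # Points sampled from a middle-thirds Cantor-like set in [0, 1], whose
        # box dimension is log 2 / log 3 ~ 0.63.
        rng = np.random.default_rng(0)
        digits = 2 * rng.integers(0, 2, size=(20_000, 12))       # ternary digits in {0, 2}
        points = (digits * 3.0 ** -np.arange(1, 13)).sum(axis=1)[:, None]

        print(box_dimension_estimate(points, scales=[3.0 ** -k for k in range(1, 7)]))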

    Polarization of the Renyi Information Dimension with Applications to Compressed Sensing

    In this paper, we show that the Hadamard matrix acts as an extractor over the reals of the Rényi information dimension (RID), in an analogous way to how it acts as an extractor of discrete entropy over finite fields. More precisely, we prove that the RID of an i.i.d. sequence of mixture random variables polarizes to the extremal values of 0 and 1 (corresponding to discrete and continuous distributions) when transformed by a Hadamard matrix. Further, we prove that the polarization pattern of the RID admits a closed-form expression and follows exactly the Binary Erasure Channel (BEC) polarization pattern of the discrete setting. We also extend the results from the single- to the multi-terminal setting, obtaining a Slepian-Wolf counterpart of RID polarization. We discuss applications of RID polarization to Compressed Sensing of i.i.d. sources. In particular, we use RID polarization to construct a family of deterministic $\pm 1$-valued sensing matrices for Compressed Sensing. We run numerical simulations to compare the performance of the resulting matrices with that of random Gaussian and random Hadamard matrices. The results indicate that the proposed matrices offer competitive performance while being explicitly constructed.
    Comment: 12 pages, 2 figures
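
    To make the deterministic construction concrete, here is a minimal sketch of forming a ±1-valued sensing matrix from the rows of a Sylvester-type Hadamard matrix. It is not the paper's construction: the paper selects rows according to the RID polarization pattern, whereas below the first m rows are kept purely for illustration, and all names are hypothetical.

        import numpy as np

        def sylvester_hadamard(n):
            # Sylvester construction: H_1 = [1], H_{2k} = [[H_k, H_k], [H_k, -H_k]],
            # yielding an n x n matrix with +/-1 entries for n a power of two.
            assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
            H = np.array([[1]])
            while H.shape[0] < n:
                H = np.block([[H, H], [H, -H]])
            return H

        # Deterministic +/-1 sensing matrix from Hadamard rows (the row choice
        # here is illustrative; the paper picks rows via the polarization pattern).
        n, m = 256, 64
        A = sylvester_hadamard(n)[:m, :] / np.sqrt(m)

        x = np.zeros(n)
        x[[3, 57, 120]] = [1.0, -2.0, 0.5]       # a sparse test signal
        y = A @ x                                # m linear measurements of x
        print(y.shape)                           # (64,)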

    Efficient and Robust Compressed Sensing Using Optimized Expander Graphs

    Expander graphs have recently been proposed for constructing efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper, we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue, with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to obtain explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and simplicity of recovery. Finally, we show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal close to the best k-term approximation of the original signal.
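
    As a rough illustration of this style of recovery, the toy sketch below implements a simplified voting/peeling decoder over a sparse 0/1 measurement matrix. It is only a stand-in for the paper's algorithm: it uses a random left-regular bipartite graph rather than an optimized expander, scans all coordinates instead of maintaining a priority queue, and assumes an exactly k-sparse integer signal; all names and parameters are illustrative.

        import numpy as np
        from collections import Counter

        def left_regular_matrix(m, n, d, rng):
            # 0/1 biadjacency matrix: each signal coordinate connects to d
            # measurement nodes chosen at random (a stand-in for an expander).
            A = np.zeros((m, n), dtype=int)
            for j in range(n):
                A[rng.choice(m, size=d, replace=False), j] = 1
            return A

        def voting_recovery(A, y, max_iter=200):
            # Toy recovery for an exactly k-sparse signal: repeatedly pick a
            # coordinate j whose measurement neighbours mostly agree on a common
            # nonzero residual value g, set x_j += g, and subtract g from those
            # residuals (the paper achieves this with a priority queue).
            m, n = A.shape
            nbrs = [np.flatnonzero(A[:, j]) for j in range(n)]
            d = len(nbrs[0])
            x, r = np.zeros(n), y.astype(float).copy()
            for _ in range(max_iter):
                if np.allclose(r, 0):
                    break
                updated = False
                for j in range(n):
                    g, cnt = Counter(np.round(r[nbrs[j]], 9)).most_common(1)[0]
                    if g != 0 and cnt > d // 2:
                        x[j] += g
                        r[nbrs[j]] -= g
                        updated = True
                if not updated:
                    break                      # no confident vote left; stop
            return x

        rng = np.random.default_rng(1)
        n, k, d = 2000, 10, 11
        m = 6 * k * int(np.log2(n))            # O(k log n) measurements
        A = left_regular_matrix(m, n, d, rng)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.integers(1, 10, size=k)
        x_hat = voting_recovery(A, A @ x_true)
        print(np.allclose(x_hat, x_true))      # True on most random instances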