
    Lossless Analog Compression

    We establish the fundamental limits of lossless analog compression by considering the recovery of arbitrary m-dimensional real random vectors x from the noiseless linear measurements y = Ax with n × m measurement matrix A. Our theory is inspired by the groundbreaking work of Wu and Verd\'u (2010) on almost lossless analog compression, but applies to the nonasymptotic, i.e., fixed-m, case and considers zero error probability. Specifically, our achievability result states that, for almost all A, the random vector x can be recovered with zero error probability provided that n > K(x), where K(x) is given by the infimum of the lower modified Minkowski dimension over all support sets U of x. We then particularize this achievability result to the class of s-rectifiable random vectors as introduced in Koliander et al. (2016); these are random vectors of absolutely continuous distribution---with respect to the s-dimensional Hausdorff measure---supported on countable unions of s-dimensional differentiable submanifolds of the m-dimensional real coordinate space. Countable unions of differentiable submanifolds include essentially all signal models used in the compressed sensing literature. Specifically, we prove that, for almost all A, s-rectifiable random vectors x can be recovered with zero error probability from n > s linear measurements. This threshold is, however, not tight, as exemplified by the construction of an s-rectifiable random vector that can be recovered with zero error probability from n < s linear measurements. This leads us to introduce the new class of s-analytic random vectors, which admit a strong converse in the sense that n ≥ s is necessary for recovery with probability of error smaller than one. The central conceptual tools in the development of our theory are geometric measure theory and the theory of real analytic functions.
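
    For reference, the threshold K(x) above is built from the lower Minkowski (box-counting) dimension and its modified variant. A standard formulation from fractal geometry (our notation; see, e.g., Falconer's textbook treatment) is:

        % N_U(eps): minimal number of eps-balls needed to cover the nonempty
        % bounded set U ⊆ R^m. Lower Minkowski (box-counting) dimension:
        \underline{\dim}_{\mathrm{B}}(U) = \liminf_{\varepsilon \to 0} \frac{\log N_U(\varepsilon)}{\log(1/\varepsilon)}
        % Lower *modified* Minkowski dimension: regularize via countable covers
        % by bounded sets U_i,
        \underline{\dim}_{\mathrm{MB}}(U) = \inf\Big\{ \sup_{i \in \mathbb{N}} \underline{\dim}_{\mathrm{B}}(U_i) : U \subseteq \textstyle\bigcup_{i \in \mathbb{N}} U_i \Big\}
        % so that, in the notation above, K(x) = inf { dim_MB(U) : P[x ∈ U] = 1 }.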

    Lossless Linear Analog Compression

    We establish the fundamental limits of lossless linear analog compression by considering the recovery of random vectors $\mathsf{x} \in \mathbb{R}^m$ from the noiseless linear measurements $\mathsf{y} = A\mathsf{x}$ with measurement matrix $A \in \mathbb{R}^{n \times m}$. Specifically, for a random vector $\mathsf{x} \in \mathbb{R}^m$ of arbitrary distribution we show that $\mathsf{x}$ can be recovered with zero error probability from $n > \inf \underline{\dim}_{\mathrm{MB}}(U)$ linear measurements, where $\underline{\dim}_{\mathrm{MB}}(\cdot)$ denotes the lower modified Minkowski dimension and the infimum is over all sets $U \subseteq \mathbb{R}^m$ with $\mathbb{P}[\mathsf{x} \in U] = 1$. This achievability statement holds for Lebesgue almost all measurement matrices $A$. We then show that $s$-rectifiable random vectors---a stochastic generalization of $s$-sparse vectors---can be recovered with zero error probability from $n > s$ linear measurements. From classical compressed sensing theory we would expect $n \geq s$ to be necessary for successful recovery of $\mathsf{x}$. Surprisingly, certain classes of $s$-rectifiable random vectors can be recovered from fewer than $s$ measurements. Imposing an additional regularity condition on the distribution of $s$-rectifiable random vectors $\mathsf{x}$, we do get the expected converse result of $s$ measurements being necessary. The resulting class of random vectors appears to be new and will be referred to as $s$-analytic random vectors.
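
    As a concrete toy illustration of the $n > s$ achievability in the special case of exactly $s$-sparse vectors (a minimal sketch with our own parameter choices and decoder, not the paper's proof technique), zero-error recovery from $n = s + 1$ generic measurements can be done by exhaustive search over supports:

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)

        m, s = 12, 2          # ambient dimension and sparsity (illustrative choices)
        n = s + 1             # n > s measurements suffice for almost all A

        A = rng.standard_normal((n, m))   # generic (here: Gaussian) measurement matrix

        # Draw an s-sparse x: support uniform at random, nonzeros standard normal.
        support = rng.choice(m, size=s, replace=False)
        x = np.zeros(m)
        x[support] = rng.standard_normal(s)

        y = A @ x             # noiseless linear measurements

        def decode(y, A, s, tol=1e-9):
            """Zero-error decoder for exactly s-sparse signals: try every candidate
            support, solve the restricted least-squares problem, accept an exact fit."""
            _, m = A.shape
            for T in itertools.combinations(range(m), s):
                idx = list(T)
                cols = A[:, idx]
                z, *_ = np.linalg.lstsq(cols, y, rcond=None)
                if np.linalg.norm(cols @ z - y) < tol:
                    x_hat = np.zeros(m)
                    x_hat[idx] = z
                    return x_hat
            return None

        x_hat = decode(y, A, s)
        print(np.allclose(x_hat, x))   # True for almost all A

    For almost all draws of A the exact-fit support is unique, so this decoder is zero-error for the s-sparse model; the search is exponential in m and serves only to illustrate the information-theoretic threshold.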

    Almost Lossless Analog Signal Separation

    We propose an information-theoretic framework for analog signal separation. Specifically, we consider the problem of recovering two analog signals from a noiseless sum of linear measurements of the signals. Our framework is inspired by the groundbreaking work of Wu and Verd\'u (2010) on almost lossless analog compression. The main results of the present paper are a general achievability bound for the compression rate in the analog signal separation problem, an exact expression for the optimal compression rate in the case of signals that have mixed discrete-continuous distributions, and a new technique for showing that the intersection of generic subspaces with subsets of sufficiently small Minkowski dimension is empty. This technique can also be applied to obtain a simplified proof of a key result in Wu and Verd\'u (2010). Comment: To be presented at IEEE Int. Symp. Inf. Theory 2013, Istanbul, Turkey.
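
    The subspace-intersection technique mentioned above rests on a dimension count; in rough, hedged paraphrase (the paper states the precise hypotheses):

        % ker(A) of a generic A ∈ R^{n×m} is an (m-n)-dimensional subspace, so
        % for a set S ⊆ R^m (excluding the origin) of small Minkowski dimension,
        \underline{\dim}_{\mathrm{B}}(S) + (m - n) < m
        \quad\Longleftrightarrow\quad
        \underline{\dim}_{\mathrm{B}}(S) < n
        % is the regime where one expects ker(A) ∩ S = ∅ for Lebesgue-a.a. A;
        % applied to a difference set of the signal model, this yields
        % injectivity of x ↦ Ax on the model set.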

    Universal Compressed Sensing

    In this paper, the problem of developing universal algorithms for compressed sensing of stochastic processes is studied. First, R\'enyi's notion of information dimension (ID) is generalized to analog stationary processes. This provides a measure of complexity for such processes and is connected to the number of measurements required for their accurate recovery. Then a minimum entropy pursuit (MEP) optimization approach is proposed, and it is proven that MEP can reliably recover any stationary process satisfying certain mixing constraints from a sufficient number of randomized linear measurements, without any prior information about the distribution of the process. It is proved that a Lagrangian-type approximation of the MEP optimization problem, referred to as the Lagrangian-MEP problem, is identical to a heuristic implementable algorithm proposed by Baron et al. It is shown that, for the right choice of parameters, the Lagrangian-MEP algorithm not only has the same asymptotic performance as MEP optimization but is also robust to measurement noise. For memoryless sources with a discrete-continuous mixture distribution, the fundamental limit on the minimum number of measurements required by a non-universal compressed sensing decoder was characterized by Wu et al. For such sources, it is proved that there is no loss in universal coding, and both MEP and Lagrangian-MEP asymptotically achieve the optimal performance.
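
    For context, R\'enyi's information dimension of a scalar random variable X, which the paper generalizes to stationary processes, is defined as follows (standard definition; the mixture fact below is the one underlying the Wu et al. limit cited above):

        % Quantize X to resolution 1/m and measure the entropy growth rate:
        d(X) = \lim_{m \to \infty} \frac{H(\langle X \rangle_m)}{\log m},
        \qquad \langle X \rangle_m = \frac{\lfloor m X \rfloor}{m}
        % For a discrete-continuous mixture P_X = (1-\rho) P_d + \rho P_c with
        % H(\lfloor X \rfloor) < \infty, the limit exists and d(X) = \rho, which
        % matches the optimal asymptotic measurement rate for such sources.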

    New Uniform Bounds for Almost Lossless Analog Compression

    Wu and Verd\'u developed a theory of almost lossless analog compression, where one imposes various regularity conditions on the compressor and the decompressor, with the input signal modelled by a (typically infinite-entropy) stationary stochastic process. In this work we consider all stationary stochastic processes with trajectories in a prescribed set $\mathcal{S} \subset [0,1]^{\mathbb{Z}}$ of (bi)infinite sequences and find uniform lower and upper bounds for certain compression rates in terms of metric mean dimension and mean box dimension. An essential tool is the recent Lindenstrauss-Tsukamoto variational principle expressing metric mean dimension in terms of rate-distortion functions. Comment: This paper will be presented at the 2019 IEEE International Symposium on Information Theory. It is a short version of arXiv:1812.0045
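
    In one of its forms, the Lindenstrauss-Tsukamoto variational principle reads as follows (our hedged paraphrase; see their paper for the exact distortion and regularity conventions):

        % sup over T-invariant probability measures \mu on \mathcal{X};
        % R_\mu(\varepsilon) is the rate-distortion function of the stationary
        % process generated by (T, \mu) at distortion level \varepsilon:
        \overline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X}, T, d)
        = \limsup_{\varepsilon \to 0} \, \sup_{\mu} \, \frac{R_\mu(\varepsilon)}{\log(1/\varepsilon)}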

    Almost Lossless Analog Compression without Phase Information

    We propose an information-theoretic framework for phase retrieval. Specifically, we consider the problem of recovering an unknown n-dimensional vector x, up to an overall sign factor, from m = Rn phaseless measurements with compression rate R, and derive a general achievability bound for R. Surprisingly, it turns out that this bound on the compression rate is the same as the one for almost lossless analog compression obtained by Wu and Verd\'u (2010): phaseless linear measurements are as good as linear measurements with full phase information in the sense that ignoring the sign of m measurements only leaves us with an ambiguity with respect to an overall sign factor of x.
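
    A brute-force toy sketch (our own construction, not the paper's framework) of why, for real signals and generic measurements, the global sign is the only ambiguity: guess the m measurement signs and keep the unique consistent linear system.

        import itertools
        import numpy as np

        rng = np.random.default_rng(1)

        n, m = 4, 9                      # signal dimension, number of measurements
        A = rng.standard_normal((m, n))  # generic real measurement matrix
        x = rng.standard_normal(n)

        y = np.abs(A @ x)                # phaseless (sign-less) measurements

        def recover(y, A, tol=1e-8):
            """Try every sign pattern s; the true pattern (and its negation)
            makes the overdetermined system A z = s * y exactly consistent."""
            m, _ = A.shape
            for signs in itertools.product([-1.0, 1.0], repeat=m):
                s = np.asarray(signs)
                z, *_ = np.linalg.lstsq(A, s * y, rcond=None)
                if np.linalg.norm(A @ z - s * y) < tol:
                    return z
            return None

        x_hat = recover(y, A)
        # Success means recovery up to a global sign factor.
        print(np.allclose(x_hat, x) or np.allclose(x_hat, -x))

    For almost all A, the correct sign pattern and its negation are the only consistent ones, yielding x or -x; the 2^m search is purely illustrative.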

    Polarization of the Renyi Information Dimension with Applications to Compressed Sensing

    In this paper, we show that the Hadamard matrix acts as an extractor over the reals of the R\'enyi information dimension (RID), in an analogous way to how it acts as an extractor of the discrete entropy over finite fields. More precisely, we prove that the RID of an i.i.d. sequence of mixture random variables polarizes to the extremal values of 0 and 1 (corresponding to discrete and continuous distributions) when transformed by a Hadamard matrix. Further, we prove that the polarization pattern of the RID admits a closed-form expression and follows exactly the Binary Erasure Channel (BEC) polarization pattern in the discrete setting. We also extend the results from the single- to the multi-terminal setting, obtaining a Slepian-Wolf counterpart of the RID polarization. We discuss applications of the RID polarization to compressed sensing of i.i.d. sources. In particular, we use the RID polarization to construct a family of deterministic $\pm 1$-valued sensing matrices for compressed sensing. We run numerical simulations to compare the performance of the resulting matrices with that of random Gaussian and random Hadamard matrices. The results indicate that the proposed matrices afford competitive performance while being explicitly constructed. Comment: 12 pages, 2 figures.
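
    Since the polarization pattern is stated to follow the BEC recursion exactly, its limiting behavior can be previewed with the classical erasure-probability iteration (a sketch under that stated correspondence; the source RID, i.e., the weight on the continuous mixture component, plays the role of the erasure probability):

        import numpy as np

        def polarize(d0: float, levels: int) -> np.ndarray:
            """Apply the BEC-style polarization recursion `levels` times.

            Starting from a single value d0, each step maps every value d to
            the pair (2d - d**2, d**2), as in binary erasure channel
            polarization. Returns the 2**levels polarized values.
            """
            d = np.array([d0])
            for _ in range(levels):
                d = np.concatenate([2 * d - d**2, d**2])
            return d

        d = polarize(d0=0.3, levels=12)
        # The values concentrate near 0 (discrete) and 1 (continuous); the
        # fraction near 1 approaches d0 as the number of levels grows.
        print((d > 0.99).mean(), (d < 0.01).mean())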

    Joint source-channel coding with feedback

    This paper quantifies the fundamental limits of variable-length transmission of a general (possibly analog) source over a memoryless channel with noiseless feedback, under a distortion constraint. We consider excess distortion, average distortion and guaranteed distortion ($d$-semifaithful codes). In contrast to the asymptotic fundamental limit, a general conclusion is that allowing variable-length codes and feedback leads to a sizable improvement in the fundamental delay-distortion tradeoff. In addition, we investigate the minimum energy required to reproduce $k$ source samples with a given fidelity after transmission over a memoryless Gaussian channel, and we show that the required minimum energy is reduced with feedback and an average (rather than maximal) power constraint. Comment: To appear in IEEE Transactions on Information Theory.
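
    As a point of comparison for the minimum-energy result (a classical asymptotic benchmark, not the paper's nonasymptotic characterization): over an AWGN channel with one-sided noise spectral density $N_0$, the minimum energy per information bit is $N_0 \ln 2$, so reproducing $k$ samples at distortion $d$ costs, to first order,

        E^*(k, d) \approx k \, R(d) \, N_0 \ln 2 \qquad (k \to \infty),
        % R(d): the source's rate-distortion function in bits per sample;
        % N_0 \ln 2: minimum energy per bit (E_b/N_0 = \ln 2, about -1.59 dB).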