
    New Uniform Bounds for Almost Lossless Analog Compression

    Wu and Verd\'u developed a theory of almost lossless analog compression, where one imposes various regularity conditions on the compressor and the decompressor, with the input signal being modelled by a (typically infinite-entropy) stationary stochastic process. In this work we consider all stationary stochastic processes with trajectories in a prescribed set S ⊂ [0,1]^Z of (bi-)infinite sequences and find uniform lower and upper bounds for certain compression rates in terms of metric mean dimension and mean box dimension. An essential tool is the recent Lindenstrauss-Tsukamoto variational principle expressing metric mean dimension in terms of rate-distortion functions. Comment: This paper is going to be presented at the 2019 IEEE International Symposium on Information Theory. It is a short version of arXiv:1812.0045
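
    As background (our paraphrase in standard notation; the paper may state an equivalent but differently worded definition), the upper metric mean dimension of a compact metric space (X, d) with a homeomorphism T is built from covering numbers at scale Δ along the dynamics:

        d_n(x,y) = \max_{0 \le k < n} d(T^k x, T^k y), \qquad
        \operatorname{mdim}_M(X,T,d) = \limsup_{\varepsilon \to 0} \frac{1}{\log(1/\varepsilon)} \lim_{n \to \infty} \frac{\log \#(X, d_n, \varepsilon)}{n},

    where #(X, d_n, Δ) is the minimal cardinality of an Δ-cover of X in the metric d_n. Dropping the normalization by log(1/Δ) recovers topological entropy, which is why metric mean dimension is the natural substitute for infinite-entropy systems.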

    Lossless Linear Analog Compression

    We establish the fundamental limits of lossless linear analog compression by considering the recovery of random vectors x ∈ R^m from the noiseless linear measurements y = Ax with measurement matrix A ∈ R^{n×m}. Specifically, for a random vector x ∈ R^m of arbitrary distribution we show that x can be recovered with zero error probability from n > inf dim_MB(U) linear measurements, where dim_MB(·) denotes the lower modified Minkowski dimension and the infimum is over all sets U ⊆ R^m with P[x ∈ U] = 1. This achievability statement holds for Lebesgue almost all measurement matrices A. We then show that s-rectifiable random vectors---a stochastic generalization of s-sparse vectors---can be recovered with zero error probability from n > s linear measurements. From classical compressed sensing theory we would expect n ≄ s to be necessary for successful recovery of x. Surprisingly, certain classes of s-rectifiable random vectors can be recovered from fewer than s measurements. Imposing an additional regularity condition on the distribution of s-rectifiable random vectors x, we do get the expected converse result of s measurements being necessary. The resulting class of random vectors appears to be new and will be referred to as s-analytic random vectors.
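
    To make the n > s threshold concrete in the simplest rectifiable case (s-sparse vectors with continuously distributed nonzero entries), the following sketch recovers such a vector from n = s + 1 generic Gaussian measurements by exhaustive search over supports. This illustrative decoder is our own construction for small dimensions, not the argument used in the paper; its failure events have probability zero.

        # Illustrative sketch (not the paper's method): zero-error recovery of an
        # s-sparse vector from n = s + 1 generic linear measurements by brute-force
        # search over all candidate supports.  Feasible only for small m and s.
        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        m, s = 12, 3               # ambient dimension and sparsity level
        n = s + 1                  # just above the threshold s

        support = rng.choice(m, size=s, replace=False)
        x = np.zeros(m)
        x[support] = rng.standard_normal(s)      # continuously distributed nonzeros

        A = rng.standard_normal((n, m))           # Gaussian matrix as a "generic" choice
        y = A @ x                                 # noiseless measurements

        def recover(y, A, s, tol=1e-9):
            """Return an s-sparse vector consistent with y = A x_hat (unique with probability one)."""
            _, m = A.shape
            for T in itertools.combinations(range(m), s):
                cols = list(T)
                coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
                if np.linalg.norm(A[:, cols] @ coef - y) < tol:   # exact fit: consistent support found
                    x_hat = np.zeros(m)
                    x_hat[cols] = coef
                    return x_hat
            return None

        print("recovery error:", np.linalg.norm(recover(y, A, s) - x))   # ~0 almost surely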

    Compression-Based Compressed Sensing

    Modern compression algorithms exploit complex structures that are present in signals to describe them very efficiently. On the other hand, the field of compressed sensing is built upon the observation that "structured" signals can be recovered from an under-determined set of their linear projections. Currently, there is a large gap between the complexity of the structures studied in the area of compressed sensing and those employed by state-of-the-art compression codes. Recent results in the literature on deterministic signals aim at bridging this gap by devising compressed sensing decoders that employ compression codes. This paper focuses on structured stochastic processes and studies the application of rate-distortion codes to compressed sensing of such signals. The performance of the previously proposed compressible signal pursuit (CSP) algorithm is studied in this stochastic setting. It is proved that in the very low distortion regime, as the blocklength grows to infinity, the CSP algorithm reliably and robustly recovers n instances of a stationary process from random linear projections as long as their count is slightly more than n times the rate-distortion dimension (RDD) of the source. It is also shown that under some regularity conditions, the RDD of a stationary process is equal to its information dimension (ID). This connection establishes the optimality of the CSP algorithm, at least for memoryless stationary sources, for which the fundamental limits are known. Finally, it is shown that the CSP algorithm combined with a family of universal variable-length fixed-distortion compression codes yields a family of universal compressed sensing recovery algorithms.
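
    As we read the compression-based compressed sensing literature, the CSP decoder searches over the signals representable by the compression code and returns the one whose linear projections best match the measurements. The toy sketch below makes that search explicit by enumerating a small hypothetical codebook; actual CSP instances avoid enumeration, so this is only meant to convey the decoding rule.

        # Toy illustration of a compressible-signal-pursuit-style decoding rule:
        # among all signals the compression code can reproduce (the "codebook"),
        # pick the one whose projections A @ c best match the measurements y.
        # Enumerating the codebook is feasible only at this toy scale.
        import numpy as np

        def csp_decode(y, A, codebook):
            """codebook: array of shape (K, signal_dim); return the best-matching codeword."""
            residuals = np.linalg.norm(codebook @ A.T - y, axis=1)   # ||A c - y|| for every codeword c
            return codebook[np.argmin(residuals)]

        rng = np.random.default_rng(1)
        signal_dim, num_meas = 8, 5
        levels = np.array([-1.0, 0.0, 1.0])              # hypothetical reproduction levels of the code
        x = rng.choice(levels, size=signal_dim)          # a realization the code represents exactly
        A = rng.standard_normal((num_meas, signal_dim))  # random linear projections
        y = A @ x

        # Build the full codebook (3**8 codewords) for the toy scalar-quantization code.
        grids = np.meshgrid(*([levels] * signal_dim), indexing="ij")
        codebook = np.stack([g.ravel() for g in grids], axis=1)

        print("exact recovery:", np.allclose(csp_decode(y, A, codebook), x))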

    Polarization of the Renyi Information Dimension with Applications to Compressed Sensing

    In this paper, we show that the Hadamard matrix acts as an extractor of the Renyi information dimension (RID) over the reals, in an analogous way to how it acts as an extractor of the discrete entropy over finite fields. More precisely, we prove that the RID of an i.i.d. sequence of mixture random variables polarizes to the extremal values of 0 and 1 (corresponding to discrete and continuous distributions) when transformed by a Hadamard matrix. Further, we prove that the polarization pattern of the RID admits a closed-form expression and follows exactly the Binary Erasure Channel (BEC) polarization pattern in the discrete setting. We also extend the results from the single- to the multi-terminal setting, obtaining a Slepian-Wolf counterpart of the RID polarization. We discuss applications of the RID polarization to Compressed Sensing of i.i.d. sources. In particular, we use the RID polarization to construct a family of deterministic ±1-valued sensing matrices for Compressed Sensing. We run numerical simulations to compare the performance of the resulting matrices with that of random Gaussian and random Hadamard matrices. The results indicate that the proposed matrices afford competitive performance while being explicitly constructed. Comment: 12 pages, 2 figures
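
    The BEC polarization pattern invoked above has a simple closed-form recursion (standard polar-coding material, reproduced here for orientation rather than taken from the paper): one polarization step maps an erasure probability e to the pair (2e - e^2, e^2). Iterating it shows the indices splitting into an almost-noiseless and an almost-useless group, which is the pattern the RID of the Hadamard-transformed entries is claimed to follow.

        # Standard BEC polarization recursion: e -> (2e - e**2, e**2) per step.
        # The mean erasure probability is conserved, and the values concentrate
        # near 0 and 1, mirroring the claimed 0/1 polarization of the RID.
        import numpy as np

        def bec_polarize(eps, n_steps):
            """Erasure probabilities of the 2**n_steps synthetic channels of BEC(eps)."""
            e = np.array([eps])
            for _ in range(n_steps):
                e = np.concatenate([2 * e - e**2, e**2])
            return e

        eps = 0.4                          # plays the role of the RID of the i.i.d. mixture source
        e = bec_polarize(eps, 12)          # 4096 synthetic channels
        near_zero = np.mean(e < 1e-3)
        near_one = np.mean(e > 1 - 1e-3)
        print(f"~0: {near_zero:.3f}  ~1: {near_one:.3f}  in between: {1 - near_zero - near_one:.3f}")
        # As n_steps grows, near_zero -> 1 - eps and near_one -> eps.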

    Metric mean dimension and analog compression

    Wu and Verd\'u developed a theory of almost lossless analog compression, where one imposes various regularity conditions on the compressor and the decompressor, with the input signal being modelled by a (typically infinite-entropy) stationary stochastic process. In this work we consider all stationary stochastic processes with trajectories in a prescribed set of (bi-)infinite sequences and find uniform lower and upper bounds for certain compression rates in terms of metric mean dimension and mean box dimension. An essential tool is the recent Lindenstrauss-Tsukamoto variational principle expressing metric mean dimension in terms of rate-distortion functions. We also obtain lower bounds on compression rates for a fixed stationary process in terms of the rate-distortion dimension rates and study several examples. Comment: v3: Accepted for publication in IEEE Transactions on Information Theory. Additional examples were added. Material has been reorganized (with some parts removed). Minor mistakes were corrected.
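
    For orientation, the Lindenstrauss-Tsukamoto variational principle mentioned in both abstracts states, roughly (our paraphrase; the precise statement fixes a particular distortion measure and distinguishes lim sup from lim inf for the upper and lower versions), that

        \overline{\operatorname{mdim}}_M(X, T, d) \;=\; \limsup_{\varepsilon \to 0} \, \frac{\sup_{\mu} R_\mu(\varepsilon)}{\log(1/\varepsilon)},

    where the supremum ranges over the T-invariant Borel probability measures ÎŒ on X and R_ÎŒ(Δ) is the rate-distortion function of the stationary process generated by (T, ÎŒ) at distortion level Δ.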

    Lossless Analog Compression

    We establish the fundamental limits of lossless analog compression by considering the recovery of arbitrary m-dimensional real random vectors x from the noiseless linear measurements y=Ax with n x m measurement matrix A. Our theory is inspired by the groundbreaking work of Wu and Verdu (2010) on almost lossless analog compression, but applies to the nonasymptotic, i.e., fixed-m case, and considers zero error probability. Specifically, our achievability result states that, for almost all A, the random vector x can be recovered with zero error probability provided that n > K(x), where K(x) is given by the infimum of the lower modified Minkowski dimension over all support sets U of x. We then particularize this achievability result to the class of s-rectifiable random vectors as introduced in Koliander et al. (2016); these are random vectors of absolutely continuous distribution---with respect to the s-dimensional Hausdorff measure---supported on countable unions of s-dimensional differentiable submanifolds of the m-dimensional real coordinate space. Countable unions of differentiable submanifolds include essentially all signal models used in the compressed sensing literature. Specifically, we prove that, for almost all A, s-rectifiable random vectors x can be recovered with zero error probability from n>s linear measurements. This threshold is, however, found not to be tight as exemplified by the construction of an s-rectifiable random vector that can be recovered with zero error probability from n<s linear measurements. This leads us to the introduction of the new class of s-analytic random vectors, which admit a strong converse in the sense of n greater than or equal to s being necessary for recovery with probability of error smaller than one. The central conceptual tools in the development of our theory are geometric measure theory and the theory of real analytic functions
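
    For reference, the quantity K(x) above is built from the lower Minkowski (box-counting) dimension and its modified variant, which in the usual geometric-measure-theory conventions (our notation, not necessarily the paper's exact phrasing) read

        \underline{\dim}_{\mathrm{B}}(U) \;=\; \liminf_{\rho \to 0} \frac{\log N_U(\rho)}{\log(1/\rho)},
        \qquad
        \underline{\dim}_{\mathrm{MB}}(U) \;=\; \inf\Bigl\{ \sup_{i} \underline{\dim}_{\mathrm{B}}(U_i) \;:\; U \subseteq \bigcup_{i=1}^{\infty} U_i \Bigr\},

    where N_U(ρ) is the number of ρ-balls needed to cover the bounded set U and the infimum is over countable covers of U by bounded sets U_i. The cover-based version is countably stable, which is why countable unions of s-dimensional pieces, such as the s-rectifiable supports above, have modified dimension at most s.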

    Achieving the Fundamental Limit of Lossless Analog Compression via Polarization

    In this paper, we study lossless analog compression for i.i.d. nonsingular signals via the polarization-based framework. We prove that for nonsingular sources, the error probability of maximum a posteriori (MAP) estimation polarizes under the Hadamard transform, which extends the polarization phenomenon to the analog domain. Building on this insight, we propose partial Hadamard compression and develop the corresponding analog successive cancellation (SC) decoder. The proposed scheme consists of deterministic measurement matrices and a non-iterative reconstruction algorithm, providing benefits in both space and computational complexity. Using the polarization of the error probability, we prove that our approach achieves the information-theoretic limit for lossless analog compression developed by Wu and Verdu. Comment: 48 pages, 5 figures. This work was presented in part at the 2023 IEEE Global Communications Conference.
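
    A minimal sketch of the measurement side of partial Hadamard compression follows; it is our illustration only, the Sylvester construction being standard, while the row-index set kept below is an arbitrary placeholder, whereas the paper selects it via the polarization analysis and pairs it with an analog SC decoder that is not reproduced here.

        # Minimal sketch: form a Sylvester-Hadamard matrix and keep a subset of its rows
        # as a deterministic, non-adaptive measurement matrix.  The kept indices below are
        # placeholders; the paper derives them from the polarization of the MAP error probability.
        import numpy as np

        def hadamard(n):
            """Sylvester construction; n must be a power of two."""
            H = np.array([[1.0]])
            while H.shape[0] < n:
                H = np.block([[H, H], [H, -H]])
            return H

        block_len = 64
        keep = np.arange(0, block_len, 2)        # placeholder index set (NOT the paper's rule)
        H = hadamard(block_len)

        rng = np.random.default_rng(2)
        # Toy nonsingular source: each entry is 0 with probability 0.7, Gaussian otherwise.
        x = np.where(rng.random(block_len) < 0.3, rng.standard_normal(block_len), 0.0)
        y = H[keep] @ x                           # compressed analog representation
        print(y.shape)                            # (32,) measurements for a length-64 block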

    Worst-Case Analysis of Electrical and Electronic Equipment via Affine Arithmetic

    In the design and fabrication of electronic equipment, there are many unknown parameters which significantly affect the product performance. Some uncertainties are due to manufacturing process fluctuations, while others are due to the environment, such as operating temperature, voltage, and various ambient aging stressors. It is desirable to account for these uncertainties in order to ensure product performance, improve yield, and reduce design cost. Since direct electromagnetic compatibility measurements impact both cost and time-to-market, there has been a growing demand for tools enabling the simulation of electrical and electronic equipment with the inclusion of the effects of system uncertainties. In this framework, the device response is no longer regarded as deterministic but as a random process. It is traditionally analyzed using Monte Carlo or other sampling-based methods. The drawback of these methods is the large number of samples required for convergence, which makes them time-consuming for practical applications. As an alternative, inherently worst-case approaches such as interval analysis directly provide an estimate of the true bounds of the responses. However, such approaches may yield unnecessarily strict margins that are very unlikely to occur in practice. A recent technique, affine arithmetic, advances interval-based methods by handling correlated intervals, but it still leads to over-conservatism because it cannot take probability information into account. The objective of this thesis is to improve the accuracy of affine arithmetic and broaden its application to frequency-domain analysis. We first extend existing results in the literature to the efficient time-domain analysis of lumped circuits in the presence of uncertainties. We then extend basic affine arithmetic to the frequency-domain simulation of circuits. Classical tools for circuit analysis are used within a modified affine framework accounting for complex algebra and uncertainty interval partitioning, enabling the accurate and efficient computation of the worst-case bounds of the responses of both lumped and distributed circuits. The performance of the proposed approach is investigated through extensive simulations in several case studies, and the results are compared with the Monte Carlo method in terms of both simulation time and accuracy.
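
    As a concrete illustration of the affine-arithmetic idea summarized above (a generic textbook-style sketch, not the modified complex-valued framework developed in the thesis), every uncertain quantity is stored as a central value plus partial deviations attached to shared noise symbols, so correlations propagate through the computation and only nonlinear operations introduce fresh, conservatively bounded symbols.

        # Textbook-style affine arithmetic sketch: x = x0 + sum_i x_i * eps_i with eps_i in [-1, 1].
        # Shared noise symbols capture correlation between quantities; nonlinear operations add
        # a fresh symbol bounding the approximation error, which is where conservatism creeps in.
        import itertools

        _fresh = itertools.count()

        class Affine:
            def __init__(self, x0, terms=None):
                self.x0 = x0
                self.terms = dict(terms or {})        # noise-symbol id -> partial deviation

            @classmethod
            def interval(cls, lo, hi):
                """Create an affine form from an interval, with its own noise symbol."""
                return cls((lo + hi) / 2, {next(_fresh): (hi - lo) / 2})

            def __add__(self, other):
                if not isinstance(other, Affine):
                    return Affine(self.x0 + other, self.terms)
                terms = dict(self.terms)
                for k, v in other.terms.items():
                    terms[k] = terms.get(k, 0.0) + v
                return Affine(self.x0 + other.x0, terms)

            def __mul__(self, other):
                if not isinstance(other, Affine):
                    return Affine(self.x0 * other, {k: v * other for k, v in self.terms.items()})
                terms = {k: self.x0 * v for k, v in other.terms.items()}
                for k, v in self.terms.items():
                    terms[k] = terms.get(k, 0.0) + other.x0 * v
                # Bound the quadratic residue with a fresh noise symbol (conservative).
                rad_self = sum(abs(v) for v in self.terms.values())
                rad_other = sum(abs(v) for v in other.terms.values())
                terms[next(_fresh)] = rad_self * rad_other
                return Affine(self.x0 * other.x0, terms)

            def bounds(self):
                rad = sum(abs(v) for v in self.terms.values())
                return self.x0 - rad, self.x0 + rad

        R = Affine.interval(90.0, 110.0)        # resistance with +/-10% tolerance
        I = Affine.interval(0.9, 1.1)           # current with +/-10% tolerance
        print((R * I).bounds())                 # conservative worst-case bounds on V = R * I

    In this small example the product of two +/-10% quantities comes out as [79, 121] versus the exact range [81, 121], showing the mild conservatism introduced by the fresh noise symbol for the nonlinear term.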

    Supernova / Acceleration Probe: A Satellite Experiment to Study the Nature of the Dark Energy

    The Supernova / Acceleration Probe (SNAP) is a proposed space-based experiment designed to study the dark energy and alternative explanations of the acceleration of the Universe's expansion by performing a series of complementary systematics-controlled measurements. We describe a self-consistent reference mission design for building a Type Ia supernova Hubble diagram and for performing a wide-area weak gravitational lensing study. A 2-m wide-field telescope feeds a focal plane consisting of a 0.7 square-degree imager tiled with equal areas of optical CCDs and near infrared sensors, and a high-efficiency low-resolution integral field spectrograph. The SNAP mission will obtain high-signal-to-noise calibrated light-curves and spectra for several thousand supernovae at redshifts between z=0.1 and 1.7. A wide-field survey covering one thousand square degrees resolves ~100 galaxies per square arcminute. If we assume we live in a cosmological-constant-dominated Universe, the matter density, dark energy density, and flatness of space can all be measured with SNAP supernova and weak-lensing measurements to a systematics-limited accuracy of 1%. For a flat universe, the density-to-pressure ratio of dark energy can be similarly measured to 5% for the present value w0 and ~0.1 for the time variation w'. The large survey area, depth, spatial resolution, time-sampling, and nine-band optical to NIR photometry will support additional independent and/or complementary dark-energy measurement approaches as well as a broad range of auxiliary science programs. (Abridged) Comment: 40 pages, 18 figures, submitted to PASP, http://snap.lbl.go
    • 
