
    Universal Sampling Rate Distortion

    We examine coordinated and universal rate-efficient sampling of a subset of correlated discrete memoryless sources, followed by lossy compression of the sampled sources. The goal is to reconstruct a predesignated subset of the sources within a specified level of distortion. The combined sampling mechanism and rate-distortion code are universal in that they are devised to perform robustly without exact knowledge of the underlying joint probability distribution of the sources. In Bayesian as well as non-Bayesian settings, single-letter characterizations are provided for the universal sampling rate distortion function for fixed-set sampling, independent random sampling, and memoryless random sampling, and these sampling mechanisms are shown to be successively better. Our achievability proofs bring forth new schemes for joint source-distribution learning and lossy compression.
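    For orientation, recall the classical rate-distortion function of a discrete memoryless source, which the sampling rate distortion function generalizes. The display below sketches that benchmark together with a natural fixed-set analog in which the encoder observes only a sampled subset A while distortion is measured on a predesignated reconstruction subset B; the notation is ours, and the paper's exact single-letter characterization may differ.

        R(D) = \min_{P_{\hat X \mid X}\,:\;\mathbb{E}[d(X,\hat X)] \le D} I(X; \hat X)

        R_A(D) = \min_{P_{\hat X_B \mid X_A}\,:\;\mathbb{E}[d(X_B,\hat X_B)] \le D} I(X_A; \hat X_B), \qquad A, B \subseteq \{1,\dots,m\}

    The coupling between sampling and compression enters through the second display: the minimizing test channel sees only X_A, yet the distortion constraint can involve source components the encoder never observes.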

    Sampling Rate Distortion

    Consider a memoryless multiple source with m components of which a (possibly randomized) subset of k ≤ m components is sampled at each time instant and jointly compressed with the objective of reconstructing a prespecified subset of the m components under a given distortion criterion. The combined sampling and lossy compression mechanisms are to be designed to perform robustly with or without exact knowledge of the underlying joint probability distribution of the source. In this dissertation, we introduce a new framework of sampling rate distortion to study the tradeoffs among sampling mechanism, encoder-decoder structure, compression rate, and the desired level of accuracy in the reconstruction.

    We begin with a discrete memoryless multiple source whose joint probability mass function (pmf) is taken to be known. A notion of sampling rate distortion function is introduced to study the aforementioned tradeoffs, and is characterized first for fixed-set sampling. Next, for independent random sampling performed without knowledge of the source outputs, it is shown that the sampling rate distortion function is the same whether or not the decoder is informed of the sequence of sampled sets. For memoryless random sampling, with the sampling depending on the source outputs, it is shown that deterministic sampling, characterized by a conditional point-mass, is optimal and suffices to achieve the sampling rate distortion function.

    Building on this, we consider a universal setting where the joint pmf of a discrete memoryless multiple source is known only to belong to a finite family of pmfs. In Bayesian and non-Bayesian settings, single-letter characterizations are provided for the universal sampling rate distortion function for fixed-set sampling, independent random sampling, and memoryless random sampling. We show that these sampling mechanisms successively improve upon each other: (i) in their ability to enable an associated encoder to approximate the underlying joint pmf, and (ii) in their ability to choose appropriate subsets of the multiple source for compression by the encoder.

    Lastly, we consider a jointly Gaussian memoryless multiple source, to be reconstructed under a mean-squared error distortion criterion, with joint probability distribution known only to belong to an uncountable family of probability density functions (characterized by a convex compact subset of Euclidean space). For fixed-set sampling, we characterize the universal sampling rate distortion function in Bayesian and non-Bayesian settings. We also provide optimal reconstruction algorithms of reduced complexity, which compress and reconstruct the sampled source components first under a modified distortion criterion, and then form minimum mean-squared error (MMSE) estimates for the unsampled components based on the reconstructions of the former. The questions addressed in this dissertation are motivated by applications such as dynamic thermal management for multicore processors, in-network computation, and satellite imaging.
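    A minimal sketch of the reduced-complexity Gaussian reconstruction described in the last paragraph above, assuming zero-mean sources with a known covariance: the sampled components are reconstructed by some rate-distortion code (a plain uniform quantizer stands in for it here), and the unsampled components are then filled in with the linear MMSE estimate. The function names and the stand-in quantizer are ours, not the dissertation's.

        import numpy as np

        def lmmse_fill(x_hat_S, Sigma, S, U):
            """Linear MMSE estimate of the unsampled components X_U from
            reconstructions of the sampled components X_S, for a zero-mean
            jointly Gaussian source with covariance Sigma."""
            Sigma_US = Sigma[np.ix_(U, S)]
            Sigma_SS = Sigma[np.ix_(S, S)]
            # x_hat_U = Sigma_US @ inv(Sigma_SS) @ x_hat_S
            return Sigma_US @ np.linalg.solve(Sigma_SS, x_hat_S)

        def two_stage_reconstruction(x, Sigma, S, U, step=0.5):
            # Stage 1: compress/reconstruct only the sampled components.
            # A uniform scalar quantizer stands in for the rate-distortion code.
            x_hat = np.empty_like(x)
            x_hat[S] = step * np.round(x[S] / step)
            # Stage 2: MMSE-estimate the unsampled components from stage 1.
            x_hat[U] = lmmse_fill(x_hat[S], Sigma, S, U)
            return x_hat

        # Toy usage on a 4-component source, sampling components 0 and 2.
        rng = np.random.default_rng(0)
        G = rng.standard_normal((4, 4))
        Sigma = G @ G.T                        # a valid covariance matrix
        x = rng.multivariate_normal(np.zeros(4), Sigma)
        x_hat = two_stage_reconstruction(x, Sigma, S=[0, 2], U=[1, 3])

    In the dissertation the first stage uses a modified distortion criterion so that the end-to-end mean-squared error target is met after the MMSE fill-in; the plain quantizer above ignores that refinement.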

    The Sampling Rate-Distortion Tradeoff for Sparsity Pattern Recovery in Compressed Sensing

    Recovery of the sparsity pattern (or support) of an unknown sparse vector from a limited number of noisy linear measurements is an important problem in compressed sensing. In the high-dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the measurement rate and the per-sample signal-to-noise ratio (SNR) are finite constants, independent of the vector length. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases computationally simple estimators are near-optimal. Bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for several different recovery algorithms. The tightness of the bounds, in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing information-theoretic necessary bounds. Near-optimality is shown for a wide variety of practically motivated signal models.
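    To make the quantities above concrete, here is a small hedged experiment in the regime the abstract describes: fix the measurement rate m/n and the per-sample SNR, recover the support with a simple correlation-and-threshold estimator, and measure the resulting fraction of support errors. The estimator is a generic stand-in, not one of the paper's near-optimal algorithms, and the parameter names are ours.

        import numpy as np

        def support_error_fraction(n=2000, rate=0.3, sparsity=0.1, snr=10.0, seed=0):
            """Fraction of support errors for a naive correlation estimator
            at measurement rate m/n = rate and fixed per-sample SNR."""
            rng = np.random.default_rng(seed)
            m, k = int(rate * n), int(sparsity * n)
            support = rng.choice(n, size=k, replace=False)
            x = np.zeros(n)
            x[support] = rng.standard_normal(k)
            A = rng.standard_normal((m, n)) / np.sqrt(m)
            signal = A @ x
            noise_var = np.mean(signal**2) / snr           # fixes the per-sample SNR
            y = signal + rng.normal(scale=np.sqrt(noise_var), size=m)
            # Naive estimator: keep the k columns most correlated with y.
            est = np.argsort(-np.abs(A.T @ y))[:k]
            missed = len(set(support) - set(est))          # equals the false alarms here
            return missed / k

        print(support_error_fraction())

    Because both the rate and the SNR are held constant as n grows, the error fraction stays bounded away from zero, which is exactly the phenomenon the bounds above quantify.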

    "Compressed" Compressed Sensing

    The field of compressed sensing has shown that a sparse but otherwise arbitrary vector can be recovered exactly from a small number of randomly constructed linear projections (or samples). The question addressed in this paper is whether an even smaller number of samples is sufficient when there exists prior knowledge about the distribution of the unknown vector, or when only partial recovery is needed. An information-theoretic lower bound with connections to free probability theory and an upper bound corresponding to a computationally simple thresholding estimator are derived. It is shown that in certain cases (e.g., discrete-valued vectors or large distortions) the number of samples can be decreased. Interestingly, though, it is also shown that in many cases no reduction is possible.
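    The sketch below illustrates, in our own notation rather than the paper's, why a discrete-valued prior can reduce the number of samples needed: each matched-filter output is rounded to the nearest symbol of the known alphabet, a thresholding-style estimator in the spirit of the computationally simple one analyzed above.

        import numpy as np

        def round_to_alphabet(v, alphabet):
            """Map each entry of v to the nearest symbol of a known alphabet."""
            alphabet = np.asarray(alphabet)
            return alphabet[np.argmin(np.abs(v[:, None] - alphabet[None, :]), axis=1)]

        rng = np.random.default_rng(1)
        n, m = 1000, 400
        alphabet = [-1.0, 0.0, 1.0]                # assumed prior: ternary-valued signal
        x = rng.choice(alphabet, size=n, p=[0.01, 0.98, 0.01])
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x + 0.05 * rng.standard_normal(m)
        x_hat = round_to_alphabet(A.T @ y, alphabet)   # matched filter + rounding
        print("symbol error rate:", np.mean(x_hat != x))

    Rounding exploits the prior exactly where the abstract says gains are possible; with a continuous-valued prior there is no finite alphabet to snap to, and the raw matched-filter output would have to be kept as-is.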

    Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds

    Recovery of the sparsity pattern (or support) of an unknown sparse vector from a small number of noisy linear measurements is an important problem in compressed sensing. In this paper, the high-dimensional setting is considered. It is shown that if the measurement rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector, then the optimal sparsity pattern estimate will have a constant fraction of errors. Lower bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing achievable bounds. Near-optimality is shown for a wide variety of practically motivated signal models.
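    Since this abstract and the companion paper above lean on the same two quantities, one standard convention for them is recorded below, for a length-n vector observed through m noisy linear measurements y = Ax + w; the papers' exact normalizations may differ.

        \rho = \frac{m}{n} \quad \text{(measurement rate)}, \qquad
        \mathrm{SNR} = \frac{\mathbb{E}\,\lVert Ax \rVert^2}{\mathbb{E}\,\lVert w \rVert^2} \quad \text{(per-sample SNR)}

    The high-dimensional statements above hold with ρ and SNR fixed as n grows, so that neither extra measurements nor extra signal power is smuggled in with the dimension.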

    On the Performance of Turbo Signal Recovery with Partial DFT Sensing Matrices

    This letter studies the performance of the turbo signal recovery (TSR) algorithm for compressed sensing with partial discrete Fourier transform (DFT) sensing matrices. Based on a state evolution analysis, we prove that TSR with a partial DFT sensing matrix outperforms the well-known approximate message passing (AMP) algorithm with an independent and identically distributed (IID) sensing matrix. (To appear in IEEE Signal Processing Letters.)
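    For context on the proof technique: the standard state-evolution recursion that tracks AMP with an IID Gaussian sensing matrix is reproduced below, with the usual notation (δ = m/n, denoiser η_t, measurement-noise variance σ²); the labeling is ours, not necessarily the letter's. TSR admits an analogous scalar recursion adapted to the partial-DFT structure, and the comparison in the letter amounts to comparing the two recursions.

        \tau_{t+1}^2 = \sigma^2 + \frac{1}{\delta}\,
        \mathbb{E}\Bigl[\bigl(\eta_t(X + \tau_t Z) - X\bigr)^2\Bigr],
        \qquad Z \sim \mathcal{N}(0,1)\ \text{independent of } X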