7 research outputs found

    Opportunistic scheduling with limited channel state information: A rate distortion approach

    We consider an opportunistic communication system in which a transmitter selects one of multiple channels over which to schedule a transmission, based on partial knowledge of the network state. We characterize a fundamental limit on the rate at which channel state information must be conveyed to the transmitter in order to meet a constraint on expected throughput. This problem is modeled as a causal rate-distortion optimization of a Markov source. We introduce a novel distortion metric capturing the impact of imperfect channel state information on throughput. We compute a closed-form expression for the causal information rate-distortion function for the case of two channels, as well as an algorithmic upper bound on the causal rate-distortion function. Finally, we characterize the gap between the causal information rate-distortion and causal entropic rate-distortion functions.

    Funding: National Science Foundation (U.S.) (Grant CNS-0915988); National Science Foundation (U.S.) (Grant CNS-1217048); United States. Army Research Office. Multidisciplinary University Research Initiative (Grant W911NF-08-1-0238); United States. Office of Naval Research (Grant N00014-12-1-0064); National Science Foundation (U.S.). Center for Science of Information (Grant CCF-09-39370).
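    The distortion metric is the interesting modeling choice here: distortion is not the error in describing the channel states themselves, but the throughput lost by scheduling on whichever channel the (possibly stale) description makes look best. A minimal sketch of that idea for ON/OFF channels follows; the function names and the 0/1 rate model are illustrative assumptions, not the paper's definitions.

```python
def throughput_loss(true_state, described_state, rate):
    """Hypothetical throughput-based distortion: the rate lost by scheduling
    on the channel that looks best under the description, rather than on the
    channel that is actually best."""
    best = max(rate(s) for s in true_state)
    chosen = max(range(len(true_state)), key=lambda i: rate(described_state[i]))
    return best - rate(true_state[chosen])

# Two ON/OFF channels: rate 1 on an ON channel, 0 on an OFF channel.
rate = lambda s: 1.0 if s else 0.0
print(throughput_loss((True, False), (False, True), rate))  # -> 1.0: wrong pick costs a slot
print(throughput_loss((True, True), (False, True), rate))   # -> 0.0: stale info, no harm
```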

    A Universal Scheme for Wyner–Ziv Coding of Discrete Sources

    We consider the Wyner–Ziv (WZ) problem of lossy compression where the decompressor observes a noisy version of the source, whose statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm to take advantage of side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images, and on English text using a low-complexity algorithm motivated by our class of universally optimal WZ codes.
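    As a rough illustration of the sliding-window half of such a scheme, the snippet below gathers two-sided context statistics of the kind a DUDE-style rule consumes. The paper's actual encoder, its LZ stage, and the side-information modification are not reproduced; the names and parameters are illustrative.

```python
from collections import Counter

def context_counts(seq, k):
    """Sliding-window pass: for each two-sided context of k symbols on either
    side, count how often each symbol appears at the centre. This is the
    statistics-gathering step shared by DUDE-style denoising rules."""
    counts = {}
    for i in range(k, len(seq) - k):
        ctx = (tuple(seq[i - k:i]), tuple(seq[i + 1:i + k + 1]))
        counts.setdefault(ctx, Counter())[seq[i]] += 1
    return counts

noisy = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
print(context_counts(noisy, 1))
```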

    Lossy compression of discrete sources via Viterbi algorithm

    We present a new lossy compressor for discrete-valued sources. For coding a sequence x^n, the encoder starts by assigning a certain cost to each possible reconstruction sequence. It then finds the one that minimizes this cost and describes it losslessly to the decoder via a universal lossless compressor. The cost of each sequence is a linear combination of its distance from the sequence x^n and a linear function of its k-th order empirical distribution. The structure of the cost function allows the encoder to employ the Viterbi algorithm to recover the minimizer of the cost. We identify a choice of the coefficients comprising the linear function of the empirical distribution used in the cost function which ensures that the algorithm universally achieves the optimum rate-distortion performance of any stationary ergodic source in the limit of large n, provided that k diverges as o(log n). Iterative techniques for approximating the coefficients, which alleviate the computational burden of finding the optimal coefficients, are proposed and studied. Comment: 26 pages, 6 figures, submitted to IEEE Transactions on Information Theory.
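    The search step lends itself directly to dynamic programming, since the per-symbol cost depends only on the current reconstruction symbol and the k symbols before it. A compact sketch of that Viterbi recursion is below; the gamma function is a toy stand-in for the carefully chosen coefficients the paper is actually about, and all names are illustrative.

```python
import itertools

def viterbi_lossy(x, alphabet, k, lam, gamma, d):
    """Minimize sum_i [ lam * d(x_i, y_i) + gamma(y_{i-k}, ..., y_i) ] over all
    reconstruction sequences y, by dynamic programming over the last-k-symbols
    state. gamma plays the role of the linear function of the k-th order
    empirical distribution; here it is just a placeholder argument."""
    states = list(itertools.product(alphabet, repeat=k))
    cost = {s: 0.0 for s in states}  # assume a free choice of initial context
    back = []
    for xi in x:
        new_cost, ptr = {}, {}
        for s in states:
            for y in alphabet:
                ns = s[1:] + (y,)
                c = cost[s] + lam * d(xi, y) + gamma(s + (y,))
                if ns not in new_cost or c < new_cost[ns]:
                    new_cost[ns], ptr[ns] = c, (s, y)
        back.append(ptr)
        cost = new_cost
    # trace back the minimizing reconstruction
    s = min(cost, key=cost.get)
    out = []
    for ptr in reversed(back):
        s, y = ptr[s]
        out.append(y)
    return out[::-1]

x = [0, 0, 1, 1, 1, 0, 1, 1]
hamming = lambda a, b: float(a != b)
gamma = lambda ctx: 0.2 * sum(ctx)  # toy stand-in coefficients
print(viterbi_lossy(x, (0, 1), k=2, lam=1.0, gamma=gamma, d=hamming))
```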

    Compression-Based Compressed Sensing

    Modern compression algorithms exploit complex structures that are present in signals to describe them very efficiently. On the other hand, the field of compressed sensing is built upon the observation that "structured" signals can be recovered from their under-determined set of linear projections. Currently, there is a large gap between the complexity of the structures studied in the area of compressed sensing and those employed by the state-of-the-art compression codes. Recent results in the literature on deterministic signals aim at bridging this gap through devising compressed sensing decoders that employ compression codes. This paper focuses on structured stochastic processes and studies the application of rate-distortion codes to compressed sensing of such signals. The performance of the formerly-proposed compressible signal pursuit (CSP) algorithm is studied in this stochastic setting. It is proved that in the very low distortion regime, as the blocklength grows to infinity, the CSP algorithm reliably and robustly recovers n instances of a stationary process from random linear projections as long as their count is slightly more than n times the rate-distortion dimension (RDD) of the source. It is also shown that under some regularity conditions, the RDD of a stationary process is equal to its information dimension (ID). This connection establishes the optimality of the CSP algorithm at least for memoryless stationary sources, for which the fundamental limits are known. Finally, it is shown that the CSP algorithm combined with a family of universal variable-length fixed-distortion compression codes yields a family of universal compressed sensing recovery algorithms.
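    In caricature, CSP picks, among all sequences the compression code can output, the one most consistent with the linear measurements. The brute-force sketch below conveys the estimator, not the algorithm's efficiency or the paper's analysis; the codebook and problem sizes are toy assumptions.

```python
import numpy as np

def csp_decode(y, A, codebook):
    """CSP in caricature: among the reconstructions the compression code can
    produce (its codebook), return the one whose projection best matches the
    measurements. Real codebooks are far too large to enumerate like this."""
    errs = [np.linalg.norm(y - A @ c) for c in codebook]
    return codebook[int(np.argmin(errs))]

rng = np.random.default_rng(0)
n, m = 8, 4                                           # n samples, m < n measurements
codebook = [rng.choice([0.0, 1.0], size=n) for _ in range(32)]  # toy code
x = codebook[7]                                       # true signal is a codeword
A = rng.standard_normal((m, n))
print(csp_decode(A @ x, A, codebook))                 # recovers x exactly here
```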

    Hypergraph-based Source Codes for Function Computation Under Maximal Distortion

    This work investigates functional source coding problems with maximal distortion, motivated by approximate function computation in many modern applications. The maximal distortion treats an imprecise reconstruction of a function value as being as good as perfect computation if it deviates from the true value by less than a tolerance level, while treating any reconstruction that differs by more than that level as a failure. Using a geometric understanding of the maximal distortion, we propose a hypergraph-based source coding scheme for function computation that is constructive in the sense that it gives an explicit procedure for defining auxiliary random variables. Moreover, we find that the hypergraph-based coding scheme achieves the optimal rate-distortion function in the setting of coding for computing with side information and the Berger-Tung sum-rate inner bound in the setting of distributed source coding for computing. It also achieves the El Gamal-Cover inner bound for multiple description coding for computing and is optimal for successive refinement and cascade multiple description problems for computing. Lastly, a reduction in the complexity of finding a forward test channel is demonstrated for a class of Markov sources.
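    The maximal distortion itself is simple to state: for a scalar function value it is just a tolerance test, as in the sketch below. The 0/infinity encoding of "failure" is one natural reading of the criterion, not a definition quoted from the paper.

```python
def maximal_distortion(true_value, reconstruction, tol):
    """Maximal-distortion criterion: a reconstruction within the tolerance is
    as good as exact; anything farther is a failure (scored as infinite here).
    Shown for scalar values; the paper treats general functions."""
    return 0.0 if abs(true_value - reconstruction) <= tol else float("inf")

print(maximal_distortion(3.14, 3.0, tol=0.2))  # 0.0: close enough
print(maximal_distortion(3.14, 2.0, tol=0.2))  # inf: a failure
```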

    Rate-Distortion via Markov Chain Monte Carlo

    We propose an approach to lossy source coding, utilizing ideas from Gibbs sampling, simulated annealing, and Markov Chain Monte Carlo (MCMC). The idea is to sample a reconstruction sequence from a Boltzmann distribution associated with an energy function that incorporates the distortion between the source and reconstruction, the compressibility of the reconstruction, and the point sought on the rate-distortion curve. To sample from this distribution, we use a `heat bath algorithm': starting from an initial candidate reconstruction (say the original source sequence), at every iteration an index i is chosen and the i-th sequence component is replaced by drawing from the conditional probability distribution for that component given all the rest. At the end of this process, the encoder conveys the reconstruction to the decoder using universal lossless compression. The complexity of each iteration is independent of the sequence length and only linearly dependent on a certain context parameter (which grows sub-logarithmically with the sequence length). We show that the proposed algorithms achieve optimum rate-distortion performance in the limit of a large number of iterations and a large sequence length, when employed on any stationary ergodic source. Experimentation shows promising initial results. Employing our lossy compressors on noisy data, with appropriately chosen distortion measure and level, followed by a simple de-randomization operation, results in a family of denoisers that compares favorably (both theoretically and in practice) with other MCMC-based schemes, and with the Discrete Universal Denoiser (DUDE). Comment: 35 pages, 16 figures, submitted to IEEE Transactions on Information Theory.
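    A minimal sketch of one heat-bath iteration follows. The energy function here is a crude stand-in (a run count as a compressibility proxy) for the conditional empirical-entropy term the scheme actually uses, and it rescans the whole sequence, so it does not reproduce the per-iteration complexity claimed above.

```python
import math
import random

def heat_bath_step(x, y, i, beta, lam, energy):
    """One heat-bath iteration: resample the i-th reconstruction symbol from
    the Boltzmann conditional given all the other symbols, at inverse
    temperature beta (annealed upward over time for simulated annealing)."""
    weights = []
    for b in (0, 1):
        y[i] = b
        weights.append(math.exp(-beta * energy(x, y, lam)))
    y[i] = random.choices((0, 1), weights=weights)[0]

def toy_energy(x, y, lam):
    """Toy energy: distortion plus a crude compressibility proxy (run count),
    standing in for the empirical-entropy term the paper uses."""
    distortion = sum(a != b for a, b in zip(x, y))
    runs = sum(y[j] != y[j + 1] for j in range(len(y) - 1))
    return runs + lam * distortion

random.seed(1)
x = [0, 0, 1, 1, 0, 1, 1, 1]
y = list(x)                      # start from the source sequence itself
for t in range(200):
    heat_bath_step(x, y, t % len(x), beta=1.0 + 0.02 * t, lam=1.0, energy=toy_energy)
print(y)                         # a smoother, more compressible reconstruction
```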

    The Dispersion of the Gauss-Markov Source

    The Gauss-Markov source produces U_i = aU_(i-1) + Z_i for i ≄ 1, where U_0 = 0, |a| < 1, and the Z_i are i.i.d. Gaussian random variables with variance σ^2. We consider lossy compression of n samples of this source under squared error distortion for any distortion d > 0, and we show that the dispersion has a reverse waterfilling representation. This is the first finite blocklength result for lossy compression of sources with memory. We prove that the finite blocklength rate-distortion function R(n, d, Δ) approaches the rate-distortion function R(d) as R(n, d, Δ) = R(d) + √(V(d)/n) Q^(-1)(Δ) + o(1/√n), where V(d) is the dispersion, Δ ∈ (0, 1) is the excess-distortion probability, and Q^(-1) is the inverse of the Q-function. We give a reverse waterfilling integral representation for the dispersion V(d), which parallels that of the rate-distortion functions for Gaussian processes. Remarkably, for all 0 < d ≀ σ^2/(1+|a|)^2, R(n, d, Δ) of the Gauss-Markov source coincides with that of Z_i, the i.i.d. Gaussian noise driving the process, up to the second-order term. Among the novel technical tools developed in this paper are a sharp approximation of the eigenvalues of the covariance matrix of n samples of the Gauss-Markov source, and a construction of a typical set using the maximum likelihood estimate of the parameter a based on n observations.
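    To make the expansion concrete, the snippet below evaluates the second-order approximation R(d) + √(V(d)/n) Q^(-1)(Δ) for the i.i.d. Gaussian case that, per the abstract, matches the Gauss-Markov source at small d. The dispersion value V = 1/2 nats² used here is the known figure for the i.i.d. Gaussian source and is an assumption for illustration, not a number taken from this abstract.

```python
import math
from scipy.stats import norm

def rate_second_order(n, d, eps, sigma2, V):
    """Second-order approximation R(n, d, eps) ~ R(d) + sqrt(V/n) * Qinv(eps).
    R(d) = 0.5 * log(sigma^2 / d) nats is the i.i.d. Gaussian rate-distortion
    function; norm.isf is the inverse Q-function Q^(-1)."""
    R = 0.5 * math.log(sigma2 / d)        # nats per sample
    return R + math.sqrt(V / n) * norm.isf(eps)

# How many extra nats per sample the blocklength costs at eps = 1% excess distortion:
for n in (100, 1000, 10000):
    print(n, rate_second_order(n, d=0.1, eps=0.01, sigma2=1.0, V=0.5))
```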
