
    Mismatched codebooks and the role of entropy-coding in lossy data compression

    We introduce a universal quantization scheme based on random coding, and we analyze its performance. This scheme consists of a source-independent random codebook (typically mismatched to the source distribution), followed by optimal entropy-coding that is matched to the quantized codeword distribution. A single-letter formula is derived for the rate achieved by this scheme at a given distortion, in the limit of large codebook dimension. The rate reduction due to entropy-coding is quantified, and it is shown that it can be arbitrarily large. In the special case of "almost uniform" codebooks (e.g., an i.i.d. Gaussian codebook with large variance) and difference distortion measures, a novel connection is drawn between the compression achieved by the present scheme and the performance of "universal" entropy-coded dithered lattice quantizers. This connection generalizes the "half-a-bit" bound on the redundancy of dithered lattice quantizers. Moreover, it demonstrates a strong notion of universality, where a single "almost uniform" codebook is near-optimal for any source and any difference distortion measure.
    Comment: 35 pages, 37 references, no figures. Submitted to IEEE Transactions on Information Theory
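
    A minimal numerical sketch of the "half-a-bit" bound mentioned above, for the scalar case: entropy-coded subtractive-dithered quantization of a Gaussian source, with the index entropy estimated from a histogram (an approximation to the conditional entropy given the dither) and compared against the Gaussian rate-distortion function. The step size and sample count are illustrative choices, not values from the paper.

```python
# Monte-Carlo check of H(quantizer index) <= R_X(D) + 0.5 bits/sample for a
# subtractive-dithered uniform scalar quantizer on a Gaussian source.
import numpy as np

rng = np.random.default_rng(0)
sigma2, delta, n = 1.0, 0.5, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), n)
u = rng.uniform(-delta / 2, delta / 2, n)          # subtractive dither
q = np.round((x + u) / delta)                      # quantizer index
xhat = q * delta - u                               # reconstruction
D = np.mean((x - xhat) ** 2)                       # error is uniform: ~ delta^2 / 12

# Histogram estimate of the index entropy (bits/sample)
_, counts = np.unique(q, return_counts=True)
p = counts / n
H = -np.sum(p * np.log2(p))

R_shannon = 0.5 * np.log2(sigma2 / D)              # Gaussian RDF at distortion D
print(f"D={D:.4f}  H={H:.3f}  R(D)={R_shannon:.3f}  gap={H - R_shannon:.3f} (<= 0.5)")
```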

    Improved Upper Bounds to the Causal Quadratic Rate-Distortion Function for Gaussian Stationary Sources

    We improve the existing achievable rate regions for causal and for zero-delay source coding of stationary Gaussian sources under an average mean squared error (MSE) distortion measure. To begin with, we find a closed-form expression for the information-theoretic causal rate-distortion function (RDF) under such a distortion measure, denoted by R_c^{it}(D), for first-order Gauss-Markov processes. R_c^{it}(D) is a lower bound to the optimal performance theoretically attainable (OPTA) by any causal source code, namely R_c^{op}(D). We show that, for Gaussian sources, the latter can also be upper bounded as R_c^{op}(D) \leq R_c^{it}(D) + 0.5\log_{2}(2\pi e) bits/sample. In order to analyze R_c^{it}(D) for arbitrary zero-mean Gaussian stationary sources, we introduce \bar{R}_c^{it}(D), the information-theoretic causal RDF when the reconstruction error is jointly stationary with the source. Based upon \bar{R}_c^{it}(D), we derive three closed-form upper bounds to the additive rate loss defined as \bar{R}_c^{it}(D) - R(D), where R(D) denotes Shannon's RDF. Two of these bounds are strictly smaller than 0.5 bits/sample at all rates. These bounds differ from one another in their tightness and ease of evaluation; the tighter the bound, the more involved its evaluation. We then show that, for any source spectral density and any positive distortion D \leq \sigma_{x}^{2}, \bar{R}_c^{it}(D) can be realized by an AWGN channel surrounded by a unique set of causal pre-, post-, and feedback filters. We show that finding such filters constitutes a convex optimization problem. In order to solve the latter, we propose an iterative optimization procedure that yields the optimal filters and is guaranteed to converge to \bar{R}_c^{it}(D). Finally, by establishing a connection to feedback quantization, we design a causal and a zero-delay coding scheme which, for Gaussian sources, achieves...
    Comment: 47 pages, revised version submitted to IEEE Trans. Information Theory
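
    For context, a short reverse water-filling sketch for Shannon's RDF R(D) of a stationary Gaussian AR(1) (first-order Gauss-Markov) source, the baseline against which the rate-loss bounds above (two of them below 0.5 bits/sample) are stated. The AR coefficient, innovation variance, and grid size are illustrative assumptions.

```python
# Reverse water-filling over the spectral density S(w) = s_w2 / |1 - a e^{-jw}|^2
# of an AR(1) source; the grid mean approximates (1/2pi) * integral over [-pi, pi).
import numpy as np

a, s_w2 = 0.9, 1.0
w = np.linspace(-np.pi, np.pi, 1 << 14, endpoint=False)
S = s_w2 / (1.0 - 2.0 * a * np.cos(w) + a * a)     # AR(1) spectral density

def rd_point(theta):
    """Distortion and rate (bits/sample) for water level theta."""
    D = np.mean(np.minimum(theta, S))
    R = np.mean(np.maximum(0.0, 0.5 * np.log2(S / theta)))
    return D, R

for theta in (0.05, 0.2, 1.0):
    D, R = rd_point(theta)
    print(f"theta={theta:.2f}  D={D:.4f}  R(D)={R:.4f} bits/sample")
```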

    Information Nonanticipative Rate Distortion Function and Its Applications

    This paper investigates applications of the nonanticipative Rate Distortion Function (RDF) in a) zero-delay Joint Source-Channel Coding (JSCC) design based on average and excess distortion probability, b) bounding the Optimal Performance Theoretically Attainable (OPTA) by noncausal and causal codes, and c) computing the Rate Loss (RL) of zero-delay and causal codes with respect to noncausal codes. These applications are described using two running examples: the Binary Symmetric Markov Source with parameter p, BSMS(p), and the multidimensional partially observed Gaussian-Markov source. For the multidimensional Gaussian-Markov source with square error distortion, the solution of the nonanticipative RDF is derived, its operational meaning using JSCC design via a noisy coding theorem is shown by providing the optimal encoding-decoding scheme over a vector Gaussian channel, and the RL of causal and zero-delay codes with respect to noncausal codes is computed. For the BSMS(p) with Hamming distortion, the solution of the nonanticipative RDF is derived, the RL of causal codes with respect to noncausal codes is computed, and an uncoded noisy coding theorem based on excess distortion probability is shown. The information nonanticipative RDF is shown to be equivalent to the nonanticipatory epsilon-entropy, which corresponds to the classical RDF with an additional causality or nonanticipative condition imposed on the optimal reproduction conditional distribution.
    Comment: 34 pages, 12 figures; part of this paper was accepted for publication in the IEEE International Symposium on Information Theory (ISIT), 2014 and in the book Coordination Control of Distributed Systems, series Lecture Notes in Control and Information Sciences, 201
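
    As a reference point only (not a result of this paper), the classical RDF of a memoryless binary source with bias p under Hamming distortion; the nonanticipative RDF of BSMS(p) adds a causality constraint on the reproduction distribution on top of this kind of expression.

```latex
% Classical memoryless binary-source RDF under Hamming distortion.
\[
  R(D) \;=\; h(p) - h(D), \qquad 0 \le D \le \min(p,\,1-p),
\]
% where $h(x) = -x\log_2 x - (1-x)\log_2(1-x)$ is the binary entropy function.
```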

    The Distortion Rate Function of Cyclostationary Gaussian Processes

    A general expression for the distortion rate function (DRF) of cyclostationary Gaussian processes in terms of their spectral properties is derived. This expression can be seen as the result of orthogonalization over the different components in the polyphase decomposition of the process. We use this expression to derive, in closed form, the DRF of several cyclostationary processes arising in practice. We first consider the DRF of a combined sampling and source coding problem. It is known that the optimal coding strategy for this problem involves source coding applied to a signal with the same structure as one resulting from pulse amplitude modulation (PAM). Since a PAM-modulated signal is cyclostationary, our DRF expression can be used to solve for the minimal distortion in the combined sampling and source coding problem. We also analyze in more detail the DRF of a source with the same structure as a PAM-modulated signal, and show that it is obtained by reverse waterfilling over an expression that depends on the energy of the pulse and the baseband process modulated to obtain the PAM signal. This result is then used to study the information content of a PAM-modulated signal as a function of its symbol time relative to the bandwidth of the underlying baseband process. In addition, we study the DRF of sources with an amplitude-modulation structure, and show that the DRF of a narrow-band Gaussian stationary process modulated by either a deterministic or a random-phase sine wave equals the DRF of the baseband process.
    Comment: First revision for the IEEE Transactions on Information Theory
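
    A hedged sketch of the orthogonalize-then-water-fill recipe described above: reverse water-filling over the per-frequency eigenvalues of the M x M polyphase spectral-density matrix. The function toy_psd, the normalization over grid points and components, and all parameter values are illustrative assumptions, not the paper's PAM model.

```python
import numpy as np

def drf_cyclo(polyphase_psd, theta, ngrid=2048):
    """(distortion, rate in bits per source sample) at water level theta."""
    w = np.linspace(-np.pi, np.pi, ngrid, endpoint=False)
    lam = np.array([np.linalg.eigvalsh(polyphase_psd(wk)) for wk in w])
    D = np.mean(np.minimum(theta, lam))            # average over freq and components
    R = np.mean(np.where(lam > theta,
                         0.5 * np.log2(np.maximum(lam, theta) / theta), 0.0))
    return D, R

def toy_psd(wk):
    """Illustrative (made-up) 2x2 positive-definite polyphase spectrum."""
    c = 0.5 * np.cos(wk)
    return np.array([[1.5 + c, 0.3], [0.3, 0.7 - 0.5 * c]])

print(drf_cyclo(toy_psd, theta=0.3))
```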

    Rate-distortion function via minimum mean square error estimation

    We derive a simple general parametric representation of the rate-distortion function of a memoryless source, where both the rate and the distortion are given by integrals whose integrands include the minimum mean square error (MMSE) of the distortion \Delta = d(X,Y) based on the source symbol X, with respect to a certain joint distribution of these two random variables. At first glance, these relations may seem somewhat similar to the I-MMSE relations due to Guo, Shamai and Verdú, but they are, in fact, quite different. The new relations among rate, distortion, and MMSE are discussed from several aspects, and more importantly, it is demonstrated that they can sometimes be rather useful for obtaining non-trivial upper and lower bounds on the rate-distortion function, as well as for determining the exact asymptotic behavior for very low and for very large distortion. Analogous MMSE relations hold for channel capacity as well.
    Comment: 11 pages, 1 figure, submitted for publication
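
    The paper's MMSE-based representation is not reproduced here; instead, a standard Blahut-Arimoto sketch for computing the rate-distortion function of a finite-alphabet memoryless source, which is a convenient way to numerically check bounds of the kind discussed above. The source pmf, distortion matrix, and Lagrange multiplier are illustrative.

```python
import numpy as np

def blahut_arimoto(px, dist, beta, iters=500):
    """R(D) point (bits, distortion) for Lagrange multiplier beta >= 0."""
    ny = dist.shape[1]
    qy = np.full(ny, 1.0 / ny)                     # output marginal, init uniform
    A = np.exp(-beta * dist)
    for _ in range(iters):
        phi = A * qy                               # unnormalized conditional
        phi /= phi.sum(axis=1, keepdims=True)      # p(y|x)
        qy = px @ phi                              # update output marginal
    D = px @ (phi * dist).sum(axis=1)
    R = px @ (phi * np.log(phi / qy)).sum(axis=1) / np.log(2)
    return R, D

# Binary source, Hamming distortion: R(D) should match h(p) - h(D).
px = np.array([0.8, 0.2])
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto(px, dist, beta=3.0))
```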

    Optimal Linear Joint Source-Channel Coding with Delay Constraint

    The problem of joint source-channel coding is considered for a stationary remote (noisy) Gaussian source and a Gaussian channel. The encoder and decoder are assumed to be causal, and their combined operations are subject to a delay constraint. It is shown that, under the mean-square error distortion metric, an optimal encoder-decoder pair from the linear and time-invariant (LTI) class can be found by minimization of a convex functional and a spectral factorization. The functional to be minimized is the sum of the well-known cost in a corresponding Wiener filter problem and a new term, which is induced by the channel noise and whose coefficient is the inverse of the channel's signal-to-noise ratio. This result is shown to also hold in the case of vector-valued signals, assuming parallel additive white Gaussian noise channels. It is also shown that optimal LTI encoders and decoders generally require infinite memory, which implies that approximations are necessary. A numerical example is provided, which compares the performance to the lower bound provided by rate-distortion theory.
    Comment: Submitted to IEEE Transactions on Information Theory on March 28th 201
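
    A classical closed-form data point adjacent to this setting (not the paper's algorithm): for a white Gaussian source transmitted uncoded over a matched AWGN channel at one channel use per source sample, the simple linear scheme already meets the rate-distortion lower bound. Parameter values are illustrative.

```python
# Uncoded linear transmission of a white Gaussian source over AWGN achieves
# D = sigma_x^2 / (1 + SNR), which equals sigma_x^2 * 2^(-2C) at R = C.
import numpy as np

sigma_x2, P, N = 1.0, 4.0, 1.0                     # source var, power, noise var
snr = P / N
D_linear = sigma_x2 / (1.0 + snr)                  # MMSE of uncoded transmission
C = 0.5 * np.log2(1.0 + snr)                       # channel capacity, bits/use
D_opta = sigma_x2 * 2.0 ** (-2.0 * C)              # distortion-rate bound at R = C
print(D_linear, D_opta)                            # both 0.2: the bound is met
```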

    Linear code-based vector quantization for independent random variables

    In this paper we analyze the rate-distortion function R(D) achievable using linear codes over GF(q), where q is a prime number.
    Comment: 16 pages, 3 figures

    Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding

    We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. It is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance with encoding complexity. An example of such a trade-off as a function of the block length n is the following: with computational resource (space or time) per source sample of O((n/\log n)^2), for a fixed distortion level above the Gaussian distortion-rate function, the probability of excess distortion decays exponentially in n. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has good empirical performance, especially at low and moderate rates.
    Comment: 14 pages, to appear in IEEE Transactions on Information Theory
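
    A hedged sketch of the sequential column-selection encoder described above: the design matrix is split into L sections of M columns, and one column per section is greedily matched to the current residual. The equal section coefficients used here are an illustrative simplification; the paper specifies particular coefficient values.

```python
import numpy as np

rng = np.random.default_rng(1)
n, L, M = 64, 16, 32                               # block length, sections, columns/section
A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, L * M))  # i.i.d. Gaussian design matrix
c = np.ones(L)                                     # illustrative (simplified) coefficients

def sparc_encode(x):
    """Greedy successive approximation; returns per-section column choices and residual."""
    r = x.copy()
    idx = []
    for i in range(L):
        cols = A[:, i * M : (i + 1) * M]           # section i of the design matrix
        j = int(np.argmax(cols.T @ r))             # column best aligned with residual
        idx.append(j)
        r = r - c[i] * cols[:, j]                  # peel off its contribution
    return idx, r

x = rng.normal(0.0, 1.0, n)
idx, residual = sparc_encode(x)
print(f"rate ~ {L * np.log2(M) / n:.2f} bits/sample, "
      f"residual MSE {np.mean(residual ** 2):.3f}")
```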

    Rate-Distortion-Memory Trade-offs in Heterogeneous Caching Networks

    Caching at the wireless edge can be used to keep up with the increasing demand for high-definition wireless video streaming. By prefetching popular content into memory at wireless access points or end-user devices, requests can be served locally, relieving strain on expensive backhaul. In addition, using network coding allows the simultaneous serving of distinct cache misses via common coded multicast transmissions, resulting in significantly larger load reductions compared to those achieved with traditional delivery schemes. Most prior works simply treat video content as fixed-size files that users would like to fully download. This work is motivated by the fact that video can be coded in a scalable fashion and that the decoded video quality depends on the number of layers a user receives in sequence. Using a Gaussian source model, caching and coded delivery methods are designed to minimize the squared-error distortion at end-user devices in a rate-limited caching network. The framework is very general and accounts for heterogeneous cache sizes, video popularities, and user-file playback qualities. As part of the solution, a new decentralized scheme for lossy cache-aided delivery subject to preset user distortion targets is proposed, which further generalizes prior literature to a setting with file heterogeneity.
    Comment: Submitted to Transactions on Wireless Communications
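
    A background fact the scalable (layered) video model relies on, not the paper's scheme: Gaussian sources are successively refinable under squared error, so receiving the first k layers at rates R_1, ..., R_k gives distortion sigma^2 * 2^(-2(R_1 + ... + R_k)). The layer rates below are illustrative.

```python
import numpy as np

sigma2 = 1.0
layer_rates = np.array([0.5, 0.5, 1.0])            # bits/sample per layer (illustrative)
D = sigma2 * 2.0 ** (-2.0 * np.cumsum(layer_rates))
for k, d in enumerate(D, 1):
    print(f"layers 1..{k}: distortion {d:.4f}")
```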

    Analog-to-Digital Compression: A New Paradigm for Converting Signals to Bits

    Processing, storing and communicating information that originates as an analog signal involves conversion of this information to bits. This conversion can be described by the combined effect of sampling and quantization, as illustrated in Fig. 1. The digital representation is achieved by first sampling the analog signal so as to represent it by a set of discrete-time samples and then quantizing these samples to a finite number of bits. Traditionally, these two operations are considered separately. The sampler is designed to minimize information loss due to sampling based on characteristics of the continuous-time input. The quantizer is designed to represent the samples as accurately as possible, subject to a constraint on the number of bits that can be used in the representation. The goal of this article is to revisit this paradigm by illuminating the dependency between these two operations. In particular, we explore the requirements on the sampling system subject to constraints on the available number of bits for storing, communicating or processing the analog information.
    Comment: to appear in "Signal Processing Magazine"
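
    A toy end-to-end sketch of the sample-then-quantize pipeline the article revisits: a multitone stand-in for the analog signal is sampled at rate fs and each sample is quantized with b bits. The signal model, rates, and uniform quantizer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, b, T = 40.0, 4, 4.0                            # sampling rate, bits/sample, duration
t = np.arange(0.0, T, 1.0 / fs)
x = sum(rng.normal() * np.cos(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
        for f in (1.0, 3.0, 7.0))                  # simple multitone "analog" signal

lo, hi = x.min(), x.max()
levels = 2 ** b
q = np.clip(np.round((x - lo) / (hi - lo) * (levels - 1)), 0, levels - 1)
xhat = q / (levels - 1) * (hi - lo) + lo           # uniform dequantization
print(f"total rate {fs * b:.0f} bits/sec, quantization MSE {np.mean((x - xhat)**2):.5f}")
```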