
    Improved Upper Bounds to the Causal Quadratic Rate-Distortion Function for Gaussian Stationary Sources

    We improve the existing achievable rate regions for causal and for zero-delay source coding of stationary Gaussian sources under an average mean squared error (MSE) distortion measure. To begin with, we find a closed-form expression for the information-theoretic causal rate-distortion function (RDF) under such a distortion measure, denoted by $R_c^{it}(D)$, for first-order Gauss-Markov processes. $R_c^{it}(D)$ is a lower bound to the optimal performance theoretically attainable (OPTA) by any causal source code, namely $R_c^{op}(D)$. We show that, for Gaussian sources, the latter can also be upper bounded as $R_c^{op}(D) \leq R_c^{it}(D) + 0.5\log_2(2\pi e)$ bits/sample. In order to analyze $R_c^{it}(D)$ for arbitrary zero-mean Gaussian stationary sources, we introduce $\bar{R}_c^{it}(D)$, the information-theoretic causal RDF when the reconstruction error is jointly stationary with the source. Based upon $\bar{R}_c^{it}(D)$, we derive three closed-form upper bounds to the additive rate loss defined as $\bar{R}_c^{it}(D) - R(D)$, where $R(D)$ denotes Shannon's RDF. Two of these bounds are strictly smaller than 0.5 bits/sample at all rates. These bounds differ from one another in their tightness and ease of evaluation; the tighter the bound, the more involved its evaluation. We then show that, for any source spectral density and any positive distortion $D \leq \sigma_x^2$, $\bar{R}_c^{it}(D)$ can be realized by an AWGN channel surrounded by a unique set of causal pre-, post-, and feedback filters. We show that finding such filters constitutes a convex optimization problem. In order to solve the latter, we propose an iterative optimization procedure that yields the optimal filters and is guaranteed to converge to $\bar{R}_c^{it}(D)$. Finally, by establishing a connection to feedback quantization, we design a causal and a zero-delay coding scheme which, for Gaussian sources, achieves...
    Comment: 47 pages, revised version submitted to IEEE Trans. Information Theory
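
    Since two of the rate-loss bounds place $\bar{R}_c^{it}(D)$ strictly within 0.5 bits/sample of Shannon's RDF, $R(D)$ itself, computed by standard reverse water-filling on the source spectrum, gives a numerical bracket $[R(D),\, R(D)+0.5)$ for the stationary causal RDF. The sketch below is an illustrative baseline only, not the paper's procedure; the AR(1) parameters and function names are hypothetical.

```python
import numpy as np

def ar1_psd(w, a=0.9, var_w=1.0):
    """PSD of the AR(1) source x[k] = a*x[k-1] + w[k] (hypothetical params)."""
    return var_w / np.abs(1.0 - a * np.exp(-1j * w)) ** 2

def shannon_rdf(target_D, psd=ar1_psd, n=4096, tol=1e-9):
    """Shannon's R(D) in bits/sample via reverse water-filling: bisect on the
    water level theta so that the average of min(theta, S(w)) equals D."""
    w = np.linspace(-np.pi, np.pi, n, endpoint=False)
    S = psd(w)
    assert target_D <= S.mean(), "D must not exceed the source variance"
    lo, hi = 0.0, S.max()
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        D = np.mean(np.minimum(theta, S))  # distortion at this water level
        lo, hi = (theta, hi) if D < target_D else (lo, theta)
    return np.mean(0.5 * np.log2(np.maximum(S / theta, 1.0)))

D = 0.3
R = shannon_rdf(D)
print(f"Shannon R(D) ~ {R:.3f} bits/sample")
# Per the paper's rate-loss bounds, R(D) <= \bar{R}_c^{it}(D) < R(D) + 0.5:
print(f"causal RDF bracket: [{R:.3f}, {R + 0.5:.3f}] bits/sample")
```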

    Comments on "A Framework for Control System Design Subject to Average Data-Rate Constraints"

    Theorem 4.1 in the 2011 paper "A Framework for Control System Design Subject to Average Data-Rate Constraints" allows one to lower bound average operational data rates in feedback loops (including the situation in which the encoder and decoder have side information). Unfortunately, its proof is invalid. In this note we first state the theorem and explain why its proof is flawed, and then provide a correct proof under weaker assumptions.
    Comment: Submitted to IEEE Transactions on Automatic Control

    The Quadratic Gaussian Rate-Distortion Function for Source Uncorrelated Distortions

    We characterize the rate-distortion function for zero-mean stationary Gaussian sources under the MSE fidelity criterion and subject to the additional constraint that the distortion is uncorrelated with the input. The solution is given by two equations coupled through a single scalar parameter. This has a structure similar to the well-known water-filling solution obtained without the uncorrelated-distortion restriction. Our results fully characterize the unique statistics of the optimal distortion. We also show that, for all positive distortions, the minimum achievable rate subject to the uncorrelation constraint is strictly larger than that given by the unconstrained rate-distortion function. This gap increases with the distortion and tends to infinity and zero, respectively, as the distortion tends to infinity and zero.
    Comment: Revised version, to be presented at the Data Compression Conference 2008
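
    For orientation, the well-known unconstrained water-filling solution that the abstract compares against has exactly this two-equations-one-parameter structure (a standard result, stated here only for reference):

$$
D(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \min\{\theta,\, S_x(e^{j\omega})\}\, d\omega,
\qquad
R(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \max\Bigl\{0,\, \tfrac{1}{2}\log_2\frac{S_x(e^{j\omega})}{\theta}\Bigr\}\, d\omega,
$$

    where $S_x$ is the source power spectral density and the scalar water level $\theta > 0$ couples the two equations; the paper's constrained solution has an analogous pair coupled through its own scalar parameter.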

    The Entropy Gain of Linear Systems and Some of Its Implications

    We study the increase in per-sample differential entropy rate of random sequences and processes after being passed through a non-minimum-phase (NMP) discrete-time, linear time-invariant (LTI) filter G. For LTI discrete-time filters and random processes, it has long been established by Theorem 14 in Shannon's seminal paper that this entropy gain, $\mathcal{G}(G)$, equals the integral of $\log|G(e^{j\omega})|$. In this note, we first show that Shannon's Theorem 14 does not hold in general. Then, we prove that, when comparing the input differential entropy to that of the entire (longer) output of G, the entropy gain equals $\mathcal{G}(G)$. We show that the entropy gain between equal-length input and output sequences is upper bounded by $\mathcal{G}(G)$ and arises if and only if there exists an output additive disturbance with finite differential entropy (no matter how small) or a random initial state. Unlike what happens with linear maps, the entropy gain in this case depends on the distribution of all the signals involved. We illustrate some of the consequences of these results by presenting their implications in three different problems. Specifically: conditions for equality in an information inequality of importance in networked control problems; extending to a much broader class of sources the existing results on the rate-distortion function for non-stationary Gaussian sources; and an observation on the capacity of auto-regressive Gaussian channels with feedback.
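
    For intuition on the quantity $\frac{1}{2\pi}\int_{-\pi}^{\pi}\log|G(e^{j\omega})|\,d\omega$: by Jensen's formula (a standard result, not specific to this paper), for an FIR filter with leading tap 1 this integral equals the sum of the log-magnitudes of the zeros lying outside the unit circle, which is why only NMP filters can produce a positive gain. A minimal numerical check, assuming a hypothetical example filter $G(z) = 1 - 2z^{-1}$:

```python
import numpy as np

# Hypothetical NMP example filter G(z) = 1 - 2*z^{-1}: zero at z = 2,
# outside the unit circle, so a positive entropy-gain integral is expected.
b = [1.0, -2.0]                          # taps of G, leading tap 1

# (1/2pi) * integral over [-pi, pi) of ln|G(e^{jw})| dw (midpoint rule).
w = np.linspace(-np.pi, np.pi, 1 << 16, endpoint=False)
Gw = b[0] + b[1] * np.exp(-1j * w)       # frequency response G(e^{jw})
integral = np.mean(np.log(np.abs(Gw)))

# Jensen's formula: for a filter with leading tap 1, the same integral
# equals the sum of ln|z_k| over the zeros z_k outside the unit circle.
zeros = np.roots(b)                      # roots of z - 2, i.e. z = 2
jensen = np.log(np.abs(zeros[np.abs(zeros) > 1.0])).sum()

print(f"numerical integral: {integral:.6f} nats")   # both ~ ln 2 = 0.693147
print(f"Jensen's formula  : {jensen:.6f} nats")
```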

    Approaching the capacity of two-pair bidirectional Gaussian relay networks
