
    Lossy Source Coding with Gaussian or Erased Side-Information

    In this paper we find properties that are shared between two seemingly unrelated lossy source coding setups with side information. The first setup is when the source and side information are jointly Gaussian and the distortion measure is quadratic. The second setup is when the side information is an erased version of the source. We begin with the observation that in both these cases the Wyner-Ziv and conditional rate-distortion functions are equal. We further find that there is a continuum of optimal strategies for the conditional rate-distortion problem in both these setups. Next, we consider the case when there are two decoders with access to different side-information sources. For the case when the encoder has access to the side information, we establish bounds on the rate-distortion function and a sufficient condition for tightness. Under this condition, we find a characterization of the rate-distortion function for physically degraded side information. This characterization holds for both the Gaussian and erasure setups.
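
    As a point of reference for the equality noted above (a standard closed form, not restated in the abstract): for a jointly Gaussian source X and side information Y under quadratic distortion, the Wyner-Ziv and conditional rate-distortion functions coincide at

        $R_{WZ}(d) = R_{X|Y}(d) = \max\left\{\tfrac{1}{2}\log\frac{\sigma^2_{X|Y}}{d},\, 0\right\}$,

    where $\sigma^2_{X|Y}$ is the conditional variance of X given Y; this is the classical no-rate-loss example for Wyner-Ziv coding.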

    Source Coding with Fixed Lag Side Information

    We consider source coding with fixed-lag side information at the decoder. We focus on the special case of perfect side information with unit lag, corresponding to source coding with feedforward (the dual of channel coding with feedback) introduced by Pradhan. We use this duality to develop a linear-complexity algorithm which achieves the rate-distortion bound for any memoryless finite-alphabet source and distortion measure. Comment: 10 pages, 3 figures

    Nonasymptotic noisy lossy source coding

    This paper shows new general nonasymptotic achievability and converse bounds and performs their dispersion analysis for the lossy compression problem in which the compressor observes the source through a noisy channel. While this problem is asymptotically equivalent to a noiseless lossy source coding problem with a modified distortion function, nonasymptotically there is a noticeable gap in how fast their minimum achievable coding rates approach the common rate-distortion function, as evidenced both by the refined asymptotic analysis (dispersion) and the numerical results. The size of the gap between the dispersions of the noisy problem and the asymptotically equivalent noiseless problem depends on the stochastic variability of the channel through which the compressor observes the source. Comment: IEEE Transactions on Information Theory, 201
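
    To make the reduction concrete, here is a minimal sketch (the setup is an assumption for illustration, not taken from the paper) of the modified distortion function for a binary source observed through a binary symmetric channel: the surrogate distortion is the posterior expectation $\tilde d(z, \hat x) = E[d(X, \hat x) \mid Z = z]$.

        import numpy as np

        # Assumed toy setup: X ~ Bernoulli(q) observed through a BSC(delta) as Z,
        # Hamming distortion d(x, xhat). The asymptotically equivalent noiseless
        # problem compresses Z under d_tilde(z, xhat) = E[d(X, xhat) | Z = z].
        q, delta = 0.4, 0.1

        def p_x1_given_z(z):
            # Posterior P(X = 1 | Z = z) by Bayes' rule.
            prior = np.array([1.0 - q, q])  # P(X = 0), P(X = 1)
            lik = np.array([delta, 1 - delta]) if z == 1 else np.array([1 - delta, delta])
            post = prior * lik
            return post[1] / post.sum()

        # Modified distortion table: rows indexed by z, columns by xhat.
        d_tilde = np.zeros((2, 2))
        for z in (0, 1):
            p1 = p_x1_given_z(z)
            d_tilde[z, 0] = p1          # E[1{X != 0} | Z = z]
            d_tilde[z, 1] = 1.0 - p1    # E[1{X != 1} | Z = z]
        print(d_tilde)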

    A Universal Scheme for Wyner–Ziv Coding of Discrete Sources

    We consider the Wyner–Ziv (WZ) problem of lossy compression where the decompressor observes a noisy version of the source, whose statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel–Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm that takes advantage of side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding. The effectiveness of our approach is illustrated with experiments on binary images and English text, using a low-complexity algorithm motivated by our class of universally optimal WZ codes.
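
    The sliding-window idea can be illustrated with a toy context-counting denoiser (a deliberate simplification, assuming a binary alphabet and a majority-vote rule; the paper's scheme additionally uses LZ compression, the channel statistics, and side information):

        from collections import Counter, defaultdict

        # Toy two-sided context denoiser in the spirit of DUDE: gather counts of
        # each symbol within its length-2k context in a first pass, then replace
        # every symbol by the majority symbol seen in the same context.
        def context_majority_denoise(z, k=2):
            n = len(z)
            counts = defaultdict(Counter)
            for i in range(k, n - k):
                ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
                counts[ctx][z[i]] += 1
            out = list(z)
            for i in range(k, n - k):
                ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
                out[i] = counts[ctx].most_common(1)[0][0]
            return out

        # An isolated flip inside a run of zeros is voted back to zero.
        print(context_majority_denoise([0]*10 + [1] + [0]*10))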

    On privacy amplification, lossy compression, and their duality to channel coding

    We examine the task of privacy amplification from information-theoretic and coding-theoretic points of view. In the former, we give a one-shot characterization of the optimal rate of privacy amplification against classical adversaries in terms of the optimal type-II error in asymmetric hypothesis testing. This formulation can be easily computed to give finite-blocklength bounds and turns out to be equivalent to smooth min-entropy bounds by Renner and Wolf [Asiacrypt 2005] and Watanabe and Hayashi [ISIT 2013], as well as a bound in terms of the $E_\gamma$ divergence by Yang, Schaefer, and Poor [arXiv:1706.03866 [cs.IT]]. In the latter, we show that protocols for privacy amplification based on linear codes can be easily repurposed for channel simulation. Combined with known relations between channel simulation and lossy source coding, this implies that privacy amplification can be understood as a basic primitive for both channel simulation and lossy compression. Applied to symmetric channels or lossy compression settings, our construction leads to protocols of optimal rate in the asymptotic i.i.d. limit. Finally, appealing to the notion of channel duality recently detailed by us in [IEEE Trans. Info. Theory 64, 577 (2018)], we show that linear error-correcting codes for symmetric channels with quantum output can be transformed into linear lossy source coding schemes for classical variables arising from the dual channel. This explains a "curious duality" in these problems for the (self-dual) erasure channel observed by Martinian and Yedidia [Allerton 2003; arXiv:cs/0408008] and partly anticipates recent results on optimal lossy compression by polar and low-density generator matrix codes. Comment: v3: updated to include equivalence of the converse bound with smooth entropy formulations. v2: updated to include comparison with the one-shot bounds of arXiv:1706.03866. v1: 11 pages, 4 figures
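
    A minimal sketch of the linear-hashing primitive that underlies such protocols (illustrative parameters; this is the generic universal-hashing construction, not the specific codes of the paper):

        import numpy as np

        rng = np.random.default_rng(0)

        # Privacy amplification by a random linear map over GF(2): compress an
        # n-bit partially secret string x down to m nearly uniform key bits.
        # Random binary matrices form a universal hash family, and it is this
        # linearity that lets such a protocol double as a channel-simulation code.
        n, m = 16, 4
        x = rng.integers(0, 2, size=n)        # raw, partially secret bits
        M = rng.integers(0, 2, size=(m, n))   # public random seed
        key = (M @ x) % 2                     # extracted key
        print(key)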

    Secure Lossy Source Coding with Side Information at the Decoders

    This paper investigates the problem of secure lossy source coding in the presence of an eavesdropper, with arbitrarily correlated side information at the legitimate decoder (referred to as Bob) and at the eavesdropper (referred to as Eve). The scenario consists of an encoder that wishes to compress a source so as to satisfy the desired requirements on (i) the distortion level at Bob and (ii) the equivocation rate at Eve. It is assumed that the decoders have access to correlated sources as side information. This problem can be seen as a generalization of the well-known Wyner-Ziv problem that takes the security requirements into account. A complete characterization of the rate-distortion-equivocation region for the case of arbitrarily correlated side information at the decoders is derived. Several special cases of interest and an application example to secure lossy source coding of binary sources in the presence of binary and ternary side information are also considered. It is shown that the statistical differences between the side information at the decoders and the presence of non-zero distortion at the legitimate decoder can be useful in terms of secrecy. Applications of these results arise in a variety of distributed sensor network scenarios. Comment: 7 pages, 5 figures, 1 table, to be presented at Allerton 201
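
    As a back-of-the-envelope illustration of why a statistical advantage at Bob helps (an assumed toy setup, not the paper's example): if an equiprobable binary source is seen by Bob and Eve through binary symmetric channels of different quality, Eve's residual uncertainty about the source is strictly larger.

        import numpy as np

        def h2(p):
            # Binary entropy in bits.
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

        # For an equiprobable input X to a BSC(delta), H(X | Y) = h2(delta).
        delta_bob, delta_eve = 0.05, 0.25
        print("H(X|Y_Bob) =", h2(delta_bob))  # ~0.29 bits
        print("H(X|Y_Eve) =", h2(delta_eve))  # ~0.81 bits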

    Lossy joint source-channel coding in the finite blocklength regime

    This paper finds new tight finite-blocklength bounds for the best achievable lossy joint source-channel code rate, and demonstrates that joint source-channel code design brings considerable performance advantage over a separate one in the non-asymptotic regime. A joint source-channel code maps a block of k source symbols onto a length-n channel codeword, and the fidelity of reproduction at the receiver end is measured by the probability ϔ that the distortion exceeds a given threshold d. For memoryless sources and channels, it is demonstrated that the parameters of the best joint source-channel code must satisfy $nC - kR(d) \approx \sqrt{nV + k\mathcal{V}(d)}\, Q^{-1}(\epsilon)$, where C and V are the channel capacity and channel dispersion, respectively; R(d) and $\mathcal{V}(d)$ are the source rate-distortion and rate-dispersion functions; and Q is the standard Gaussian complementary cdf. Symbol-by-symbol (uncoded) transmission is known to achieve the Shannon limit when the source and channel satisfy a certain probabilistic matching condition. In this paper we show that even when this condition is not satisfied, symbol-by-symbol transmission is, in some cases, the best known strategy in the non-asymptotic regime.
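
    The displayed approximation can be used directly for back-of-the-envelope code sizing. Below is a sketch that solves it for the largest k given n (all numeric parameter values are placeholders chosen for illustration, not figures from the paper):

        import numpy as np
        from scipy.stats import norm

        # Largest k with n*C - k*R(d) = sqrt(n*V + k*Vd) * Q^{-1}(eps),
        # found by bisection; the residual f decreases in k.
        def max_source_symbols(n, eps, C, V, Rd, Vd):
            qinv = norm.isf(eps)  # Q^{-1}(eps)
            f = lambda k: n * C - k * Rd - np.sqrt(n * V + k * Vd) * qinv
            lo, hi = 0.0, n * C / Rd
            for _ in range(60):
                mid = (lo + hi) / 2.0
                lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
            return lo

        # Placeholder values for C, V, R(d), V(d):
        print(max_source_symbols(n=1000, eps=1e-3, C=0.5, V=0.1, Rd=0.3, Vd=0.05))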

    Fixed-length lossy compression in the finite blocklength regime

    This paper studies the minimum achievable source coding rate as a function of blocklength n and probability ϔ that the distortion exceeds a given level d. Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum achievable rate is shown to be closely approximated by $R(d) + \sqrt{\frac{V(d)}{n}}\, Q^{-1}(\epsilon)$, where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and $Q^{-1}(\epsilon)$ is the inverse of the standard Gaussian complementary cdf.
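
    The approximation is easy to evaluate numerically; the sketch below shows the rate penalty shrinking as n grows (the values of R(d), V(d), and ϔ are placeholders for illustration, not from the paper):

        import numpy as np
        from scipy.stats import norm

        # R(n, d, eps) ~ R(d) + sqrt(V(d)/n) * Q^{-1}(eps)
        def finite_blocklength_rate(n, Rd, Vd, eps):
            return Rd + np.sqrt(Vd / n) * norm.isf(eps)

        Rd, Vd, eps = 0.5, 0.25, 1e-2   # placeholder source parameters
        for n in (100, 1000, 10000):
            print(n, finite_blocklength_rate(n, Rd, Vd, eps))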
    • 
