
    Polar Codes are Optimal for Lossy Source Coding

    We consider lossy source compression of a binary symmetric source using polar codes and the low-complexity successive encoding algorithm. It was recently shown by Arikan that polar codes achieve the capacity of arbitrary symmetric binary-input discrete memoryless channels under a successive decoding strategy. We show the equivalent result for lossy source compression, i.e., we show that this combination achieves the rate-distortion bound for a binary symmetric source. We further show the optimality of polar codes for various problems, including the binary Wyner-Ziv and the binary Gelfand-Pinsker problems.
    Comment: 15 pages, submitted to Transactions on Information Theory
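
    For intuition, here is a minimal Python sketch, not taken from the paper: it evaluates the rate-distortion bound R(D) = 1 - h2(D) that the scheme achieves for a Bernoulli(1/2) source under Hamming distortion, and applies the polar transform x = u F^{(tensor) n} with F = [[1,0],[1,1]], which underlies both encoding and decoding. The successive cancellation encoder itself requires per-bit likelihood computations and is omitted; all function names are illustrative.

    ```python
    import numpy as np

    def h2(p):
        """Binary entropy in bits; h2(0) = h2(1) = 0."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def rate_distortion_bss(D):
        """R(D) = 1 - h2(D) for a Bernoulli(1/2) source, Hamming distortion, 0 <= D <= 1/2."""
        return 1.0 - h2(D)

    def polar_transform(u):
        """x = u F^{(tensor) n} over GF(2), where F = [[1,0],[1,1]] and len(u) = 2**n."""
        x = np.array(u, dtype=int) % 2
        N, step = len(x), 1
        while step < N:
            for i in range(0, N, 2 * step):
                # butterfly: (a, b) -> (a XOR b, b)
                x[i:i + step] = (x[i:i + step] + x[i + step:i + 2 * step]) % 2
            step *= 2
        return x

    print(rate_distortion_bss(0.11))        # ~0.5 bits per source bit
    print(polar_transform([1, 0, 1, 1]))    # [1 1 0 1]
    ```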

    Polar codes and polar lattices for the Heegard-Berger problem

    Explicit coding schemes are proposed to achieve the rate-distortion function of the Heegard-Berger problem using polar codes. Specifically, a nested polar code construction is employed to achieve the rate-distortion function for doubly symmetric binary sources when the side information may be absent. The nested structure contains two optimal polar codes, one for lossy source coding and one for channel coding. Moreover, a similar nested polar lattice construction is employed when the source and the side information are jointly Gaussian. The proposed polar lattice is constructed by nesting a quantization polar lattice and a capacity-achieving polar lattice for the additive white Gaussian noise channel.
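
    The nesting can be made concrete with index sets: a polar code is fixed by its frozen set, and requiring the channel code's frozen set to contain the quantization code's frozen set makes every channel codeword a valid quantization codeword. A small Python sketch with made-up frozen sets (the true sets come from code construction, e.g., density evolution):

    ```python
    import itertools
    import numpy as np

    def polar_transform(u):
        """x = u F^{(tensor) n} over GF(2); len(u) must be a power of two."""
        x = np.array(u, dtype=int) % 2
        N, step = len(x), 1
        while step < N:
            for i in range(0, N, 2 * step):
                x[i:i + step] = (x[i:i + step] + x[i + step:i + 2 * step]) % 2
            step *= 2
        return x

    N = 8
    frozen_q = {0, 1, 2}          # hypothetical frozen set of the quantization code
    frozen_c = {0, 1, 2, 4}       # hypothetical frozen set of the channel code
    assert frozen_q <= frozen_c   # nesting condition: larger frozen set => subcode

    def codebook(frozen):
        """All codewords obtained by freezing the given indices to zero."""
        free = [i for i in range(N) if i not in frozen]
        words = set()
        for bits in itertools.product([0, 1], repeat=len(free)):
            u = np.zeros(N, dtype=int)
            u[free] = bits
            words.add(tuple(polar_transform(u)))
        return words

    C_q, C_c = codebook(frozen_q), codebook(frozen_c)
    print(len(C_q), len(C_c), C_c <= C_q)   # 32 16 True
    ```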

    Lossy Compression with Privacy Constraints: Optimality of Polar Codes

    A lossy source coding problem with a privacy constraint is studied, in which two correlated discrete sources X and Y are compressed into a reconstruction X̂ with some prescribed distortion D. In addition, a privacy constraint is specified as the equivocation between the lossy reconstruction X̂ and Y. This models the situation where a certain amount of source information from one user is provided as utility (given by the fidelity of its reconstruction) to another user or the public, while some other correlated part of the source information Y must be kept private. In this work, we show that polar codes are able, possibly with the aid of time sharing, to achieve any point in the optimal rate-distortion-equivocation region identified by Yamamoto, thus providing a constructive scheme that obtains the optimal tradeoff between utility and privacy in this framework.
    Comment: Submitted for publication
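
    Time sharing here is the usual device: run one code on a fraction lam of the block and another code on the rest, which achieves any convex combination of two achievable (rate, distortion, equivocation) triples. A two-line sketch with hypothetical corner points:

    ```python
    def time_share(p1, p2, lam):
        """Convex combination of two achievable (rate, distortion, equivocation) points."""
        return tuple(lam * a + (1 - lam) * b for a, b in zip(p1, p2))

    # hypothetical corner points of the optimal region
    print(time_share((1.0, 0.05, 0.2), (0.4, 0.25, 0.6), lam=0.5))
    # -> (0.7, 0.15, 0.4)
    ```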

    On privacy amplification, lossy compression, and their duality to channel coding

    We examine the task of privacy amplification from information-theoretic and coding-theoretic points of view. In the former, we give a one-shot characterization of the optimal rate of privacy amplification against classical adversaries in terms of the optimal type-II error in asymmetric hypothesis testing. This formulation can be easily computed to give finite-blocklength bounds and turns out to be equivalent to smooth min-entropy bounds by Renner and Wolf [Asiacrypt 2005] and Watanabe and Hayashi [ISIT 2013], as well as a bound in terms of the E_γ divergence by Yang, Schaefer, and Poor [arXiv:1706.03866 [cs.IT]]. In the latter, we show that protocols for privacy amplification based on linear codes can be easily repurposed for channel simulation. Combined with known relations between channel simulation and lossy source coding, this implies that privacy amplification can be understood as a basic primitive for both channel simulation and lossy compression. Applied to symmetric channels or lossy compression settings, our construction leads to protocols of optimal rate in the asymptotic i.i.d. limit. Finally, appealing to the notion of channel duality recently detailed by us in [IEEE Trans. Info. Theory 64, 577 (2018)], we show that linear error-correcting codes for symmetric channels with quantum output can be transformed into linear lossy source coding schemes for classical variables arising from the dual channel. This explains a "curious duality" in these problems for the (self-dual) erasure channel observed by Martinian and Yedidia [Allerton 2003; arXiv:cs/0408008] and partly anticipates recent results on optimal lossy compression by polar and low-density generator matrix codes.
    Comment: v3: updated to include equivalence of the converse bound with smooth entropy formulations. v2: updated to include comparison with the one-shot bounds of arXiv:1706.03866. v1: 11 pages, 4 figures
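
    As a concrete instance of the linear-hashing primitive behind such protocols (an illustration, not the paper's specific construction): privacy amplification is commonly realized by applying a seeded two-universal hash, e.g., a random binary Toeplitz matrix, to the raw partially secret string.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toeplitz_hash(x, m, seed_bits):
        """Extract m near-uniform bits from x (length n) via a binary Toeplitz matrix.

        The matrix is defined by its first column and first row, read off from
        seed_bits (length n + m - 1); Toeplitz matrices form a two-universal
        hash family, which is what the leftover hash lemma requires.
        """
        n = len(x)
        assert len(seed_bits) == n + m - 1
        T = np.empty((m, n), dtype=int)
        for i in range(m):
            for j in range(n):
                T[i, j] = seed_bits[i - j + n - 1]   # constant along diagonals
        return (T @ np.asarray(x)) % 2

    n, m = 16, 4
    x = rng.integers(0, 2, n)               # raw string, partially known to the adversary
    seed = rng.integers(0, 2, n + m - 1)    # public uniform seed
    print(toeplitz_hash(x, m, seed))
    ```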

    Lossy Compression of Exponential and Laplacian Sources using Expansion Coding

    A general method of source coding over expansion is proposed in this paper, which reduces the problem of compressing an analog (continuous-valued) source to a set of much simpler problems: compressing discrete sources. Specifically, the focus is on lossy compression of exponential and Laplacian sources, each of which is expanded over a finite alphabet prior to being quantized. Due to the decomposability property of such sources, the random variables resulting from the expansion are independent and discrete. Thus, each of the expanded levels corresponds to an independent discrete source coding problem, and the original problem is reduced to coding over these parallel sources under a total distortion constraint. Any feasible solution to the resulting optimization problem is an achievable rate-distortion pair of the original continuous-valued source compression problem. Although finding the solution to this optimization problem at every distortion level is hard, we show that our expansion coding scheme provides a good solution in the low-distortion regime. Further, by adopting low-complexity codes designed for discrete source coding, the total coding complexity can be kept tractable in practice.
    Comment: 8 pages, 3 figures
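
    The decomposability property can be sketched for an exponential source: the bits in the binary expansion of an Exp(lam) variable are mutually independent, with the bit at level l distributed Bernoulli(1 / (1 + e^{lam * 2^l})). The Python check below (a sketch under that assumption; the truncation levels are arbitrary) rebuilds the source from independent bit levels and compares the empirical mean against the true mean 1/lam.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def expand_exponential(lam, levels, n):
        """Draw n approximate samples of Exp(lam) by summing independent bit levels.

        Assumed decomposition: bit level l contributes 2**l * Bernoulli(p_l) with
        p_l = 1 / (1 + exp(lam * 2**l)), and the levels are mutually independent.
        """
        x = np.zeros(n)
        for l in levels:
            t = lam * 2.0 ** l
            p = np.exp(-t) / (1.0 + np.exp(-t))   # stable form of 1/(1+e^t) for t > 0
            x += (2.0 ** l) * rng.binomial(1, p, size=n)
        return x

    lam = 1.0
    x = expand_exponential(lam, levels=range(-20, 20), n=100_000)
    print(x.mean(), 1.0 / lam)   # empirical mean vs the true mean 1/lam
    ```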

    Sparse Regression Codes for Multi-terminal Source and Channel Coding

    We study a new class of codes for Gaussian multi-terminal source and channel coding. These codes are designed using the statistical framework of high-dimensional linear regression and are called Sparse Superposition or Sparse Regression codes. Codewords are linear combinations of subsets of columns of a design matrix. These codes were recently introduced by Barron and Joseph and shown to achieve the channel capacity of AWGN channels with computationally feasible decoding. They have also recently been shown to achieve the optimal rate-distortion function for Gaussian sources. In this paper, we demonstrate how to implement random binning and superposition coding using sparse regression codes. In particular, with minimum-distance encoding/decoding it is shown that sparse regression codes attain the optimal information-theoretic limits for a variety of multi-terminal source and channel coding problems.
    Comment: 9 pages, appeared in the Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing, 2012
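
    The codeword structure is easy to show directly: the design matrix A (dimension n x ML) is split into L sections of M columns each, the message selects one column per section, and the codeword is the scaled sum of the selected columns. A minimal Python sketch with flat power allocation; all parameter values are illustrative, not the paper's:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n, L, M, P = 64, 8, 16, 1.0   # block length, sections, columns per section, power
    A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))   # i.i.d. N(0, 1/n) design matrix

    def sparc_codeword(message):
        """Encode one column index per section as x = A @ beta.

        beta has a single non-zero entry per section, of value sqrt(n*P/L),
        so the per-symbol codeword power is approximately P (flat allocation).
        """
        beta = np.zeros(L * M)
        for sec, idx in enumerate(message):
            beta[sec * M + idx] = np.sqrt(n * P / L)
        return A @ beta

    msg = rng.integers(0, M, size=L)   # log2(M) bits per section: rate = L*log2(M)/n = 0.5
    x = sparc_codeword(msg)
    print(x.shape, x.var())            # (64,) and a value close to P
    ```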