3,671 research outputs found

    Reduced-Dimension Linear Transform Coding of Correlated Signals in Networks

    A model, called the linear transform network (LTN), is proposed to analyze the compression and estimation of correlated signals transmitted over directed acyclic graphs (DAGs). An LTN is a DAG network with multiple source and receiver nodes. Source nodes transmit subspace projections of random correlated signals by applying reduced-dimension linear transforms. The subspace projections are linearly processed by multiple relays and routed to intended receivers. Each receiver applies a linear estimator to approximate a subset of the sources with minimum mean squared error (MSE) distortion. The model is extended to include noisy networks with power constraints on transmitters. A key task is to compute all local compression matrices and linear estimators in the network to minimize end-to-end distortion. The non-convex problem is solved iteratively within an optimization framework using constrained quadratic programs (QPs). The proposed algorithm recovers as special cases the regular and distributed Karhunen-Loève transforms (KLTs). Cut-set lower bounds on the distortion region of multi-source, multi-receiver networks are given for linear coding, based on convex relaxations. Cut-set lower bounds are also given for any coding strategy, based on information theory. The distortion region and compression-estimation tradeoffs are illustrated for different communication demands (e.g., multiple unicast) and graph structures. Comment: 33 pages, 7 figures. To appear in IEEE Transactions on Signal Processing.
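    As context for the KLT special case that the algorithm recovers, the following is a minimal sketch of reduced-dimension linear transform coding for a single source-receiver pair: project a correlated source onto the top-k eigenvectors of its covariance and reconstruct with a linear MMSE estimator. This is the classical KLT, not the paper's iterative QP procedure for general networks; all dimensions and names are illustrative.

```python
import numpy as np

# Minimal sketch of reduced-dimension linear transform coding for one
# source/receiver pair (the classical KLT, which the paper recovers as a
# special case), not the paper's iterative QP procedure for networks.
rng = np.random.default_rng(0)

n, k = 8, 3                      # source dimension, compressed dimension
A = rng.standard_normal((n, n))
Sigma = A @ A.T                  # covariance of the correlated source

# KLT: project onto the k principal eigenvectors of the covariance.
eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
T = eigvecs[:, -k:].T                      # k x n reduced-dimension transform

# Linear MMSE estimator of x from y = T x:  W = E[x y^T] (E[y y^T])^{-1}.
Sxy = Sigma @ T.T
Syy = T @ Sigma @ T.T
W = Sxy @ np.linalg.inv(Syy)               # n x k receiver estimator

# End-to-end MSE: trace(Sigma) - trace(W Syy W^T).
mse = np.trace(Sigma) - np.trace(W @ Syy @ W.T)
print(mse, eigvals[:-k].sum())
```

    For the KLT, the end-to-end MSE equals the sum of the n-k discarded eigenvalues, which the final print statement confirms numerically.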

    Nonasymptotic noisy lossy source coding

    This paper shows new general nonasymptotic achievability and converse bounds, and performs their dispersion analysis, for the lossy compression problem in which the compressor observes the source through a noisy channel. While this problem is asymptotically equivalent to a noiseless lossy source coding problem with a modified distortion function, nonasymptotically there is a noticeable gap in how fast their minimum achievable coding rates approach the common rate-distortion function, as evidenced both by the refined asymptotic analysis (dispersion) and the numerical results. The size of the gap between the dispersions of the noisy problem and the asymptotically equivalent noiseless problem depends on the stochastic variability of the channel through which the compressor observes the source. Comment: IEEE Transactions on Information Theory, 201
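    The modified distortion function referred to above is not spelled out in the abstract; the classical construction (an assumption about the setup here, standard in noisy source coding) averages the original distortion over the observation channel:

```latex
% Source X is observed by the compressor only through a noisy channel, as Z.
% The noisy problem is asymptotically equivalent to noiseless lossy
% compression of Z under the surrogate distortion
\[
  \bar{d}(z, \hat{x}) \;=\; \mathbb{E}\bigl[\, d(X, \hat{x}) \mid Z = z \,\bigr],
\]
% so both problems share the same rate-distortion function.
```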

    On privacy amplification, lossy compression, and their duality to channel coding

    We examine the task of privacy amplification from information-theoretic and coding-theoretic points of view. In the former, we give a one-shot characterization of the optimal rate of privacy amplification against classical adversaries in terms of the optimal type-II error in asymmetric hypothesis testing. This formulation can be easily computed to give finite-blocklength bounds and turns out to be equivalent to smooth min-entropy bounds by Renner and Wolf [Asiacrypt 2005] and Watanabe and Hayashi [ISIT 2013], as well as a bound in terms of the $E_\gamma$ divergence by Yang, Schaefer, and Poor [arXiv:1706.03866 [cs.IT]]. In the latter, we show that protocols for privacy amplification based on linear codes can be easily repurposed for channel simulation. Combined with known relations between channel simulation and lossy source coding, this implies that privacy amplification can be understood as a basic primitive for both channel simulation and lossy compression. Applied to symmetric channels or lossy compression settings, our construction leads to protocols of optimal rate in the asymptotic i.i.d. limit. Finally, appealing to the notion of channel duality recently detailed by us in [IEEE Trans. Info. Theory 64, 577 (2018)], we show that linear error-correcting codes for symmetric channels with quantum output can be transformed into linear lossy source coding schemes for classical variables arising from the dual channel. This explains a "curious duality" in these problems for the (self-dual) erasure channel observed by Martinian and Yedidia [Allerton 2003; arXiv:cs/0408008] and partly anticipates recent results on optimal lossy compression by polar and low-density generator matrix codes. Comment: v3: updated to include equivalence of the converse bound with smooth entropy formulations. v2: updated to include comparison with the one-shot bounds of arXiv:1706.03866. v1: 11 pages, 4 figures.
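    As an illustration of "protocols for privacy amplification based on linear codes", here is a minimal sketch using a uniformly random binary matrix as a two-universal linear hash (a standard construction justified by the leftover hash lemma; the paper's repurposing for channel simulation is not reproduced here). All parameters are illustrative.

```python
import numpy as np

# Minimal sketch of privacy amplification with a *linear* extractor:
# a uniformly random binary matrix is a two-universal hash, and the map
# x -> M x (mod 2) compresses an n-bit partially secret string into an
# m-bit key that is nearly uniform given the adversary's view, provided
# m is sufficiently below the adversary's min-entropy about x.
rng = np.random.default_rng(1)

n, m = 128, 32                       # raw string length, extracted key length
M = rng.integers(0, 2, size=(m, n))  # random linear hash (the "code")

def extract(x: np.ndarray) -> np.ndarray:
    """Apply the linear hash over GF(2): key = M x mod 2."""
    return (M @ x) % 2

x = rng.integers(0, 2, size=n)       # raw string, partially known to adversary
key = extract(x)
print(key)

# Linearity is what makes the construction reusable beyond key extraction:
# extract(x1 xor x2) == extract(x1) xor extract(x2) over GF(2).
x1, x2 = rng.integers(0, 2, size=n), rng.integers(0, 2, size=n)
assert np.array_equal(extract((x1 + x2) % 2), (extract(x1) + extract(x2)) % 2)
```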

    Algorithmic Aspects of Optimal Channel Coding

    A central question in information theory is to determine the maximum success probability that can be achieved in sending a fixed number of messages over a noisy channel. This was first studied in the pioneering work of Shannon, who established a simple expression characterizing this quantity in the limit of multiple independent uses of the channel. Here we consider the general setting with only one use of the channel. We observe that the maximum success probability can be expressed as the maximum value of a submodular function. Using this connection, we establish the following results:
    1. There is a simple greedy polynomial-time algorithm that computes a code achieving a (1-1/e)-approximation of the maximum success probability (see the sketch after this abstract). Moreover, for this problem it is NP-hard to obtain an approximation ratio strictly better than (1-1/e).
    2. Shared quantum entanglement between the sender and the receiver can increase the success probability by a factor of at most 1/(1-1/e). In addition, this factor is tight if one allows an arbitrary non-signaling box between the sender and the receiver.
    3. We give tight bounds on the one-shot performance of the meta-converse of Polyanskiy-Poor-Verdu.
    Comment: v2: 16 pages. Added alternate proof of main result with random coding.
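    The following is a minimal sketch of the greedy algorithm from result 1, under the standard one-shot formulation (an assumption here, not spelled out in the abstract): for a codebook S with maximum-likelihood decoding, the success probability with M messages is (1/M) * sum_y max_{x in S} W(y|x), and g(S) = sum_y max_{x in S} W(y|x) is monotone submodular, so greedy codeword selection achieves a (1-1/e) fraction of the optimum. The toy channel and all parameters are illustrative.

```python
import numpy as np

# Greedy (1 - 1/e)-approximation for one-shot channel coding: pick M inputs
# maximizing g(S) = sum_y max_{x in S} W(y|x), which is monotone submodular.
rng = np.random.default_rng(2)

nx, ny, M = 16, 12, 4                 # input/output alphabet sizes, messages
W = rng.random((nx, ny))
W /= W.sum(axis=1, keepdims=True)     # row-stochastic channel W(y|x)

def greedy_code(W: np.ndarray, M: int) -> list[int]:
    """Greedily select M codewords, one per step, by largest marginal gain."""
    best = np.zeros(W.shape[1])       # current per-output max over chosen set
    code = []
    for _ in range(M):
        # marginal gain of adding input x: sum_y (max(best, W[x,y]) - best[y])
        gains = np.maximum(W, best).sum(axis=1) - best.sum()
        x = int(np.argmax(gains))
        code.append(x)
        best = np.maximum(best, W[x])
    return code

code = greedy_code(W, M)
success = np.max(W[code], axis=0).sum() / M   # ML-decoding success probability
print(code, success)
```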