
    A partial solution for lossless source coding with coded side information

    This paper considers the problem, first introduced by Ahlswede and Körner in 1975, of lossless source coding with coded side information. Specifically, let X and Y be two random variables such that X is desired losslessly at the decoder while Y serves as side information. The random variables are encoded independently, and both descriptions are used by the decoder to reconstruct X. Ahlswede and Körner describe the achievable rate region in terms of an auxiliary random variable. This paper gives a partial solution for the optimal auxiliary random variable, thereby describing part of the rate region explicitly in terms of the distribution of X and Y.
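
    For reference, the Ahlswede–Körner characterization referred to throughout these abstracts has the standard single-letter form

    \[
        \mathcal{R} \;=\; \bigcup_{U :\, X \to Y \to U}
        \bigl\{ (R_X, R_Y) : R_X \ge H(X \mid U),\; R_Y \ge I(Y; U) \bigr\},
    \]

    where the union runs over auxiliary random variables U forming the Markov chain X → Y → U with a suitably bounded alphabet. Pinning down the optimal U for a given distribution p(x, y) is the difficulty these papers address.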

    On some new approaches to practical Slepian-Wolf compression inspired by channel coding

    On Lossless Coding With Coded Side Information

    This paper considers the problem, first introduced by Ahlswede and Körner in 1975, of lossless source coding with coded side information. Specifically, let X and Y be two random variables such that X is desired losslessly at the decoder while Y serves as side information. The random variables are encoded independently, and both descriptions are used by the decoder to reconstruct X. Ahlswede and Körner describe the achievable rate region in terms of an auxiliary random variable. This paper gives a partial solution for an optimal auxiliary random variable, thereby describing part of the rate region explicitly in terms of the distribution of X and Y.

    Interference management via capacity-achieving codes for the deterministic broadcast channel

    Malleable coding for updatable cloud caching

    In software-as-a-service applications provisioned through cloud computing, locally cached data are often modified with updates from new versions. In some cases, one may want to preserve both the original and the new version with each edit; in this paper, we focus on cases in which only the latest version must be preserved. Furthermore, it is desirable for the data not only to be compressed but also to be easily modified during updates, since representing information and modifying the representation both incur cost. We examine whether it is possible to have both compression efficiency and ease of alteration, in order to promote codeword reuse. In other words, we study the feasibility of a malleable and efficient coding scheme. The tradeoff between compression efficiency and malleability cost (the difficulty of synchronizing compressed versions) is measured as the length of a reused prefix portion. The region of achievable rates and malleability is found. Drawing from prior work on common information problems, we show that efficient data compression may not be the best engineering design principle when storing software-as-a-service data. In the general case, the goals of efficiency and malleability are fundamentally in conflict. This work was supported in part by an NSF Graduate Research Fellowship (LRV), Grant CCR-0325774, and Grant CCF-0729069, and was presented at the 2011 IEEE International Symposium on Information Theory [1] and the 2014 IEEE International Conference on Cloud Engineering [2].

    Malleable Coding with Fixed Reuse

    In cloud computing, storage area networks, remote backup storage, and similar settings, stored data is modified with updates from new versions. Representing information and modifying the representation are both expensive. Therefore, it is desirable for the data not only to be compressed but also to be easily modified during updates. A malleable coding scheme considers both compression efficiency and ease of alteration, promoting codeword reuse. We examine the trade-off between compression efficiency and malleability cost (the difficulty of synchronizing compressed versions), measured as the length of a reused prefix portion. Through a coding theorem, the region of achievable rates and malleability is expressed as a single-letter optimization. Relationships to common information problems are also described.
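
    As a purely illustrative toy (not the papers' construction), the quantity being traded off can be pictured as the length of the codeword prefix that survives an update; the function name and example codewords below are made up.

    def reused_prefix_length(old_codeword: str, new_codeword: str) -> int:
        """Length of the common prefix shared by two compressed versions.

        The longer the reused prefix, the less of the stored codeword has to
        be rewritten when the source is updated; this is a toy proxy for the
        malleability quantity described in the abstract.
        """
        count = 0
        for a, b in zip(old_codeword, new_codeword):
            if a != b:
                break
            count += 1
        return count

    print(reused_prefix_length("1011010110", "1011011101"))  # prints 6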

    On Approximating the Rate Region for Source Coding with Coded Side Information

    The achievable rate region for the problem of lossless source coding with coded side information was derived by Ahlswede and Körner in 1975. While the Ahlswede-Körner bound completely characterizes the achievable rate region when the source and side information are memoryless, calculating this bound for a given memoryless joint probability mass function on the source and side information requires an optimization over all possible auxiliary random variables meeting a given Markov condition and alphabet size constraint. This optimization turns out to be surprisingly difficult even for very simple distributions on the source and side information. We here propose a (1 + ε)-approximation algorithm for the given rate region. The proposed technique involves quantization of a space of conditional distributions followed by linear programming. The resulting algorithm guarantees performance within a multiplicative factor (1 + ε) of the optimal performance, even when that optimal performance is unknown.
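
    To make the quantization idea concrete, here is a brute-force sketch for a binary toy source: it discretizes the conditional distributions p(u|y) on a uniform grid and evaluates the resulting (I(Y;U), H(X|U)) pairs. This is only a stand-in for the paper's quantization-plus-linear-programming algorithm; the joint pmf, grid resolution, and variable names are made up for illustration.

    import numpy as np

    # Toy joint pmf p(x, y) for binary X and Y (illustrative values only).
    p_xy = np.array([[0.4, 0.1],
                     [0.1, 0.4]])
    p_y = p_xy.sum(axis=0)                  # marginal p(y)
    p_x_given_y = p_xy / p_y                # column y holds p(x | y)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    points = []
    grid = np.linspace(0.0, 1.0, 101)       # quantized values of p(u | y)
    for a in grid:                          # a = p(u = 0 | y = 0)
        for b in grid:                      # b = p(u = 0 | y = 1)
            p_u_given_y = np.array([[a, b], [1.0 - a, 1.0 - b]])
            p_uy = p_u_given_y * p_y        # p(u, y), rows indexed by u
            p_u = p_uy.sum(axis=1)
            # Markov chain X - Y - U: p(x, u) = sum_y p(x | y) p(u, y)
            p_xu = p_x_given_y @ p_uy.T
            r_y = entropy(p_u) + entropy(p_y) - entropy(p_uy.ravel())  # I(Y; U)
            r_x = entropy(p_xu.ravel()) - entropy(p_u)                 # H(X | U)
            points.append((r_y, r_x))

    # Report the grid point with the smallest sum rate R_Y + R_X; the full
    # point cloud traces out an approximation to the rate-region boundary.
    print(min(points, key=lambda t: t[0] + t[1]))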

    Optimal code design for lossless and near lossless source coding in multiple access networks

    A multiple access source code (MASC) is a source code designed for the following network configuration: a pair of correlated information sequences $\{X_i\}_{i=1}^{\infty}$ and $\{Y_i\}_{i=1}^{\infty}$ is drawn i.i.d. according to the joint probability mass function (p.m.f.) p(x,y); the encoder for each source operates without knowledge of the other source; the decoder jointly decodes the encoded bit streams from both sources. The work of Slepian and Wolf (1973) describes all rates achievable by MASCs with arbitrarily small but non-zero error probabilities but does not address truly lossless coding or code design. We consider practical code design for lossless and near lossless MASCs. We generalize the Huffman and arithmetic code design algorithms to attain the corresponding optimal MASC codes for arbitrary p.m.f. p(x,y). Experimental results comparing the optimal achievable rate region to the Slepian-Wolf region are included.
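
    For context, the Slepian-Wolf region that the optimal MASC rates are compared against is the set of rate pairs satisfying R_X ≥ H(X|Y), R_Y ≥ H(Y|X), and R_X + R_Y ≥ H(X,Y). A minimal sketch computing these bounds from a joint p.m.f. follows; the example values are made up.

    import numpy as np

    def slepian_wolf_bounds(p_xy):
        """Return (H(X|Y), H(Y|X), H(X,Y)) for a joint pmf array p_xy[x, y]."""
        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        h_xy = entropy(p_xy.ravel())
        h_x = entropy(p_xy.sum(axis=1))     # marginal of X
        h_y = entropy(p_xy.sum(axis=0))     # marginal of Y
        return h_xy - h_y, h_xy - h_x, h_xy

    # Illustrative joint pmf for a pair of correlated binary sources.
    p_xy = np.array([[0.45, 0.05],
                     [0.05, 0.45]])
    print(slepian_wolf_bounds(p_xy))        # (H(X|Y), H(Y|X), H(X,Y))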

    On Source Coding with Coded Side Information for a Binary Source with Binary Side Information

    The lossless rate region for the coded side information problem is "solved," but its solution is expressed in terms of an auxiliary random variable. As a result, finding the rate region for any fixed example requires an optimization over a family of allowed auxiliary random variables. While intuitive constructions are easy to come by and optimal solutions are known under some special conditions, proving the optimal solution is surprisingly difficult even for examples as basic as a binary source with binary side information. We derive the optimal auxiliary random variables and corresponding achievable rate regions for a family of problems where both the source and side information are binary. Our solution involves first tightening known bounds on the alphabet size of the auxiliary random variable and then optimizing the auxiliary random variable subject to this constraint. The technique used to tighten the bound on the alphabet size applies to a variety of problems beyond the one studied here.
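
    Concretely, the optimization referred to here is the lower boundary of the Ahlswede–Körner region: for each helper rate R_Y one must solve (standard formulation; the cardinality bound on U is the constraint the paper tightens)

    \[
        R_X(R_Y) \;=\; \min_{\substack{p(u \mid y)\,:\; X \to Y \to U,\\ I(Y;U) \,\le\, R_Y}} H(X \mid U),
    \]

    where the minimum is taken over auxiliary alphabets no larger than the tightened cardinality bound, which is what makes a direct optimization over U feasible in the binary case studied here.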

    On feedback in network source coding

    We consider source coding over networks with unlimited feedback from the sinks to the sources. We first show examples of networks where the rate region with feedback is a strict superset of that without feedback. Next, we find an achievable region for multiterminal lossy source coding with feedback. Finally, we evaluate this region for the case when one of the sources is fully known at the decoder and use the result to show that this region is a strict superset of the best known achievable region for the problem without feedback.