
    On Multistage Successive Refinement for Wyner-Ziv Source Coding with Degraded Side Informations

    We provide a complete characterization of the rate-distortion region for the multistage successive refinement of the Wyner-Ziv source coding problem with degraded side informations at the decoder. Necessary and sufficient conditions for a source to be successively refinable along a distortion vector are subsequently derived. For the multistage case, a source-channel separation theorem is provided when the descriptions are sent over independent channels. Furthermore, we introduce the notion of generalized successive refinability with multiple degraded side informations. This notion captures whether progressive encoding to satisfy multiple distortion constraints for different side informations is as good as encoding without the progressive requirement. Necessary and sufficient conditions for generalized successive refinability are given. It is shown that the following two sources are generalized successively refinable: (1) the Gaussian source with degraded Gaussian side informations, and (2) the doubly symmetric binary source when the worse side information is a constant. Thus, in both cases, any failure to be successively refinable is due only to the inherent uncertainty about which side information will occur at the decoder, and not to the progressive encoding requirement. Comment: Submitted to IEEE Trans. Information Theory, Apr. 200
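
    As a point of reference for the Gaussian example above, the quadratic Gaussian Wyner-Ziv rate-distortion function with jointly Gaussian side information at the decoder has a well-known closed form; it is recalled here for context only and is not taken from the paper itself:
    \[
    R_{\mathrm{WZ}}(D) \;=\; \frac{1}{2}\log^{+}\!\frac{\sigma_{X|Y}^{2}}{D},
    \qquad \sigma_{X|Y}^{2} = \operatorname{Var}(X \mid Y),
    \]
    where \(\log^{+} x = \max(0, \log x)\). Each refinement stage in the degraded-side-information setting is naturally measured against this single-stage benchmark evaluated for its own side information.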

    Side-information Scalable Source Coding

    The problem of side-information scalable (SI-scalable) source coding is considered in this work, where the encoder constructs a progressive description such that the receiver with high-quality side information is able to truncate the bitstream and reconstruct in the rate-distortion sense, while the receiver with low-quality side information has to receive further data in order to decode. We provide inner and outer bounds for general discrete memoryless sources. The achievable region is shown to be tight when either of the decoders requires a lossless reconstruction, as well as in the case of degraded deterministic distortion measures. Furthermore, we show that the gap between the achievable region and the outer bounds can be bounded by a constant when the squared-error distortion measure is used. The notion of perfectly scalable coding is introduced for the case where both stages operate on the Wyner-Ziv bound, and necessary and sufficient conditions are given for sources satisfying a mild support condition. Using SI-scalable coding and successive-refinement Wyner-Ziv coding as basic building blocks, a complete characterization is provided for the important quadratic Gaussian source with multiple jointly Gaussian side informations, where the side-information quality does not have to be monotonic along the scalable coding order. A partial result is provided for the doubly symmetric binary source with Hamming distortion when the worse side information is a constant, for which one of the outer bounds is strictly tighter than the other. Comment: 35 pages, submitted to IEEE Transactions on Information Theory
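
    A minimal numerical sketch of the single-stage benchmarks involved in the quadratic Gaussian case above: for a Gaussian source with jointly Gaussian side information, each stage of a perfectly scalable code operates on the Wyner-Ziv bound for its own side information. The correlation coefficients and distortion targets below are hypothetical and purely illustrative; the snippet does not reproduce the paper's achievable region.

    import math

    def wyner_ziv_rate_gaussian(var_x, rho, distortion):
        # Wyner-Ziv rate (bits/sample) for a Gaussian source of variance var_x,
        # jointly Gaussian side information with correlation coefficient rho,
        # and mean-squared-error target `distortion`.
        cond_var = var_x * (1.0 - rho ** 2)  # Var(X | Y)
        return max(0.0, 0.5 * math.log2(cond_var / distortion))

    var_x = 1.0
    rho_good, rho_bad = 0.9, 0.5   # hypothetical side-information qualities
    d_good, d_bad = 0.02, 0.10     # hypothetical distortion targets

    print(wyner_ziv_rate_gaussian(var_x, rho_good, d_good))  # benchmark for the good-SI stage
    print(wyner_ziv_rate_gaussian(var_x, rho_bad, d_bad))    # benchmark for the bad-SI stage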

    Multiuser Successive Refinement and Multiple Description Coding

    We consider the multiuser successive refinement (MSR) problem, where the users are connected to a central server via links with different noiseless capacities, and each user wishes to reconstruct in a successive-refinement fashion. An achievable region is given for the two-user, two-layer case, and it provides the complete rate-distortion region for the Gaussian source under the MSE distortion measure. The key observation is that this problem includes the multiple description (MD) problem (with two descriptions) as a subsystem, and the techniques useful in the MD problem can be extended to this case. We show that the coding scheme based on the universality of random binning is sub-optimal, because multiple Gaussian side informations available only at the decoders do incur a performance loss, in contrast to the case of a single side information at the decoder. We further show that, unlike in the single-user case, when there are multiple users the performance loss of a multistage coding approach can be unbounded for the Gaussian source. This result suggests that in such a setting, the benefit of using successive refinement is not likely to justify the accompanying performance loss. The MSR problem is also related to the source coding problem where each decoder has its individual side information, while the encoder has the complete set of side informations. The MSR problem further includes several variations of the MD problem, for which the specialization of the general result is investigated and its implications are discussed. Comment: 10 pages, 5 figures. To appear in IEEE Transactions on Information Theory. References updated and typos corrected.
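
    A classical single-user fact behind the contrast drawn above is that the Gaussian source under MSE is successively refinable, so that multistage coding incurs no rate loss. Recalled here for context only: with the standard rate-distortion function
    \[
    R(D) = \frac{1}{2}\log\frac{\sigma^{2}}{D}, \qquad 0 < D \le \sigma^{2},
    \]
    a two-stage code can achieve any distortion pair \(D_1 \ge D_2\) with first-stage rate \(R(D_1)\) and total rate \(R(D_2)\), matching the single-stage optimum at both layers.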

    Source Coding Problems with Conditionally Less Noisy Side Information

    A computable expression for the rate-distortion (RD) function proposed by Heegard and Berger has eluded information theory for nearly three decades. Heegard and Berger's single-letter achievability bound is well known to be optimal for \emph{physically degraded} side information; however, it is not known whether the bound is optimal for arbitrarily correlated side information (general discrete memoryless sources). In this paper, we consider a new setup in which the side information at one receiver is \emph{conditionally less noisy} than the side information at the other. The new setup includes degraded side information as a special case, and it is motivated by the literature on degraded and less noisy broadcast channels. Our key contribution is a converse proving the optimality of Heegard and Berger's achievability bound in this new setting. The converse rests upon a certain \emph{single-letterization} lemma, which we prove using an information-theoretic telescoping identity recently presented by Kramer. We also generalise the above ideas to two different successive-refinement problems.
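
    For orientation, the unconditional less noisy ordering from the broadcast-channel literature that motivates the terminology can be written as follows; the paper's conditional variant additionally conditions both mutual informations on a further random variable. This is a standard definition recalled for context, not a restatement of the paper's exact condition:
    \[
    Y_1 \text{ is less noisy than } Y_2
    \quad\Longleftrightarrow\quad
    I(U;Y_1) \ge I(U;Y_2)
    \;\;\text{for every } U \text{ with } U \to X \to (Y_1,Y_2).
    \]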

    Successive refinement with conditionally less noisy side information

    We consider the successive refinement of information problem with decoder side information. The rate-distortion region is unknown in general; Steinberg & Merhav and Tian & Diggavi solved it in the special case of degraded side information. We extend this special case to a new setup, conditionally less noisy side information, and we give a single-letter solution when one distortion function is deterministic.

    Wyner-Ziv Coding over Broadcast Channels: Digital Schemes

    This paper addresses lossy transmission of a common source over a broadcast channel when there is correlated side information at the receivers, with emphasis on the quadratic Gaussian and binary Hamming cases. A digital scheme that combines ideas from the lossless version of the problem, i.e., Slepian-Wolf coding over broadcast channels, with dirty paper coding is presented and analyzed. This scheme uses layered coding, where the common-layer information is intended for both receivers and the refinement information is destined only for one receiver. For the quadratic Gaussian case, a quantity characterizing the overall quality of each receiver is identified in terms of the channel and side information parameters. It is shown that it is more advantageous to send the refinement information to the receiver with the "better" overall quality. In the case where all receivers have the same overall quality, the presented scheme becomes optimal. Unlike its lossless counterpart, however, the problem eludes a complete characterization.

    Rate-Distortion Region of a Gray–Wyner Model with Side Information

    In this work, we establish a full single-letter characterization of the rate-distortion region of an instance of the Gray–Wyner model with side information at the decoders. Specifically, in this model, an encoder observes a pair of memoryless, arbitrarily correlated sources $(S_1^n, S_2^n)$ and communicates with two receivers over an error-free rate-limited link of capacity $R_0$, as well as error-free rate-limited individual links of capacities $R_1$ to the first receiver and $R_2$ to the second receiver. Both receivers reproduce the source component $S_2^n$ losslessly, and Receiver 1 also reproduces the source component $S_1^n$ lossily, to within some prescribed fidelity level $D_1$. In addition, Receiver 1 and Receiver 2 are equipped, respectively, with memoryless side information sequences $Y_1^n$ and $Y_2^n$. Importantly, in this setup the side information sequences are arbitrarily correlated with each other and with the source pair $(S_1^n, S_2^n)$, and they are not assumed to exhibit any particular ordering. Furthermore, by specializing the main result to two Heegard–Berger models with successive refinement and scalable coding, we shed light on the roles of the common and private descriptions that the encoder should produce and on the role of each of the common and private links. We develop intuition by analyzing the single-letter rate-distortion regions of these models and discuss some insightful binary examples.
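
    For context, the rate region of the original Gray–Wyner model (lossless reproduction at both receivers and no decoder side information) has a well-known single-letter form, recalled here only as a reference point; it is not the region characterized in the paper above:
    \[
    \bigcup_{p(w \mid s_1, s_2)}
    \Big\{ (R_0, R_1, R_2) :\;
    R_0 \ge I(S_1, S_2; W),\;
    R_1 \ge H(S_1 \mid W),\;
    R_2 \ge H(S_2 \mid W) \Big\}.
    \]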