
    On Optimum Conventional Quantization for Source Coding with Side Information at the Decoder

    In many scenarios, side information naturally exists in point-to-point communications. Although side information can be present at the encoder and/or the decoder, yielding several cases, the most important case, which warrants particular attention, is source coding with side information at the decoder (Wyner-Ziv coding); it requires different design strategies than the conventional source coding problem. Due to the difficulty caused by the joint design of the auxiliary random variable and the reconstruction function, a common approach to this lossy source coding problem is to apply conventional vector quantization followed by Slepian-Wolf coding. In this thesis, we investigate the best rate-distortion performance achievable asymptotically by practical Wyner-Ziv coding schemes of the above approach, from an information-theoretic viewpoint and a numerical-computation viewpoint respectively. From the information-theoretic viewpoint, we establish the corresponding rate-distortion function $\hat{R}_{WZ}(D)$ for any memoryless pair $(X, Y)$ and any distortion measure. Given an arbitrary single-letter distortion measure $d$, it is shown that the best rate achievable asymptotically under the constraint that $X$ is recovered with distortion level no greater than $D \geq 0$ is $\hat{R}_{WZ}(D) = \min_{\hat{X}} [I(X; \hat{X}) - I(Y; \hat{X})]$, where the minimum is taken over all auxiliary random variables $\hat{X}$ such that $E d(X, \hat{X}) \leq D$ and $\hat{X} \to X \to Y$ is a Markov chain. Further, we are interested in designing practical Wyner-Ziv codes. With the characterization of $\hat{R}_{WZ}(D)$, this reduces to investigating $\hat{X}$.
    Then, from the viewpoint of numerical computation, an extended Blahut-Arimoto algorithm is proposed to study the rate-distortion performance, as well as to determine the random variable $\hat{X}$ that achieves $\hat{R}_{WZ}(D)$, which provides guidelines for designing practical Wyner-Ziv codes. In most cases, the random variable $\hat{X}$ that achieves $\hat{R}_{WZ}(D)$ is different from the random variable $\hat{X}'$ that achieves the classical rate-distortion function $R(D)$ without side information at the decoder. Interestingly, the extended Blahut-Arimoto algorithm allows us to observe an interesting phenomenon: there are indeed cases where $\hat{X} = \hat{X}'$. To gain deeper insight into the quantizer design problem for practical Wyner-Ziv coding versus classical rate-distortion coding schemes, we give a mathematical proof of the conditions under which the two random quantizers are equivalent or distinct. We completely settle this problem for the case where ${\cal X}$, ${\cal Y}$, and $\hat{\cal X}$ are all binary with the Hamming distortion measure. We also determine sufficient conditions (equivalence conditions) for the case of non-binary alphabets with the Hamming distortion measure and the case of a Gaussian source with the mean-squared error distortion measure, respectively.
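    For the binary Hamming case above, the minimization defining $\hat{R}_{WZ}(D)$ can be approximated numerically. The sketch below uses a brute-force grid search (not the extended Blahut-Arimoto algorithm of the thesis) and assumes a doubly symmetric binary source, $X \sim \mathrm{Bernoulli}(1/2)$ and $Y = X \oplus N$ with crossover $p$, together with a symmetric test channel $\hat{X} = X \oplus M$ with crossover $q$; all function names and the restriction to symmetric test channels are illustrative assumptions.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conv(a, b):
    """Binary convolution: crossover probability of two cascaded BSCs."""
    return a * (1 - b) + b * (1 - a)

def r_hat_wz_binary(p, D, steps=100000):
    """Grid search for min_q [I(X; Xhat) - I(Y; Xhat)] subject to q <= D.

    For the symmetric test channel, I(X; Xhat) = 1 - h(q) and
    I(Y; Xhat) = 1 - h(conv(q, p)), so the objective is h(conv(q, p)) - h(q).
    """
    best = float("inf")
    for i in range(steps + 1):
        q = D * i / steps          # only q <= D is feasible
        best = min(best, h(conv(q, p)) - h(q))
    return best
```

At $D = 0$ the rate collapses to $h(p)$, the Slepian-Wolf rate $H(X|Y)$ for this source, which is a quick sanity check on the search.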

    Sequential coding of Gauss-Markov sources with packet erasures and feedback

    We consider the problem of sequential transmission of Gauss-Markov sources. We show that in the limit of large spatial block lengths, greedy compression with respect to the squared-error distortion is optimal; that is, there is no tension between optimizing the distortion of the source at the current time instant and at future times. We then extend this result to the case where at time $t$ a random compression rate $r_t$ is allocated independently of the rates at other time instants. This, in turn, allows us to derive the optimal performance of sequential coding over packet-erasure channels with instantaneous feedback. For the case of packet erasures with delayed feedback, we connect the problem to that of compression with side information that is known at the encoder and may be known at the decoder, where the most recent packets serve as side information that may have been erased, and we demonstrate that the loss due to a delay of one time unit is rather small.
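    A hedged sketch of greedy sequential coding for a scalar AR(1) Gauss-Markov source $x_t = a x_{t-1} + w_t$: at each step the encoder describes the current prediction error at the allotted rate, so the distortion follows the Gaussian distortion-rate function $D = \sigma^2 2^{-2R}$ applied to the prediction-error variance. The recursion and parameter names are illustrative, not taken from the paper.

```python
def greedy_distortion(a, var_w, rates, d0=0.0):
    """Track per-step distortion of greedy sequential coding of an AR(1) source.

    a      : AR coefficient of x_t = a * x_{t-1} + w_t
    var_w  : variance of the innovation w_t
    rates  : iterable of per-step rates r_t in bits
    d0     : initial distortion (decoder knows x_0 up to d0)
    """
    d = d0
    history = []
    for r in rates:
        s = a * a * d + var_w        # prediction-error variance at the decoder
        d = s * 2.0 ** (-2.0 * r)    # Gaussian distortion-rate: D = s * 2^(-2R)
        history.append(d)
    return history
```

A zero-rate step (e.g. an erased packet with no retransmission) leaves the distortion equal to the prediction-error variance, which is how an erasure propagates through the recursion.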

    Second-Order Coding Rates for Conditional Rate-Distortion

    This paper characterizes the second-order coding rates for lossy source coding with side information available at both the encoder and the decoder. We first provide non-asymptotic bounds for this problem and then specialize them to three different scenarios: discrete memoryless sources, Gaussian sources, and Markov sources, obtaining the second-order coding rates for these settings. It is interesting to observe that the second-order coding rate for Gaussian source coding with Gaussian side information available at both the encoder and the decoder is the same as that for Gaussian source coding without side information. Furthermore, regardless of the variance of the side information, the dispersion is $1/2$ nats squared per source symbol. Comment: 20 pages, 2 figures, second-order coding rates, finite blocklength, network information theory
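    The normal approximation behind such second-order results states that the minimum rate at block length $n$ and excess-distortion probability $\varepsilon$ is roughly $R(D) + \sqrt{V/n}\,Q^{-1}(\varepsilon)$, with dispersion $V = 1/2$ nats$^2$ in the Gaussian case. A minimal sketch under those assumptions, with $Q^{-1}$ computed by bisection (the function names are illustrative):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_inv(eps, lo=-10.0, hi=10.0, iters=200):
    """Invert Q by bisection: Q is decreasing, so search for Q(x) = eps."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if q_func(mid) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gaussian_second_order_rate(var, D, n, eps, V=0.5):
    """Normal approximation to the minimal rate in nats per source symbol."""
    r_d = 0.5 * math.log(var / D)          # Gaussian rate-distortion function
    return r_d + math.sqrt(V / n) * q_inv(eps)
```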

    Side-information Scalable Source Coding

    The problem of side-information scalable (SI-scalable) source coding is considered in this work, where the encoder constructs a progressive description such that a receiver with high-quality side information can truncate the bitstream and reconstruct in the rate-distortion sense, while a receiver with low-quality side information must receive further data in order to decode. We provide inner and outer bounds for general discrete memoryless sources. The achievable region is shown to be tight for the case where either of the decoders requires a lossless reconstruction, as well as for the case with degraded deterministic distortion measures. Furthermore, we show that the gap between the achievable region and the outer bounds can be bounded by a constant when the squared-error distortion measure is used. The notion of perfectly scalable coding is introduced, meaning that both stages operate on the Wyner-Ziv bound, and necessary and sufficient conditions are given for sources satisfying a mild support condition. Using SI-scalable coding and successive-refinement Wyner-Ziv coding as basic building blocks, a complete characterization is provided for the important quadratic Gaussian source with multiple jointly Gaussian side informations, where the side information quality does not have to be monotonic along the scalable coding order. A partial result is provided for the doubly symmetric binary source with Hamming distortion when the worse side information is a constant, for which one of the outer bounds is strictly tighter than the other. Comment: 35 pages, submitted to the IEEE Transactions on Information Theory
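    For the quadratic Gaussian setting mentioned above, the single-stage Wyner-Ziv rate used as a building block has a well-known closed form: with jointly Gaussian $(X, Y)$ and correlation coefficient $\rho$, $R_{WZ}(D) = \frac{1}{2}\log^{+}\!\big(\sigma_X^2(1-\rho^2)/D\big)$. A minimal sketch of that building-block rate, not the paper's scalable scheme:

```python
import math

def gaussian_wz_rate(var_x, rho, D):
    """Wyner-Ziv rate (bits/symbol) for jointly Gaussian (X, Y), MSE distortion.

    var_x : variance of the source X
    rho   : correlation coefficient between X and the side information Y
    D     : target mean-squared error
    """
    var_cond = var_x * (1.0 - rho * rho)   # conditional variance Var(X | Y)
    if D >= var_cond:                      # side information alone suffices
        return 0.0
    return 0.5 * math.log2(var_cond / D)
```

Better side information (larger $|\rho|$) shrinks $\mathrm{Var}(X \mid Y)$ and hence the rate, which is the monotonicity that the SI-scalable problem deliberately relaxes across decoders.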

    Rate-Distortion Function for a Heegard-Berger Problem with Two Sources and Degraded Reconstruction Sets

    In this work, we investigate an instance of the Heegard-Berger problem with two sources and arbitrarily correlated side information sequences at two decoders, in which the reconstruction sets at the decoders are degraded. Specifically, the two sources are to be encoded such that one of them is reproduced losslessly by both decoders, while the other is reproduced to within some prescribed distortion level at one of the two decoders. We establish a single-letter characterization of the rate-distortion function for this model. The investigation of this result in some special cases also sheds light on the utility of joint compression of the two sources. Furthermore, we generalize our result to the setting in which the source component that is to be recovered by both users is reconstructed in a lossy fashion, under the requirement that all terminals (i.e., the encoder and both decoders) share an exact copy of the compressed version of this source component, i.e., a common encoder-decoders reconstruction constraint. For this model as well, we establish a single-letter characterization of the associated rate-distortion function. Comment: Submitted to the IEEE Transactions on Information Theory
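    Rate expressions for such models typically involve conditional entropies of a source component given each decoder's side information (e.g. the lossless part costs at least $H(S \mid Y_j)$ at decoder $j$). A minimal helper for evaluating such terms from a joint pmf; the dictionary representation is an illustrative choice, not the paper's notation:

```python
import math

def cond_entropy(joint):
    """H(S | Y) in bits, for a joint pmf given as a dict {(s, y): prob}."""
    py = {}
    for (s, y), p in joint.items():      # marginalize out S to get P(Y)
        py[y] = py.get(y, 0.0) + p
    h = 0.0
    for (s, y), p in joint.items():
        if p > 0.0:
            h -= p * math.log2(p / py[y])   # sum of -P(s, y) log P(s | y)
    return h
```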

    Lossy Source Coding with Reconstruction Privacy

    We consider the problem of lossy source coding with side information under a privacy constraint: the reconstruction sequence at a decoder should be kept secret, to a certain extent, from another terminal such as an eavesdropper, a sender, or a helper. We are interested in how the reconstruction privacy constraint at a particular terminal affects the rate-distortion tradeoff. In this work, we allow the decoder to use a random mapping and give inner and outer bounds on the rate-distortion-equivocation region for different cases where the side information is available non-causally or causally at the decoder. In the special case where each reconstruction symbol depends only on the source description and the current side information symbol, the complete rate-distortion-equivocation region is provided. A binary example is given, illustrating a new tradeoff due to the privacy constraint and a gain from the use of a stochastic decoder. Comment: 22 pages, added proofs, to be presented at ISIT 201
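    The gain from a stochastic decoder can be seen in a toy version of such a binary example: if $X \sim \mathrm{Bernoulli}(1/2)$ and the adversary is idealized as observing $X$ itself, a deterministic decoder with $\hat{X} = X$ leaks the reconstruction completely, while a decoder that adds independent $\mathrm{Bernoulli}(q)$ noise buys equivocation $h(q)$ bits at Hamming distortion $q$. This is an entirely illustrative toy, not the paper's example or region:

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def stochastic_decoder_tradeoff(q):
    """Toy tradeoff when the decoder outputs Xhat = X xor Bernoulli(q).

    Returns (expected Hamming distortion, equivocation H(Xhat | X) in bits).
    """
    return q, h(q)
```

Setting $q = 0$ recovers the deterministic decoder (zero distortion, zero equivocation), so sweeping $q$ traces the distortion-privacy tradeoff in this toy model.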