
    On rate-distortion with mixed types of side information

    In this correspondence, we consider rate-distortion examples in the presence of side information. For a system with some side information known at both the encoder and decoder, and some known only at the decoder, we evaluate the rate-distortion function for both Gaussian and binary sources. While the Gaussian example is a straightforward generalization of the corresponding result by Wyner, the binary example proves more difficult and is solved using a multidimensional optimization approach. Leveraging the insights gained from the binary example, we then solve the more complicated binary Heegard-Berger problem of decoding when side information may be present. The results demonstrate the existence of a new type of successive refinement in which the refinement information is decoded together with side information that is not available for the initial description.
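    For reference, the quadratic-Gaussian Wyner-Ziv result that the Gaussian example generalizes can be stated as follows (a standard identity recalled here for orientation, not an expression taken from the correspondence):

        R_{WZ}(D) = R_{X|Y}(D) = \frac{1}{2}\log^{+}\frac{\sigma^{2}_{X|Y}}{D}

    That is, for jointly Gaussian (X, Y) under mean-squared error, side information available only at the decoder incurs no rate loss compared to side information at both terminals; the mixed-side-information Gaussian example extends this expression to the case where an additional side-information variable is also known at the encoder.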

    Joint Wyner-Ziv/Dirty Paper coding by modulo-lattice modulation

    The combination of source coding with decoder side information (the Wyner-Ziv problem) and channel coding with encoder side information (the Gel'fand-Pinsker problem) can be optimally solved using the separation principle. In this work we show an alternative scheme for the quadratic-Gaussian case, which merges source and channel coding. This scheme achieves the optimal performance by applying modulo-lattice modulation to the analog source. It thus saves the complexity of quantization and channel decoding, leaving only the task of "shaping". Furthermore, for high signal-to-noise ratio (SNR), the scheme approaches the optimal performance using an SNR-independent encoder, and is thus robust to unknown SNR at the encoder.
    Comment: Submitted to IEEE Transactions on Information Theory. Presented in part at ISIT-2006, Seattle. New version after review.
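    A toy numerical illustration of the modulo-lattice idea is sketched below as a scalar, zero-forcing version: the interference known at the encoder is pre-subtracted and folded into a lattice cell, so the decoder recovers the message up to the channel noise. This is only a caricature of the paper's quadratic-Gaussian scheme (no dither, no MMSE scaling, a one-dimensional "lattice"); all names below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        DELTA = 4.0                                   # width of the scalar "lattice" cell

        def mod_lattice(x, delta=DELTA):
            """Fold x into the fundamental cell [-delta/2, delta/2)."""
            return (x + delta / 2) % delta - delta / 2

        # Message values kept away from the cell edge (a crude stand-in for shaping).
        v = rng.uniform(-1.5, 1.5, size=10_000)
        i = rng.normal(0.0, 10.0, size=v.shape)       # interference known at the encoder
        z = rng.normal(0.0, 0.05, size=v.shape)       # channel noise, unknown to both ends

        x = mod_lattice(v - i)                        # encoder: pre-subtract, then fold
        y = x + i + z                                 # channel adds interference and noise
        v_hat = mod_lattice(y)                        # decoder: fold again; interference cancels

        print("MSE:", np.mean((v_hat - v) ** 2))      # close to the noise variance (0.0025)

    Because the modulo map is periodic, mod(y) = mod(v + z), so the strong but known interference has no effect on the estimate as long as v + z stays inside the cell.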

    Side-information Scalable Source Coding

    The problem of side-information scalable (SI-scalable) source coding is considered in this work, where the encoder constructs a progressive description such that the receiver with high-quality side information is able to truncate the bitstream and reconstruct in the rate-distortion sense, while the receiver with low-quality side information has to receive further data in order to decode. We provide inner and outer bounds for general discrete memoryless sources. The achievable region is shown to be tight for the case where either of the decoders requires a lossless reconstruction, as well as for the case of degraded deterministic distortion measures. Furthermore, we show that the gap between the achievable region and the outer bounds can be bounded by a constant when the squared-error distortion measure is used. The notion of perfectly scalable coding is introduced as the case where both stages operate on the Wyner-Ziv bound, and necessary and sufficient conditions are given for sources satisfying a mild support condition. Using SI-scalable coding and successive-refinement Wyner-Ziv coding as basic building blocks, a complete characterization is provided for the important quadratic Gaussian source with multiple jointly Gaussian side informations, where the side-information quality does not have to be monotonic along the scalable coding order. A partial result is provided for the doubly symmetric binary source with Hamming distortion when the worse side information is a constant, for which one of the outer bounds is strictly tighter than the other.
    Comment: 35 pages, submitted to IEEE Transactions on Information Theory.
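    One way to read the "perfectly scalable" condition in the quadratic-Gaussian case, stated here with the standard Gaussian Wyner-Ziv function rather than the paper's own notation (decoder 1 is taken to be the one with the better side information Y_1, receiving only the first layer):

        R_{WZ,i}(D_i) = \frac{1}{2}\log^{+}\frac{\sigma^{2}_{X|Y_i}}{D_i}, \qquad i = 1, 2,

    and perfect scalability asks that R_1 = R_{WZ,1}(D_1) while R_1 + R_2 = R_{WZ,2}(D_2), i.e., neither decoder pays a rate penalty for the layering.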

    Multiuser Successive Refinement and Multiple Description Coding

    We consider the multiuser successive refinement (MSR) problem, where the users are connected to a central server via links with different noiseless capacities, and each user wishes to reconstruct the source in a successive-refinement fashion. An achievable region is given for the two-user, two-layer case, and it provides the complete rate-distortion region for the Gaussian source under the MSE distortion measure. The key observation is that this problem includes the multiple description (MD) problem (with two descriptions) as a subsystem, and the techniques useful in the MD problem can be extended to this case. We show that the coding scheme based on the universality of random binning is sub-optimal, because multiple Gaussian side informations available only at the decoders do incur a performance loss, in contrast to the case of a single side information at the decoder. We further show that, unlike the single-user case, when there are multiple users the loss of performance incurred by a multistage coding approach can be unbounded for the Gaussian source. The result suggests that in such a setting, the benefit of using successive refinement is not likely to justify the accompanying performance loss. The MSR problem is also related to the source coding problem where each decoder has its individual side information, while the encoder has the complete set of side informations. The MSR problem further includes several variations of the MD problem, for which the specialization of the general result is investigated and the implication is discussed.
    Comment: 10 pages, 5 figures. To appear in IEEE Transactions on Information Theory. References updated and typos corrected.
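    For contrast with the multiuser loss discussed above, recall the single-user Gaussian baseline (a standard fact, not a result of this paper): a Gaussian source with variance \sigma^{2} is successively refinable under MSE, so a two-layer code can achieve

        R_1 = \frac{1}{2}\log\frac{\sigma^{2}}{D_1}, \qquad R_1 + R_2 = \frac{1}{2}\log\frac{\sigma^{2}}{D_2}, \qquad D_2 \le D_1 \le \sigma^{2},

    hitting the rate-distortion function at both layers simultaneously. The abstract's point is that once multiple users with different link capacities are involved, this kind of no-loss layering can fail, and the gap can even be unbounded.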

    Successive Wyner-Ziv Coding Scheme and its Application to the Quadratic Gaussian CEO Problem

    We introduce a distributed source coding scheme called successive Wyner-Ziv coding. We show that any point in the rate region of the quadratic Gaussian CEO problem can be achieved via successive Wyner-Ziv coding. The concept of successive refinement in single-source coding is generalized to the distributed source coding scenario, which we refer to as distributed successive refinement. For the quadratic Gaussian CEO problem, we establish a necessary and sufficient condition for distributed successive refinement, where the successive Wyner-Ziv coding scheme plays an important role.
    Comment: 28 pages, submitted to the IEEE Transactions on Information Theory.
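    In the standard quadratic-Gaussian CEO setup (notation assumed here for orientation, not taken from the paper), L agents observe independently corrupted versions of the source, Y_\ell = X + N_\ell for \ell = 1, \dots, L, and send separate descriptions U_\ell to the CEO. The "successive" viewpoint rests on the chain-rule decomposition

        I(Y_1,\dots,Y_L;\, U_1,\dots,U_L) = \sum_{\ell=1}^{L} I(Y_\ell;\, U_\ell \mid U_1,\dots,U_{\ell-1}),

    which holds when each U_\ell is generated from Y_\ell alone; each description can then be decoded Wyner-Ziv style, with the previously decoded descriptions acting as side information.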

    DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression

    We propose a new architecture for distributed image compression from a group of distributed data sources. The work is motivated by practical needs of data-driven codec design, low power consumption, robustness, and data privacy. The proposed architecture, which we refer to as Distributed Recurrent Autoencoder for Scalable Image Compression (DRASIC), is able to train distributed encoders and one joint decoder on correlated data sources. Its compression performance is much better than that of training codecs separately. Meanwhile, the performance of our distributed system with 10 distributed sources is within 2 dB in peak signal-to-noise ratio (PSNR) of the performance of a single codec trained with all data sources. We experiment with distributed sources of different correlations and show how well our data-driven methodology matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC). To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with deep learning.
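    A minimal sketch of the "distributed encoders, one joint decoder" training setup, assuming PyTorch; the actual DRASIC uses recurrent (ConvLSTM) units and a binarizer for progressive bitstreams, both omitted here, and every module and variable name below is hypothetical.

        import torch
        import torch.nn as nn

        class Encoder(nn.Module):                     # one private encoder per data source
            def __init__(self, code_channels=8):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, code_channels, 3, stride=2, padding=1), nn.Tanh(),
                )
            def forward(self, x):
                return self.net(x)                    # 4x spatial downsampling

        class JointDecoder(nn.Module):                # single decoder shared by all sources
            def __init__(self, code_channels=8):
                super().__init__()
                self.net = nn.Sequential(
                    nn.ConvTranspose2d(code_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
                )
            def forward(self, code):
                return self.net(code)

        num_sources = 4
        encoders = [Encoder() for _ in range(num_sources)]
        decoder = JointDecoder()
        params = [p for e in encoders for p in e.parameters()] + list(decoder.parameters())
        opt = torch.optim.Adam(params, lr=1e-3)

        # Toy correlated sources: a shared random image plus per-source noise.
        for step in range(200):
            shared = torch.rand(16, 1, 32, 32)
            loss = 0.0
            for enc in encoders:
                x = (shared + 0.1 * torch.randn_like(shared)).clamp(0.0, 1.0)
                x_hat = decoder(enc(x))               # separate encoders, one joint decoder
                loss = loss + nn.functional.mse_loss(x_hat, x)
            opt.zero_grad()
            loss.backward()
            opt.step()

    Each source keeps its own encoder, so raw data never needs to be pooled, while gradients flowing through the shared decoder couple the encoders via the correlation in their inputs.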