
    Side-information Scalable Source Coding

    The problem of side-information scalable (SI-scalable) source coding is considered in this work, where the encoder constructs a progressive description such that the receiver with high-quality side information can truncate the bitstream and reconstruct in the rate-distortion sense, while the receiver with low-quality side information must receive further data in order to decode. We provide inner and outer bounds for general discrete memoryless sources. The achievable region is shown to be tight for the case that either of the decoders requires a lossless reconstruction, as well as for the case with degraded deterministic distortion measures. Furthermore, we show that the gap between the achievable region and the outer bounds can be bounded by a constant when the squared-error distortion measure is used. The notion of perfectly scalable coding is introduced for the case in which both stages operate on the Wyner-Ziv bound, and necessary and sufficient conditions for it are given for sources satisfying a mild support condition. Using SI-scalable coding and successive-refinement Wyner-Ziv coding as basic building blocks, a complete characterization is provided for the important quadratic Gaussian source with multiple jointly Gaussian side informations, where the side-information quality does not have to be monotonic along the scalable coding order. A partial result is provided for the doubly symmetric binary source with Hamming distortion when the worse side information is a constant, for which one of the outer bounds is strictly tighter than the other.
    Comment: 35 pages, submitted to IEEE Transactions on Information Theory.
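
    For orientation, the Wyner-Ziv bound that both stages must meet under perfect scalability has a well-known closed form in the quadratic Gaussian case (Wyner, 1978); the statement below is that standard result, not a claim taken from this paper.

    ```latex
    % Quadratic Gaussian Wyner-Ziv rate-distortion function: for jointly
    % Gaussian (X, Y) under mean-squared error, side information available
    % only at the decoder incurs no rate loss.
    \[
      R_{\mathrm{WZ}}(D) \;=\; \frac{1}{2}\log^{+}\frac{\sigma_{X|Y}^{2}}{D},
      \qquad \log^{+} t := \max\{\log t,\, 0\},
    \]
    % where $\sigma_{X|Y}^{2}$ is the conditional variance of $X$ given the
    % side information $Y$.
    ```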

    Multiuser Successive Refinement and Multiple Description Coding

    We consider the multiuser successive refinement (MSR) problem, where the users are connected to a central server via links with different noiseless capacities, and each user wishes to reconstruct the source in a successive-refinement fashion. An achievable region is given for the two-user, two-layer case, and it provides the complete rate-distortion region for the Gaussian source under the MSE distortion measure. The key observation is that this problem includes the multiple description (MD) problem (with two descriptions) as a subsystem, and the techniques useful in the MD problem can be extended to this case. We show that the coding scheme based on the universality of random binning is suboptimal, because multiple Gaussian side informations available only at the decoders do incur a performance loss, in contrast to the case of a single side information at the decoder. We further show that, unlike the single-user case, when there are multiple users the performance loss of a multistage coding approach can be unbounded for the Gaussian source. This result suggests that in such a setting, the benefit of using successive refinement is not likely to justify the accompanying performance loss. The MSR problem is also related to the source coding problem where each decoder has its individual side information, while the encoder has the complete set of side informations. The MSR problem further includes several variations of the MD problem, for which the specialization of the general result is investigated and its implications are discussed.
    Comment: 10 pages, 5 figures. To appear in IEEE Transactions on Information Theory. References updated and typos corrected.
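
    The contrast with the single-user case rests on the successive refinability of the Gaussian source under MSE (a classical result due to Equitz and Cover, restated here for orientation): for a single user, layering incurs no rate penalty.

    ```latex
    % Single-user Gaussian source with MSE distortion: a base layer at
    % distortion D1 plus a refinement layer reaching D2 <= D1 costs exactly
    % the one-shot rate R(D2); multistage coding loses nothing.
    \[
      \underbrace{\tfrac{1}{2}\log\frac{\sigma^{2}}{D_{1}}}_{\text{base layer}}
      \;+\;
      \underbrace{\tfrac{1}{2}\log\frac{D_{1}}{D_{2}}}_{\text{refinement}}
      \;=\; \tfrac{1}{2}\log\frac{\sigma^{2}}{D_{2}} \;=\; R(D_{2}).
    \]
    ```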

    Integer-Forcing Source Coding

    Integer-Forcing (IF) is a new framework, based on compute-and-forward, for decoding multiple integer linear combinations from the output of a Gaussian multiple-input multiple-output channel. This work applies the IF approach to arrive at a new low-complexity scheme, IF source coding, for distributed lossy compression of correlated Gaussian sources under a minimum mean-squared error distortion measure. All encoders use the same nested lattice codebook. Each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice, which plays the role of binning. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts the set to obtain the individual signals. In general, the linear combinations have smaller average powers than the original signals. This makes it possible to increase the density of the coarse lattice, which in turn translates to smaller compression rates. We also propose and analyze a one-shot version of IF source coding that is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals.
    Comment: Submitted to IEEE Transactions on Information Theory.
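
    The following toy sketch makes the quantize/modulo/invert mechanics concrete in one dimension, using scalar nested lattices; the lattice parameters, the integer matrix, and the source model are illustrative choices for this sketch, not the paper's construction.

    ```python
    import numpy as np

    # Toy scalar-lattice sketch of IF source coding. Illustrative setup:
    # fine lattice q_f*Z as the quantizer, coarse lattice q_c*Z as the
    # modulo "binning"; q_c / q_f = 128, i.e. 7 bits per encoder per sample.
    rng = np.random.default_rng(0)
    q_f, q_c = 0.125, 16.0

    def cmod(v, q):
        """Centered modulo: fold v into [-q/2, q/2]."""
        return v - q * np.round(v / q)

    # Two highly correlated Gaussian observations (the regime IF targets).
    n = 1000
    z = rng.normal(size=n)
    x = np.vstack([z + 0.05 * rng.normal(size=n),
                   z + 0.05 * rng.normal(size=n)])

    x_q = q_f * np.round(x / q_f)  # each encoder: quantize with fine lattice
    msg = cmod(x_q, q_c)           # ...then reduce modulo the coarse lattice

    # Decoder: a full-rank integer matrix whose combinations have small
    # power (here: the difference of the two signals, plus one signal).
    A = np.array([[1.0, -1.0],
                  [1.0,  0.0]])
    # Since A is integer, A @ msg agrees with A @ x_q modulo the coarse
    # lattice; if every combination lands inside the coarse cell, the
    # centered modulo is invertible and the combinations come out exactly.
    comb = cmod(A @ msg, q_c)
    x_q_hat = np.linalg.solve(A, comb)  # invert the integer matrix

    assert np.allclose(x_q_hat, x_q)    # holds unless the modulo overloads
    print("MSE:", np.mean((x - x_q_hat) ** 2), "~ q_f^2/12 =", q_f**2 / 12)
    ```

    The row [1, -1] has far smaller power than either signal, which is the effect that lets IF shrink the coarse lattice and lower the rate relative to recovering each quantized signal directly.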

    On the rate loss and construction of source codes for broadcast channels

    In this paper, we first define and bound the rate loss of source codes for broadcast channels. Our broadcast channel model comprises one transmitter and two receivers; the transmitter is connected to each receiver by a private channel and to both receivers by a common channel. The transmitter sends a description of the source $(X, Y)$ through these channels, receiver 1 reconstructs $X$ with distortion $D_1$, and receiver 2 reconstructs $Y$ with distortion $D_2$. Suppose the rates of the common channel and private channels 1 and 2 are $R_0$, $R_1$, and $R_2$, respectively. The work of Gray and Wyner gives a complete characterization of all achievable rate triples $(R_0, R_1, R_2)$ for any distortion pair $(D_1, D_2)$. In this paper, we define the rate loss as the gap between the achievable region and the outer bound formed by the rate-distortion functions, i.e., $R_0 + R_1 + R_2 \ge R_{X,Y}(D_1, D_2)$, $R_0 + R_1 \ge R_X(D_1)$, and $R_0 + R_2 \ge R_Y(D_2)$. We upper-bound the rate loss for general sources by functions of the distortions, and we upper-bound the rate loss for Gaussian sources by constants, which implies that although the outer bound is generally not achievable, it may be quite close to the achievable region. This also bounds the gap between the achievable region and the inner bound proposed by Gray and Wyner, and it bounds the performance penalty associated with using separate decoders rather than joint decoders. We then construct such source codes using entropy-constrained dithered quantizers. The resulting implementation has low complexity and performance close to the theoretical optimum. In particular, the gap between its performance and the theoretical optimum can be bounded from above by constants for Gaussian sources.
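
    Below is a minimal sketch of the subtractive-dither uniform quantizer at the heart of the entropy-constrained dithered quantizer construction; the step size and the Gaussian source are illustrative, and the entropy coder (conditioned on the dither) is omitted.

    ```python
    import numpy as np

    # Subtractive-dither uniform quantizer: the dither u is shared between
    # encoder and decoder via common randomness.
    rng = np.random.default_rng(0)
    delta = 0.5                                      # quantizer step size
    x = rng.normal(size=100_000)                     # source samples
    u = rng.uniform(-delta / 2, delta / 2, x.shape)  # shared dither

    idx = np.round((x + u) / delta)  # encoder: quantize the dithered input
    x_hat = idx * delta - u          # decoder: subtract the same dither

    # With subtractive dither, the error x_hat - x is uniform on
    # [-delta/2, delta/2) and independent of x, so its MSE is delta**2 / 12.
    print(np.mean((x_hat - x) ** 2), delta**2 / 12)
    ```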