
    On the rate loss and construction of source codes for broadcast channels

    Get PDF
    In this paper, we first define and bound the rate loss of source codes for broadcast channels. Our broadcast channel model comprises one transmitter and two receivers; the transmitter is connected to each receiver by a private channel and to both receivers by a common channel. The transmitter sends a description of the source (X, Y) through these channels, receiver 1 reconstructs X with distortion D1, and receiver 2 reconstructs Y with distortion D2. Suppose the rates of the common channel and of private channels 1 and 2 are R0, R1, and R2, respectively. The work of Gray and Wyner gives a complete characterization of all achievable rate triples (R0, R1, R2) for any distortion pair (D1, D2). In this paper, we define the rate loss as the gap between the achievable region and the outer bound composed of the rate-distortion functions, i.e., R0 + R1 + R2 ≥ R_{X,Y}(D1, D2), R0 + R1 ≥ R_X(D1), and R0 + R2 ≥ R_Y(D2). We upper bound the rate loss for general sources by functions of the distortions and for Gaussian sources by constants, which implies that although the outer bound is generally not achievable, it may be quite close to the achievable region. This also bounds the gap between the achievable region and the inner bound proposed by Gray and Wyner, and it bounds the performance penalty of using separate decoders rather than joint decoders. We then construct such source codes using entropy-constrained dithered quantizers. The resulting implementation has low complexity and performance close to the theoretical optimum; in particular, the gap between its performance and the theoretical optimum can be bounded from above by constants for Gaussian sources.
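
    A minimal sketch of the building block this construction uses: a one-dimensional entropy-constrained dithered quantizer, i.e., uniform quantization with subtractive dither followed by an (idealized) entropy coder whose rate is estimated empirically. The step size, Gaussian test source, and helper names below are illustrative assumptions, not the authors' code.

        import numpy as np

        def ecdq_encode_decode(x, delta, rng):
            # Subtractive-dither uniform quantizer: a scalar stand-in for the
            # entropy-constrained dithered quantizers used in the construction.
            dither = rng.uniform(-delta / 2, delta / 2, size=x.shape)
            indices = np.round((x + dither) / delta).astype(int)  # symbols sent to the entropy coder
            x_hat = indices * delta - dither                      # decoder subtracts the shared dither
            return indices, x_hat

        def empirical_entropy(symbols):
            # Empirical entropy (bits/sample) of the index stream: a proxy for
            # the rate of an ideal entropy coder applied to the quantizer output.
            _, counts = np.unique(symbols, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        rng = np.random.default_rng(0)
        x = rng.standard_normal(100_000)   # unit-variance Gaussian test source
        delta = 0.5                        # quantizer step (illustrative choice)
        idx, x_hat = ecdq_encode_decode(x, delta, rng)
        print(f"rate ~ {empirical_entropy(idx):.3f} bits/sample, "
              f"distortion ~ {np.mean((x - x_hat) ** 2):.4f}")

    With subtractive dither the reconstruction error is uniform on (-delta/2, delta/2) regardless of the source, which is what makes source-independent (constant) bounds on the rate loss possible for such quantizers.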

    Finite Dimension Wyner-Ziv Lattice Coding for Two-Way Relay Channel

    Get PDF
    The two-way relay channel (TWRC) models a cooperative communication situation performing duplex transmission via a relay station. For this channel, we have shown previously that a lattice-based physical-layer network coding strategy achieves, in the limit of arbitrarily large dimension, the same rate as that offered by random-coding-based regular compress-and-forward. In this paper, we investigate a practical coding scheme using finite-dimension lattices and offering a reasonable performance-complexity trade-off. The algorithm relies on lattice-based quantization for Wyner-Ziv coding. We characterize the rate region allowed by our coding scheme, discuss the design criteria, and illustrate our results with numerical examples.
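
    A toy scalar sketch of the lattice-quantization idea behind Wyner-Ziv coding, not the paper's finite-dimension TWRC scheme: the encoder quantizes to a fine lattice but transmits only the coset index modulo a coarse lattice, and the decoder resolves the coset ambiguity using correlated side information. The step sizes and the correlation noise level are illustrative assumptions.

        import numpy as np

        def wz_encode(x, q, k):
            # Quantize to the fine lattice q*Z and keep only the coset index
            # modulo the coarse lattice (k*q)*Z: log2(k) bits per sample.
            return np.round(x / q).astype(int) % k

        def wz_decode(coset, y, q, k):
            # Pick the fine-lattice point in the received coset that is
            # closest to the side information y.
            return (np.round((y / q - coset) / k) * k + coset) * q

        rng = np.random.default_rng(1)
        x = rng.standard_normal(50_000)
        y = x + 0.05 * rng.standard_normal(x.shape)  # decoder's correlated side information
        q, k = 0.1, 8                                # fine step and coarse/fine ratio (illustrative)
        x_hat = wz_decode(wz_encode(x, q, k), y, q, k)
        print("rate:", np.log2(k), "bits/sample, MSE:", np.mean((x - x_hat) ** 2))

    As long as the side-information error stays well inside the coarse cell, the decoder recovers the fine-lattice point exactly and the distortion is just the fine quantization error.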

    An Achievable Data-Rate Region Subject to a Stationary Performance Constraint for LTI Plants

    Get PDF

    Results on lattice vector quantization with dithering

    Get PDF
    The statistical properties of the error in uniform scalar quantization have been analyzed by a number of authors in the past, and the topic is well understood today. The analysis has also been extended to the case of dithered quantizers, and the advantages and limitations of dithering have been studied and well documented in the literature. Lattice vector quantization is a natural extension of uniform scalar quantization into multiple dimensions. Accordingly, there is a natural extension of the analysis of the quantization error. The purpose of this paper is to present this extension and to elaborate on some of the new aspects that come with multiple dimensions. We show that, analogous to the one-dimensional case, the quantization error vector can be rendered independent of the input in subtractive vector dithering. In this case, the total mean square error is a function of only the underlying lattice, and there are lattices that minimize this error. We give a necessary condition on such lattices. For nonsubtractive vector dithering, we show how to render moments of the error vector independent of the input by using appropriate dither random vectors. These results can readily be applied to wide-sense stationary (WSS) vector random processes by use of i.i.d. dither sequences. We consider the problem of pre- and post-filtering around a dithered lattice quantizer, and show how these filters should be designed in order to minimize the overall quantization error in the mean square sense. For the special case where the WSS vector process is obtained by blocking a WSS scalar process, the optimum prefilter matrix reduces to the blocked version of the well-known scalar half-whitening filter.
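
    A small numerical check of the subtractive vector-dithering result, using the two-dimensional hexagonal (A2) lattice; the rejection sampler for the dither and the Gaussian test input are illustrative choices, not taken from the paper.

        import numpy as np

        # Hexagonal (A2) lattice as the union of the rectangular lattice
        # Z*(1,0) + Z*(0, sqrt(3)) and its shift by (0.5, sqrt(3)/2); cell area sqrt(3)/2.
        SHIFT = np.array([0.5, np.sqrt(3) / 2])
        SCALE = np.array([1.0, np.sqrt(3)])

        def hex_quantize(points):
            # Nearest-neighbour A2 quantizer: round within each rectangular
            # coset and keep the closer of the two candidates.
            c0 = np.round(points / SCALE) * SCALE
            c1 = np.round((points - SHIFT) / SCALE) * SCALE + SHIFT
            pick1 = np.sum((points - c1) ** 2, axis=1) < np.sum((points - c0) ** 2, axis=1)
            return np.where(pick1[:, None], c1, c0)

        def dither_in_voronoi(n, rng):
            # Dither uniform over the Voronoi cell of the origin, drawn by
            # rejection sampling from a bounding box.
            out = []
            while len(out) < n:
                u = rng.uniform(-1, 1, size=(4 * n, 2))
                out.extend(u[np.all(hex_quantize(u) == 0, axis=1)])
            return np.array(out[:n])

        rng = np.random.default_rng(2)
        n = 200_000
        x = 3.0 * rng.standard_normal((n, 2))   # any input distribution would do
        z = dither_in_voronoi(n, rng)           # subtractive dither, shared with the decoder
        err = hex_quantize(x + z) - z - x       # error: uniform over the cell, independent of x
        mse_per_dim = np.mean(np.sum(err ** 2, axis=1)) / 2
        print("mean error:", err.mean(axis=0), " MSE/dim:", mse_per_dim)
        print("normalized second moment ~", mse_per_dim / (np.sqrt(3) / 2))  # about 0.0802 for A2

    The measured mean-square error depends only on the lattice; here it reproduces the A2 normalized second moment of roughly 0.0802, against 1/12 ~ 0.0833 for the cubic lattice, illustrating that with subtractive dithering the total error is a property of the lattice alone.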

    Universal Quantization for Separate Encodings and Joint Decoding of Correlated Sources

    Full text link
    We consider the multi-user lossy source-coding problem for continuous-alphabet sources. In a previous work, Ziv proposed a single-user universal coding scheme that uses uniform quantization with dither, followed by a lossless source encoder (entropy coder). In this paper, we generalize Ziv's scheme to the multi-user setting. For this generalized universal scheme, upper bounds are derived on the redundancies, defined as the differences between the actual rates and the closest corresponding rates on the boundary of the rate region. It is shown that this scheme can achieve redundancies of no more than 0.754 bits per sample for each user. These bounds are obtained without knowledge of the multi-user rate region, which is an open problem in general. As a direct consequence of these results, inner and outer bounds on the achievable rate-distortion region are obtained.
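
    A hedged sketch of the separate-encoding structure described above, for two correlated Gaussian sources: each encoder applies Ziv-style uniform quantization with its own dither, and we compare the sum of the per-user empirical index entropies with the joint index entropy that a joint decoder could in principle work from. The correlation coefficient, step size, and entropy estimator are illustrative assumptions, not the paper's construction.

        import numpy as np

        def dithered_indices(x, delta, rng):
            # One user's encoder: uniform quantization with its own dither;
            # only the resulting index stream is entropy coded.
            z = rng.uniform(-delta / 2, delta / 2, size=x.shape)
            return np.round((x + z) / delta).astype(int)

        def entropy_bits(*streams):
            # Empirical (joint) entropy in bits/sample of one or more index streams.
            _, counts = np.unique(np.stack(streams, axis=1), axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        rng = np.random.default_rng(3)
        n, rho, delta = 200_000, 0.95, 0.5     # illustrative parameters
        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)  # correlated pair

        i1 = dithered_indices(x, delta, rng)   # encoder 1 never sees y
        i2 = dithered_indices(y, delta, rng)   # encoder 2 never sees x
        r1, r2, r12 = entropy_bits(i1), entropy_bits(i2), entropy_bits(i1, i2)
        print(f"separate rates: {r1:.2f} + {r2:.2f} = {r1 + r2:.2f} bits/sample")
        print(f"joint index entropy a joint decoder could target: {r12:.2f} bits/sample")

    With separate entropy coding the total rate is r1 + r2, while a Slepian-Wolf style joint decoder could in principle work from about r12 bits in total; the paper's point is that, even without knowing the rate region, each user's redundancy relative to its boundary stays below 0.754 bits per sample.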

    Multiple-Description Coding by Dithered Delta-Sigma Quantization

    Get PDF
    We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise-shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and the order of the noise-shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and MSE fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise-shaping filter must be minimum phase and have a piecewise-flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is no need for source splitting.
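
    A toy scalar sketch of the dithered Delta-Sigma idea, not the paper's lattice construction: the source is oversampled by two, the dithered quantization noise is shaped by a first-order error-feedback filter, and the even and odd output samples form the two descriptions. The step size and the first-order filter are illustrative stand-ins for the high-order noise-shaping filters analysed in the paper.

        import numpy as np

        def dithered_delta_sigma(u, delta, rng):
            # First-order error-feedback (noise-shaping) quantizer with
            # subtractive dither: y[n] = u[n] + e[n] - e[n-1], e[n] white uniform.
            y = np.empty_like(u)
            e_prev = 0.0
            for n, un in enumerate(u):
                w = un - e_prev                     # feed back the previous quantization error
                z = rng.uniform(-delta / 2, delta / 2)
                y[n] = np.round((w + z) / delta) * delta - z
                e_prev = y[n] - w
            return y

        rng = np.random.default_rng(4)
        x = rng.standard_normal(50_000)
        u = np.repeat(x, 2)                         # 2x oversampling by sample-and-hold
        y = dithered_delta_sigma(u, 0.25, rng)      # step size 0.25 (illustrative)

        desc_a, desc_b = y[0::2], y[1::2]           # the two descriptions
        side = np.mean((desc_a - x) ** 2)           # side decoder: one description only
        central = np.mean(((desc_a + desc_b) / 2 - x) ** 2)  # central decoder: both
        print(f"side distortion ~ {side:.5f}, central distortion ~ {central:.5f}")

    With this first-order filter the shaped noise largely cancels when the two descriptions are averaged, so the central distortion comes out at roughly a quarter of the side distortion; stronger (higher-order) noise shaping moves the operating point along the central/side trade-off, which is the mechanism the abstract describes.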