
    Multiple Description Quantization via Gram-Schmidt Orthogonalization

    The multiple description (MD) problem has received considerable attention as a model of information transmission over unreliable channels. A general framework for designing efficient multiple description quantization schemes is proposed in this paper. We provide a systematic treatment of the El Gamal-Cover (EGC) achievable MD rate-distortion region, and show that any point in the EGC region can be achieved via a successive quantization scheme along with quantization splitting. For the quadratic Gaussian case, the proposed scheme has an intrinsic connection with Gram-Schmidt orthogonalization, which implies that the whole Gaussian MD rate-distortion region is achievable with a sequential dithered lattice-based quantization scheme as the dimension of the (optimal) lattice quantizers becomes large. Moreover, this scheme is shown to be universal for all i.i.d. smooth sources, with performance no worse than that for an i.i.d. Gaussian source of the same variance, and asymptotically optimal at high resolution. A class of low-complexity MD scalar quantizers in the proposed general framework is also constructed and illustrated geometrically; its performance is analyzed in the high-resolution regime, where it exhibits a noticeable improvement over existing MD scalar quantization schemes. Comment: 48 pages; submitted to IEEE Transactions on Information Theory
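The successive-quantization idea described above can be sketched in miniature. The toy Python example below is an illustrative assumption, not the paper's construction: it uses subtractive dithered scalar (not lattice) quantizers in two stages, with a coarse first description, a second description that re-quantizes the residual, and a central decoder that sums the two. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(x, step, dither):
    """Subtractive dithered uniform quantizer: Q(x + z) - z.

    With subtractive dither, the quantization error is uniform on
    [-step/2, step/2] and independent of the input.
    """
    return step * np.round((x + dither) / step) - dither

n = 100_000
x = rng.normal(0.0, 1.0, n)            # i.i.d. Gaussian source

step1, step2 = 1.0, 0.25               # coarse and fine step sizes
z1 = rng.uniform(-step1 / 2, step1 / 2, n)
z2 = rng.uniform(-step2 / 2, step2 / 2, n)

d1 = dithered_quantize(x, step1, z1)       # stage 1: coarse description
d2 = dithered_quantize(x - d1, step2, z2)  # stage 2: quantize the residual
central = d1 + d2                          # central reconstruction

mse_side = np.mean((x - d1) ** 2)          # ~ step1**2 / 12
mse_central = np.mean((x - central) ** 2)  # ~ step2**2 / 12
```

The additive-noise model of dithered quantization is what makes the distortions predictable here: each stage contributes an independent uniform error, so the central distortion is set entirely by the finest step size.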

    Multiple-Description Coding by Dithered Delta-Sigma Quantization

    We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and the order of the noise shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and MSE fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise shaping filter must be minimum phase and have a piecewise flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is therefore no need for source splitting. Comment: Revised, restructured, significantly shortened, and minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory
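A minimal numerical sketch of the oversampling idea follows. It is a simplification of the paper's asymptotic scheme, and every concrete choice is an assumption: a first-order error-feedback loop stands in for the optimal noise shaping filter, a scalar dithered quantizer stands in for the lattice quantizer, and 2x oversampling yields two descriptions by de-interleaving the output.

```python
import numpy as np

rng = np.random.default_rng(1)

def delta_sigma(x, step):
    """First-order noise-shaped dithered quantizer (error feedback).

    Uses the module-level rng for the subtractive dither.
    """
    y = np.empty_like(x)
    e_prev = 0.0
    for i, xi in enumerate(x):
        u = xi - e_prev                          # subtract fed-back error
        z = rng.uniform(-step / 2, step / 2)
        q = step * np.round((u + z) / step) - z  # dithered quantization
        e_prev = q - u                           # current quantization error
        y[i] = q
    return y

n = 20_000
x = rng.normal(0.0, 1.0, n)
x_os = np.repeat(x, 2)              # 2x oversampling: repeat each sample
y = delta_sigma(x_os, step=0.5)
desc1, desc2 = y[0::2], y[1::2]     # de-interleave into two descriptions
central = 0.5 * (desc1 + desc2)     # both received: average them

mse_side = np.mean((x - desc1) ** 2)
mse_central = np.mean((x - central) ** 2)
```

Because the error-feedback loop writes the same shaped error term (with opposite signs) into neighboring samples, averaging the two descriptions cancels part of the noise, which is the trade-off between side and central distortion in a toy form.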

    Zero-Delay Multiple Descriptions of Stationary Scalar Gauss-Markov Sources

    In this paper, we introduce the zero-delay multiple-description problem, where an encoder constructs two descriptions and the decoders receive a subset of these descriptions. The encoder and decoders are causal and operate under the restriction of zero delay, which implies that at each time instance, the encoder must generate codewords that can be decoded by the decoders using only the current and past codewords. For the case of discrete-time stationary scalar Gauss-Markov sources and quadratic distortion constraints, we present information-theoretic lower bounds on the average sum-rate in terms of the directed and mutual information rate between the source and the decoder reproductions. Furthermore, we show that the optimal test channel is in this case Gaussian, and that it can be realized by a feedback coding scheme that utilizes prediction and correlated Gaussian noises. Operationally achievable rates are considered in the high-rate scenario using a simple differential pulse code modulation scheme with staggered quantizers. Using this scheme, we achieve operational rates within 0.415 bits/sample/description of the theoretical lower bounds for varying description rates.
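The DPCM-with-staggered-quantizers construction can be illustrated with a toy zero-delay encoder. The example below is a sketch under simplifying assumptions (an AR(1) Gauss-Markov source, two uniform scalar quantizers offset by half a step, each description running its own closed prediction loop); it is not the paper's exact scheme, and the rates are not entropy-coded here.

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(x, step, offset=0.0):
    """Uniform scalar quantizer whose grid is shifted by `offset`."""
    return step * np.round((x - offset) / step) + offset

# Stationary scalar Gauss-Markov (AR(1)) source: x[t] = a*x[t-1] + w[t].
a, n = 0.9, 50_000
w = rng.normal(0.0, 1.0, n)
x = np.empty(n)
x[0] = w[0]
for t in range(1, n):
    x[t] = a * x[t - 1] + w[t]

# Two zero-delay DPCM loops; quantizer 2 is staggered by half a step.
step = 0.5
rec1, rec2 = np.empty(n), np.empty(n)
p1 = p2 = 0.0
for t in range(n):
    rec1[t] = p1 + quantize(x[t] - p1, step)             # description 1
    rec2[t] = p2 + quantize(x[t] - p2, step, step / 2)   # description 2
    p1, p2 = a * rec1[t], a * rec2[t]                    # causal predictors

central = 0.5 * (rec1 + rec2)           # both descriptions received
mse_side = np.mean((x - rec1) ** 2)
mse_central = np.mean((x - central) ** 2)
```

The staggering makes the two per-description quantization errors partially offset each other, so the central decoder's average is finer than either side reconstruction, mirroring the role staggered quantizers play in the high-rate analysis.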

    n-Channel Asymmetric Entropy-Constrained Multiple-Description Lattice Vector Quantization

    This paper is about the design and analysis of an index-assignment (IA) based multiple-description coding scheme for the n-channel asymmetric case. We use entropy-constrained lattice vector quantization and restrict attention to simple reconstruction functions, which are given by the inverse IA function when all descriptions are received, or otherwise by a weighted average of the received descriptions. We consider smooth sources with finite differential entropy rate and MSE fidelity criterion. As in previous designs, our construction is based on nested lattices which are combined through a single IA function. The results are exact under high-resolution conditions and asymptotically as the nesting ratios of the lattices approach infinity. For any n, the design is asymptotically optimal within the class of IA-based schemes. Moreover, in the case of two descriptions and finite lattice vector dimensions greater than one, the performance is strictly better than that of existing designs. In the case of three descriptions, we show that in the limit of large lattice vector dimensions, points on the inner bound of Pradhan et al. can be achieved. Furthermore, for three descriptions and finite lattice vector dimensions, we show that the IA-based approach yields, in the symmetric case, a smaller rate loss than the recently proposed source-splitting approach. Comment: 49 pages, 4 figures. Accepted for publication in the IEEE Transactions on Information Theory
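The weighted-average side-decoding rule mentioned above can be illustrated without the nested-lattice IA machinery. The sketch below is a toy n=3 stand-in under stated assumptions: three staggered uniform scalar quantizers play the role of the three descriptions, and any received subset is decoded by an (here equal-weight) average. It demonstrates only the monotone distortion behavior, not the actual IA construction.

```python
import numpy as np

rng = np.random.default_rng(3)

def quantize(x, step, offset):
    """Uniform scalar quantizer shifted by `offset`."""
    return step * np.round((x - offset) / step) + offset

n, step = 100_000, 1.0
x = rng.normal(0.0, 1.0, n)

# Three descriptions from quantizers staggered by step/3.
descs = [quantize(x, step, k * step / 3) for k in range(3)]

def reconstruct(received):
    """Equal-weight average of the received descriptions."""
    return np.mean(received, axis=0)

mse_1 = np.mean((x - descs[0]) ** 2)                 # one description
mse_2 = np.mean((x - reconstruct(descs[:2])) ** 2)   # two descriptions
mse_3 = np.mean((x - reconstruct(descs)) ** 2)       # all three
```

Each extra description shifts the effective quantization grid, so averaging progressively cancels error, which is the qualitative behavior the weighted-average reconstruction function is designed to exploit.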

    Incremental Refinements and Multiple Descriptions with Feedback

    It is well known that independent (separate) encoding of K correlated sources may incur some rate loss compared to joint encoding, even if the decoding is done jointly. This loss is particularly evident in the multiple descriptions problem, where the sources are repetitions of the same source, but each description must be individually good. We observe that under mild conditions on the source and distortion measure, the rate ratio R_independent(K)/R_joint goes to one in the limit of small rate/high distortion. Moreover, we consider the excess rate with respect to the rate-distortion function, R_independent(K, M) - R(D), in M rounds of K independent encodings with a final distortion level D. We provide two examples - a Gaussian source with mean-squared error and an exponential source with one-sided error - for which the excess rate vanishes in the limit as the number of rounds M goes to infinity, for any fixed D and K. This result has an interesting interpretation for a multi-round variant of the multiple descriptions problem, where after each round the encoder gets a (block) feedback regarding which of the descriptions arrived: in the limit as the number of rounds M goes to infinity (i.e., many incremental rounds), the total rate of received descriptions approaches the rate-distortion function. We provide theoretical and experimental evidence showing that this phenomenon is in fact more general than in the two examples above. Comment: 62 pages. Accepted in the IEEE Transactions on Information Theory
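For the Gaussian/MSE example, the vanishing excess rate has a simple telescoping flavor that can be checked numerically: the Gaussian source is successively refinable under MSE, so the incremental rates of a multi-round refinement to final distortion D sum exactly to R(D) = (1/2)log2(var/D), independent of the number of rounds. The distortion schedule below is an arbitrary illustrative choice, and this is a sanity check of the rate arithmetic, not the paper's coding scheme.

```python
import math

def gaussian_rd(var, D):
    """Gaussian rate-distortion function under MSE, in bits/sample."""
    return 0.5 * math.log2(var / D) if D < var else 0.0

var, D, M = 1.0, 0.01, 5

# Geometric distortion schedule D_0 = var > D_1 > ... > D_M = D.
Ds = [var * (D / var) ** (m / M) for m in range(M + 1)]

# Rate of round m refines distortion D_m down to D_{m+1}.
round_rates = [gaussian_rd(Ds[m], Ds[m + 1]) for m in range(M)]
total = sum(round_rates)   # telescopes to 0.5*log2(var/D) = R(D)
```

With feedback about which descriptions arrived, each round can refine from the distortion actually achieved so far, which is why the total received rate can approach R(D) as the number of rounds grows.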

    Multiple Description Coding: Proposed Methods And Video Application

    Multiple description (MD) coding and quantization has received a lot of attention recently, and has been studied extensively and extended to many demanding applications such as speech and video. We propose two multiple description quantization schemes in order to design the codebooks and partitions of the side and central quantizers. The applied framework originates in the multiple description quantization via Gram-Schmidt orthogonalization approach, which provides a systematic treatment of the achievable rate-distortion region by subtractive dithering and successive quantization along with quantization splitting. The basic idea of our proposed MD quantization schemes is to minimize a Lagrangian cost function (defined as the weighted sum of the central and side distortions) by an iterative technique which jointly designs the side codebooks and consequently forms the associated partitions. In the first scheme, multiple description vector quantization with weighted sum central decoder (MDVQ-WSC), the central decoder is formed by a linear combination (weighted sum) of the side codebooks. The parameters of this linear combination are also found to minimize the Lagrangian cost.
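The iterative Lagrangian design loop can be sketched as follows. This is an illustrative, non-authoritative toy in the spirit of MDVQ-WSC, not the paper's algorithm: scalar codebooks, fixed weights, a fixed central combination coefficient a, and a Lloyd-style alternation between a joint nearest-cost partition and closed-form codeword updates. All names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Training data and toy design parameters (all illustrative).
x = rng.normal(0.0, 1.0, 20_000)
K = 8                                    # side codebook size
c1 = np.linspace(-2.0, 2.0, K).copy()    # initial side codebook 1
c2 = np.linspace(-1.9, 2.1, K).copy()    # initial side codebook 2
w0, w1, w2 = 1.0, 0.5, 0.5               # Lagrangian weights
a = 0.5                                  # central = a*c1 + (1-a)*c2

def pairwise_cost(x, c1, c2):
    """Per-sample Lagrangian cost of every (codeword1, codeword2) pair."""
    central = a * c1[:, None] + (1 - a) * c2[None, :]          # K x K
    return (w0 * (x[:, None, None] - central[None]) ** 2
            + w1 * (x[:, None, None] - c1[None, :, None]) ** 2
            + w2 * (x[:, None, None] - c2[None, None, :]) ** 2)

costs = []
for _ in range(8):
    # Partition step: jointly assign each sample to its cheapest pair.
    flat = pairwise_cost(x, c1, c2).reshape(len(x), -1).argmin(axis=1)
    i, j = flat // K, flat % K
    central = a * c1[i] + (1 - a) * c2[j]
    costs.append(np.mean(w0 * (x - central) ** 2
                         + w1 * (x - c1[i]) ** 2
                         + w2 * (x - c2[j]) ** 2))
    # Codebook step: closed-form updates from dJ/dc = 0 per codeword.
    for k in range(K):
        m = i == k
        if m.any():
            c1[k] = (np.sum(w0 * a * (x[m] - (1 - a) * c2[j[m]]) + w1 * x[m])
                     / (m.sum() * (w0 * a * a + w1)))
    for k in range(K):
        m = j == k
        if m.any():
            c2[k] = (np.sum(w0 * (1 - a) * (x[m] - a * c1[i[m]]) + w2 * x[m])
                     / (m.sum() * (w0 * (1 - a) ** 2 + w2)))
```

Each phase (partition, then each codebook update) can only lower the Lagrangian for the variables it optimizes, so the recorded cost sequence is non-increasing, which is the usual convergence argument for Lloyd-type designs.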