
    Colored-Gaussian Multiple Descriptions: Spectral and Time-Domain Forms

    It is well known that Shannon's rate-distortion function (RDF) in the colored quadratic Gaussian (QG) case can be parametrized via a single Lagrangian variable (the "water level" in the reverse water filling solution). In this work, we show that the symmetric colored QG multiple-description (MD) RDF in the case of two descriptions can be parametrized in the spectral domain via two Lagrangian variables, which control the trade-off between the side distortion, the central distortion, and the coding rate. This spectral-domain analysis is complemented by a time-domain scheme-design approach: we show that the symmetric colored QG MD RDF can be achieved by combining ideas of delta-sigma modulation and differential pulse-code modulation. Specifically, two source prediction loops, one for each description, are embedded within a common noise shaping loop, whose parameters are explicitly found from the spectral-domain characterization.
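
    For reference, the single-Lagrangian baseline that the abstract builds on is the classical reverse water-filling characterization of the colored QG RDF. The following standard formulas (a textbook result, not taken from the paper; rate in nats per sample) show how one water level \theta parametrizes both rate and distortion:

    R(\theta) = \frac{1}{4\pi} \int_{-\pi}^{\pi} \max\{ 0, \log( S_X(\omega) / \theta ) \}\, d\omega,
    D(\theta) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \min\{ \theta, S_X(\omega) \}\, d\omega,

    where S_X(\omega) is the source power spectral density. The MD result described above replaces the single water level with two spectral-domain Lagrangian variables that jointly control the side distortion, the central distortion, and the coding rate.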

    Erasure Multiple Descriptions

    We consider a binary erasure version of the n-channel multiple descriptions problem with symmetric descriptions, i.e., the rates of the n descriptions are the same and the distortion constraint depends only on the number of messages received. We consider the case where there is no excess rate for every k out of n descriptions. Our goal is to characterize the achievable distortions D_1, D_2,...,D_n. We measure the fidelity of reconstruction using two distortion criteria: an average-case distortion criterion, under which distortion is measured by taking the average of the per-letter distortion over all source sequences, and a worst-case distortion criterion, under which distortion is measured by taking the maximum of the per-letter distortion over all source sequences. We present achievability schemes, based on random binning for average-case distortion and systematic MDS (maximum distance separable) codes for worst-case distortion, and prove optimality results for the corresponding achievable distortion regions. We then use the binary erasure multiple descriptions setup to propose a layered coding framework for multiple descriptions, which we then apply to vector Gaussian multiple descriptions and prove its optimality for symmetric scalar Gaussian multiple descriptions with two levels of receivers and no excess rate for the central receiver. We also prove a new outer bound for the general multi-terminal source coding problem and use it to prove an optimality result for the robust binary erasure CEO problem. For the latter, we provide a tight lower bound on the distortion for \ell messages for any coding scheme that achieves the minimum achievable distortion for k messages where k is less than or equal to \ell.
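
    To make the worst-case-distortion construction concrete, here is a minimal toy sketch (my own illustration under simplifying assumptions, not the paper's scheme): a systematic (n, k) = (3, 2) MDS code, i.e. a single-parity code over GF(2), producing three symmetric descriptions of a binary block so that any two descriptions recover the block exactly, while a single description pins down only half of the bits.

    # Toy illustration (not the paper's construction): a systematic (3, 2) MDS
    # code, single parity over GF(2), forming three symmetric descriptions.
    import numpy as np

    def encode(block):
        """Split a binary block into halves x1, x2 and add the parity x1 ^ x2."""
        x1, x2 = np.split(block, 2)
        return {1: x1, 2: x2, 3: x1 ^ x2}

    def decode_two(received):
        """Exact reconstruction from any two of the three descriptions."""
        if 1 in received and 2 in received:
            return np.concatenate([received[1], received[2]])
        if 1 in received and 3 in received:
            return np.concatenate([received[1], received[1] ^ received[3]])
        return np.concatenate([received[2] ^ received[3], received[2]])

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=8, dtype=np.uint8)
    d = encode(x)
    assert np.array_equal(decode_two({1: d[1], 3: d[3]}), x)  # any two suffice
    # With only one description, half of the bits remain unknown, which keeps
    # the worst-case per-letter distortion bounded away from zero.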

    Analog Multiple Descriptions: A Zero-Delay Source-Channel Coding Approach

    This paper extends the well-known source coding problem of multiple descriptions, in its general and basic setting, to analog source-channel coding scenarios. Encoding-decoding functions that optimally map between the (possibly continuous valued) source and the channel spaces are numerically derived. The main technical tool is a non-convex optimization method, namely, deterministic annealing, which has recently been successfully used in other mapping optimization problems. The obtained functions exhibit several interesting structural properties, map multiple source intervals to the same interval in the channel space, and consistently outperform the known competing mapping techniques.
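
    Since the abstract leans on deterministic annealing as its optimization tool, here is a minimal sketch of the idea on a simpler, standard problem (toy 1-D codebook design, not the paper's encoder-decoder mapping problem): soft Gibbs assignments at a temperature T, centroid updates, and gradual cooling toward a hard solution.

    # Minimal deterministic-annealing sketch on a toy codebook design problem;
    # the paper applies the same tool to analog MD encoder/decoder mappings.
    import numpy as np

    def da_codebook(samples, num_codewords=4, t_init=2.0, t_min=1e-3, cool=0.9):
        rng = np.random.default_rng(0)
        # Start all codewords near the sample mean, slightly perturbed.
        y = samples.mean() + 1e-3 * rng.standard_normal(num_codewords)
        T = t_init
        while T > t_min:
            for _ in range(30):                                  # fixed point at this T
                d = (samples[:, None] - y[None, :]) ** 2         # squared error
                p = np.exp(-(d - d.min(axis=1, keepdims=True)) / T)
                p /= p.sum(axis=1, keepdims=True)                # soft assignments
                y = (p * samples[:, None]).sum(axis=0) / (p.sum(axis=0) + 1e-12)
            T *= cool                                            # lower the temperature
        return np.sort(y)

    x = np.random.default_rng(1).normal(size=2000)
    print(da_codebook(x))   # close to the 4-level MSE-optimal levels for a unit Gaussian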

    On the rate loss of multiple description source codes

    The rate loss of a multiresolution source code (MRSC) describes the difference between the rate needed to achieve distortion D_i in resolution i and the rate-distortion function R(D_i). This paper generalizes the rate loss definition to multiple description source codes (MDSCs) and bounds the MDSC rate loss for arbitrary memoryless sources. For a two-description MDSC (2DSC), the rate loss of description i with distortion D_i is defined as L_i = R_i - R(D_i), i = 1, 2, where R_i is the rate of the i-th description; the joint rate loss associated with decoding the two descriptions together to achieve central distortion D_0 is measured either as L_0 = R_1 + R_2 - R(D_0) or as L_12 = L_1 + L_2. We show that for any memoryless source with variance σ^2, there exists a 2DSC for that source with L_1 ≤ 1/2 or L_2 ≤ 1/2 and (a) L_0 ≤ 1 if D_0 ≤ D_1 + D_2 - σ^2, (b) L_12 ≤ 1 if 1/D_0 ≤ 1/D_1 + 1/D_2 - 1/σ^2, (c) L_0 ≤ L_G0 + 1.5 and L_12 ≤ L_G12 + 1 otherwise, where L_G0 and L_G12 are the joint rate losses of a Gaussian source with variance σ^2.
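
    As a quick numerical illustration of how the bounds above partition the distortion region (values chosen arbitrarily for illustration, not taken from the paper):

    # Check which of the guaranteed rate-loss bounds applies for sample values.
    sigma2, D0, D1, D2 = 1.0, 0.25, 0.40, 0.40    # illustrative, assumed values
    case_a = D0 <= D1 + D2 - sigma2               # if True, L_0  <= 1 is guaranteed
    case_b = 1/D0 <= 1/D1 + 1/D2 - 1/sigma2       # if True, L_12 <= 1 is guaranteed
    print(case_a, case_b)                         # False True: only bound (b) is guaranteed here
    # If neither holds, case (c) applies: L_0 <= L_G0 + 1.5 and L_12 <= L_G12 + 1.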

    Source Coding in Networks with Covariance Distortion Constraints

    We consider a source coding problem with a network scenario in mind and formulate it as a remote vector Gaussian Wyner-Ziv problem under covariance matrix distortions. We define a notion of minimum for two positive-definite matrices, based on which we derive an explicit formula for the rate-distortion function (RDF). We then study special cases and applications of this result. We show that two well-studied source coding problems, namely remote vector Gaussian Wyner-Ziv problems with mean-squared error and mutual information constraints, are in fact special cases of our results. Finally, we apply our results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design the distortion matrices at the nodes in order to maximize the output SNR at the fusion center. We thereby bridge denoising and source coding within this setup.
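
    The "maximize output SNR while enforcing no linear distortion" fusion step has a classical counterpart when rate constraints are ignored: a distortionless (MVDR-style) linear combiner. The sketch below illustrates only that baseline, with an assumed source coupling vector h and noise covariance R; the paper's actual contribution, shaping the per-node covariance distortion matrices under a weighted sum-rate constraint, is not modeled by this toy example.

    # Distortionless max-SNR fusion (MVDR-style) for a toy centralized network.
    import numpy as np

    rng = np.random.default_rng(0)
    m = 4                                    # number of nodes
    h = rng.standard_normal(m)               # assumed coupling of the source into each node
    A = rng.standard_normal((m, m))
    R = A @ A.T + np.eye(m)                  # assumed positive-definite noise covariance

    Rinv_h = np.linalg.solve(R, h)
    w = Rinv_h / (h @ Rinv_h)                # weights: maximize SNR subject to w @ h == 1
    assert np.isclose(w @ h, 1.0)            # no linear distortion of the source component
    out_snr = (w @ h) ** 2 / (w @ R @ w)     # achieved SNR; equals h @ R^{-1} @ h
    print(out_snr, h @ Rinv_h)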