10 research outputs found

    Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source

    Random binning is an efficient, yet complex, coding technique for the symmetric L-description source coding problem. We propose an alternative approach that uses the quantized samples of a bandlimited source as "descriptions". By the Nyquist condition, the source can be reconstructed if enough samples are received. We examine a coding scheme that combines sampling and noise-shaped quantization for a scenario in which either only K < L descriptions or all L descriptions are received. Some of the received K-sets of descriptions correspond to uniform sampling, while others correspond to non-uniform sampling. This scheme achieves the optimum rate-distortion performance for uniform-sampling K-sets, but suffers noise amplification for non-uniform-sampling K-sets. We then show that by increasing the sampling rate and adding a random-binning stage, the optimal operating point is achieved for any K-set.
    Comment: Presented at the ITW'13. 5 pages, two-column mode, 3 figures
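
    As a rough illustration of the sampling-as-descriptions idea (and not of the paper's noise-shaped quantization or random-binning stage), the following numpy sketch treats L interleaved, uniformly quantized sample streams of a discrete-time bandlimited signal as descriptions and reconstructs the signal by least squares from K of them; the signal length, bandwidth, and quantizer step are illustrative choices. A uniform K-set recovers the source up to quantization noise, while a bunched (non-uniform) K-set exhibits the noise amplification described above.

import numpy as np

rng = np.random.default_rng(0)

N, L, K = 240, 4, 2                   # signal length, total descriptions, descriptions received
W = (K * N // L - 1) // 2             # bandwidth chosen so K descriptions just satisfy Nyquist

# Real bandlimited signal: keep only the DFT bins 0..W, zero the rest.
X = np.fft.rfft(rng.standard_normal(N))
X[W + 1:] = 0.0
x = np.fft.irfft(X, N)

# Description l = every L-th sample with offset l, uniformly quantized with step `delta`.
delta = 0.05
descriptions = [np.round(x[l::L] / delta) * delta for l in range(L)]

# Synthesis matrix mapping the 2*W + 1 real low-pass parameters to the N time samples.
n = np.arange(N)[:, None]
k = np.arange(1, W + 1)[None, :]
A = np.hstack([np.ones((N, 1)),
               np.cos(2 * np.pi * n * k / N),
               np.sin(2 * np.pi * n * k / N)])

def reconstruct(received):
    """Least-squares fit of the bandlimited model to the received sample positions."""
    idx = np.concatenate([np.arange(l, N, L) for l in received])
    y = np.concatenate([descriptions[l] for l in received])
    coeffs, *_ = np.linalg.lstsq(A[idx], y, rcond=None)
    return A @ coeffs

for received in ([0, 2], [0, 1]):     # uniform-sampling vs non-uniform-sampling K-set
    mse = np.mean((x - reconstruct(received)) ** 2)
    print(f"descriptions {received}: MSE = {mse:.2e}")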

    Erasure Multiple Descriptions

    We consider a binary erasure version of the n-channel multiple descriptions problem with symmetric descriptions, i.e., the rates of the n descriptions are the same and the distortion constraint depends only on the number of messages received. We consider the case where there is no excess rate for every k out of n descriptions. Our goal is to characterize the achievable distortions D_1, D_2, ..., D_n. We measure the fidelity of reconstruction using two distortion criteria: an average-case distortion criterion, under which distortion is measured by taking the average of the per-letter distortion over all source sequences, and a worst-case distortion criterion, under which distortion is measured by taking the maximum of the per-letter distortion over all source sequences. We present achievability schemes, based on random binning for average-case distortion and systematic MDS (maximum distance separable) codes for worst-case distortion, and prove optimality results for the corresponding achievable distortion regions. We then use the binary erasure multiple descriptions setup to propose a layered coding framework for multiple descriptions, which we apply to vector Gaussian multiple descriptions and prove to be optimal for symmetric scalar Gaussian multiple descriptions with two levels of receivers and no excess rate for the central receiver. We also prove a new outer bound for the general multi-terminal source coding problem and use it to prove an optimality result for the robust binary erasure CEO problem. For the latter, we provide a tight lower bound on the distortion for \ell messages for any coding scheme that achieves the minimum achievable distortion for k messages, where k is less than or equal to \ell.
    Comment: 48 pages, 2 figures, submitted to IEEE Trans. Inf. Theory
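
    To make the worst-case-distortion ingredient concrete, the sketch below implements a small systematic MDS code (Reed-Solomon style, over the prime field GF(257)): the k source symbols appear verbatim as the first k descriptions, and any k received descriptions recover the source exactly. The field size and the (n, k) pair are illustrative assumptions, not values from the paper.

# Toy systematic MDS code over GF(257): k source symbols become n descriptions;
# any k received descriptions recover the source exactly.
P = 257          # prime field size (illustrative)
n, k = 5, 3      # n descriptions, any k of them suffice

def lagrange_eval(xs, ys, x):
    """Evaluate at x the unique degree-<len(xs) polynomial through (xs, ys) over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(source):
    """Systematic encoding: descriptions 0..k-1 are the source symbols themselves,
    descriptions k..n-1 are extra evaluations of the interpolating polynomial."""
    assert len(source) == k
    xs = list(range(k))
    return source + [lagrange_eval(xs, source, x) for x in range(k, n)]

def decode(received):
    """Recover the k source symbols from any k (index, value) pairs."""
    xs, ys = zip(*received)
    return [lagrange_eval(xs, ys, x) for x in range(k)]

source = [42, 7, 199]
descriptions = encode(source)
# Receive any k of the n descriptions, e.g. those with indices 1, 3, 4:
subset = [(i, descriptions[i]) for i in (1, 3, 4)]
assert decode(subset) == source
print("recovered:", decode(subset))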

    n-Channel Asymmetric Entropy-Constrained Multiple-Description Lattice Vector Quantization

    This paper is about the design and analysis of an index-assignment (IA) based multiple-description coding scheme for the n-channel asymmetric case. We use entropy-constrained lattice vector quantization and restrict attention to simple reconstruction functions, which are given by the inverse IA function when all descriptions are received, or otherwise by a weighted average of the received descriptions. We consider smooth sources with finite differential entropy rate and the MSE fidelity criterion. As in previous designs, our construction is based on nested lattices which are combined through a single IA function. The results are exact under high-resolution conditions and asymptotically as the nesting ratios of the lattices approach infinity. For any n, the design is asymptotically optimal within the class of IA-based schemes. Moreover, in the case of two descriptions and finite lattice vector dimensions greater than one, the performance is strictly better than that of existing designs. In the case of three descriptions, we show that in the limit of large lattice vector dimensions, points on the inner bound of Pradhan et al. can be achieved. Furthermore, for three descriptions and finite lattice vector dimensions, we show that the IA-based approach yields, in the symmetric case, a smaller rate loss than the recently proposed source-splitting approach.
    Comment: 49 pages, 4 figures. Accepted for publication in IEEE Transactions on Information Theory, 201
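
    The reconstruction rule (inverse IA when all descriptions arrive, a coarser estimate otherwise) is easiest to see in one dimension. The toy sketch below uses a scalar central quantizer with a simple staggered index assignment for two descriptions; it only illustrates the IA / inverse-IA structure, not the paper's nested-lattice vector design or its asymmetric n-channel analysis, and the step size is an arbitrary choice.

import numpy as np

delta = 0.1   # central quantizer step (illustrative)

def encode(x):
    """Central quantization followed by a staggered index assignment."""
    c = int(np.floor(x / delta + 0.5))        # fine central index
    i, j = (c + 1) // 2, c // 2               # IA: c -> (i, j) with c = i + j
    return i, j

def decode(i=None, j=None):
    """Inverse IA when both indices arrive; coarse side reconstruction otherwise."""
    if i is not None and j is not None:
        return delta * (i + j)                # central cell, step `delta`
    if i is not None:
        return delta * (2 * i - 0.5)          # midpoint of the two central cells mapped to i
    return delta * (2 * j + 0.5)              # midpoint of the two central cells mapped to j

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)
pairs = [encode(v) for v in x]
for label, rec in [("both descriptions", [decode(i, j) for i, j in pairs]),
                   ("only description 1", [decode(i=i) for i, _ in pairs]),
                   ("only description 2", [decode(j=j) for _, j in pairs])]:
    print(f"{label:>20}: MSE = {np.mean((x - np.array(rec)) ** 2):.2e}")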

    Incremental Refinements and Multiple Descriptions with Feedback

    It is well known that independent (separate) encoding of K correlated sources may incur some rate loss compared to joint encoding, even if the decoding is done jointly. This loss is particularly evident in the multiple descriptions problem, where the sources are repetitions of the same source, but each description must be individually good. We observe that, under mild conditions on the source and distortion measure, the rate ratio R_independent(K)/R_joint goes to one in the limit of small rate/high distortion. Moreover, we consider the excess rate with respect to the rate-distortion function, R_independent(K, M) - R(D), in M rounds of K independent encodings with a final distortion level D. We provide two examples - a Gaussian source with mean-squared error and an exponential source with one-sided error - for which the excess rate vanishes in the limit as the number of rounds M goes to infinity, for any fixed D and K. This result has an interesting interpretation for a multi-round variant of the multiple descriptions problem, where after each round the encoder gets (block) feedback regarding which of the descriptions arrived: in the limit as the number of rounds M goes to infinity (i.e., many incremental rounds), the total rate of the received descriptions approaches the rate-distortion function. We provide theoretical and experimental evidence showing that this phenomenon is in fact more general than in the two examples above.
    Comment: 62 pages. Accepted in the IEEE Transactions on Information Theory
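
    For the Gaussian/MSE example, the flavour of the multi-round result can be sanity-checked against the classical successive refinability of the Gaussian source: splitting R(D) = 0.5*log2(sigma^2/D) into M geometrically spaced refinement steps costs exactly R(D) bits per sample in total. The short sketch below verifies only this single-description accounting numerically; it is not the paper's multi-round multiple-description scheme with feedback, and the variance, target distortion, and number of rounds are arbitrary choices.

import numpy as np

sigma2, D, M = 1.0, 0.01, 8                      # source variance, target distortion, rounds

def R(d, s2=sigma2):
    """Gaussian rate-distortion function (bits per sample), for d <= s2."""
    return 0.5 * np.log2(s2 / d)

# Geometrically spaced per-round distortion targets: sigma^2 = D_0 > D_1 > ... > D_M = D.
targets = sigma2 * (D / sigma2) ** (np.arange(M + 1) / M)

# Round m refines the distortion from D_{m-1} down to D_m at rate 0.5*log2(D_{m-1}/D_m).
round_rates = [0.5 * np.log2(targets[m - 1] / targets[m]) for m in range(1, M + 1)]

print(f"sum of {M} incremental round rates: {sum(round_rates):.4f} bits/sample")
print(f"rate-distortion function R(D):      {R(D):.4f} bits/sample")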

    Distributed Vector Gaussian Source-Coding And Distributed Hypothesis Testing

    Distributed compression is desired in applications in which data is collected in a distributed manner by several sensors and information about the data is sent to a processing center, which uses this information to meet an end goal. In this work, we focus on two such applications: (1) distributed source coding and (2) distributed hypothesis testing. In distributed source coding, we determine the rate region of the vector Gaussian one-helper source-coding problem under a covariance matrix distortion constraint. The rate region is achieved by a simple scheme that separates the lossy vector quantization from the lossless spatial compression. We introduce a novel analysis technique, namely distortion projection. The converse is established by combining distortion projection with two other analysis techniques that have been employed in the past to obtain partial results for the problem. We also study an extension to a special case of the problem in which the primary source is a vector and the helper's observation is a scalar, and consider separate distortion constraints on both sources. We provide an outer bound to the rate region of this problem and show that it is partially tight in general and completely tight in some nontrivial cases. In distributed hypothesis testing, we study a problem in which data is compressed distributively and sent to a detector that seeks to decide between two possible distributions for the data. The aim is to characterize all achievable encoding rates and exponents of the type 2 error probability when the type 1 error probability is at most a fixed value. For related problems in distributed source coding, schemes based on random binning perform well and are often optimal. For distributed hypothesis testing, however, the use of binning is hindered by the fact that the overall error probability may be dominated by errors in the binning process. We show that despite this complication, binning is optimal for a class of problems in which the goal is to "test against conditional independence." We then use this optimality result to give an outer bound for a more general class of instances of the problem. We also extend the "test against independence" result of Ahlswede and Csiszár to the vector Gaussian case.
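
    As a very rough picture of the distributed hypothesis-testing setup (and not of the binning scheme or the optimal exponents derived in this work), the Monte-Carlo sketch below lets a sensor send a one-bit-per-sample quantization of X while the detector, which also observes Y, decides between H0 (X and Y jointly Gaussian with correlation rho) and H1 (X and Y independent) by thresholding a simple correlation statistic; the type-2 error then decays roughly exponentially in the block length. The quantizer, test statistic, threshold, and all parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
rho, trials = 0.8, 5000

def type2_error(n, threshold):
    """Fraction of independent (H1) blocks of length n that the detector accepts as H0."""
    accepted = 0
    for _ in range(trials):
        x = rng.standard_normal(n)            # under H1, X and Y are independent
        y = rng.standard_normal(n)
        t = np.mean(np.sign(x) * y)           # statistic from 1-bit X and full Y
        accepted += int(t > threshold)
    return accepted / trials

# Threshold halfway to the H0 mean of the statistic, E[sign(X) * Y] = rho * sqrt(2 / pi).
tau = 0.5 * rho * np.sqrt(2.0 / np.pi)
for n in (10, 20, 40, 80):
    beta = type2_error(n, tau)
    estimate = f", exponent estimate {-np.log(beta) / n:.3f}" if beta > 0 else ""
    print(f"n = {n:3d}: type-2 error ~ {beta:.4f}{estimate}")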