
    Bounds on performance of optimum quantizers.

    Reprinted from IEEE Transactions on Information Theory, vol. IT-16, no. 2, March 1970. Bibliography: p. 184.

    Constructing practical Fuzzy Extractors using QIM

    Fuzzy extractors are a powerful tool for extracting randomness from noisy data. A fuzzy extractor can extract randomness only if the source data is discrete, whereas in practice source data is continuous. Using quantizers to transform continuous data into discrete data is a common solution. However, as far as we know, no study has been made of the effect of the quantization strategy on the performance of fuzzy extractors. We construct the encoding and decoding functions of a fuzzy extractor using quantization index modulation (QIM) and express the properties of this fuzzy extractor in terms of the parameters of the underlying QIM. We present and analyze an optimal (in the sense of embedding rate) two-dimensional construction. Our 6-hexagonal tiling construction offers (log2 6)/2 − 1 ≈ 0.3 extra bits per dimension of the space compared to the known square-quantization-based fuzzy extractor.
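    As a quick sanity check on the quoted gain (a sketch of the arithmetic only, not the paper's QIM construction): if, as the expression suggests, the square-based construction embeds one bit per dimension while the 6-hexagonal tiling embeds log2(6) bits per two-dimensional cell, the difference works out as follows.

```python
import math

# Back-of-the-envelope check of the embedding-rate gain quoted in the
# abstract (illustrative only; not the QIM construction itself).
bits_per_dim_square = 1.0             # square tiling: assumed 1 bit per dimension
bits_per_dim_hex = math.log2(6) / 2   # 6-hexagonal tiling: log2(6) bits per 2-D cell

extra = bits_per_dim_hex - bits_per_dim_square
print(f"extra bits per dimension: {extra:.2f}")   # ~0.29, i.e. about 0.3
```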

    Results on optimal biorthogonal filter banks

    Optimization of filter banks for specific input statistics has been of interest in the theory and practice of subband coding. For the case of orthonormal filter banks with infinite order and uniform decimation, the problem has been completely solved in recent years. For the case of biorthogonal filter banks, significant progress has been made recently, although a number of issues still remain to be addressed. In this paper, we briefly review the orthonormal case and then present several new results for the biorthogonal case. All discussions pertain to the infinite-order (ideal filter) case. The current status of research as well as some of the unsolved problems are described.
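    To make the dependence on input statistics concrete, here is a small sketch (not from the paper) computing the classical subband coding gain of an ideal, infinite-order two-channel orthonormal filter bank for a first-order autoregressive source; the AR(1) model and the parameter values are illustrative assumptions.

```python
import numpy as np

# Subband coding gain of an ideal (brickwall) two-channel orthonormal
# filter bank for a unit-variance AR(1) source: ratio of the arithmetic
# to the geometric mean of the two subband variances (higher is better).
def ar1_psd(w, rho):
    # Power spectral density of a unit-variance AR(1) process.
    return (1.0 - rho**2) / (1.0 - 2.0 * rho * np.cos(w) + rho**2)

def two_band_coding_gain(rho, n=200_000):
    # Midpoint rule over [0, pi]; the lower half-band goes to subband 0.
    w = (np.arange(n) + 0.5) * np.pi / n
    s = ar1_psd(w, rho)
    low = w < np.pi / 2
    var0 = s[low].mean() * 0.5    # (1/pi) * integral of S over [0, pi/2]
    var1 = s[~low].mean() * 0.5   # (1/pi) * integral of S over [pi/2, pi]
    return 0.5 * (var0 + var1) / np.sqrt(var0 * var1)

print(f"coding gain for rho = 0.95: {two_band_coding_gain(0.95):.2f}")
```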

    On the rate loss and construction of source codes for broadcast channels

    In this paper, we first define and bound the rate loss of source codes for broadcast channels. Our broadcast channel model comprises one transmitter and two receivers; the transmitter is connected to each receiver by a private channel and to both receivers by a common channel. The transmitter sends a description of the source (X, Y) through these channels, receiver 1 reconstructs X with distortion D1, and receiver 2 reconstructs Y with distortion D2. Suppose the rates of the common channel and private channels 1 and 2 are R0, R1, and R2, respectively. The work of Gray and Wyner gives a complete characterization of all achievable rate triples (R0, R1, R2) for any distortion pair (D1, D2). In this paper, we define the rate loss as the gap between the achievable region and the outer bound composed of the rate-distortion functions, i.e., R0 + R1 + R2 ≥ RX,Y(D1, D2), R0 + R1 ≥ RX(D1), and R0 + R2 ≥ RY(D2). We upper bound the rate loss for general sources by functions of the distortions and upper bound the rate loss for Gaussian sources by constants, which implies that although the outer bound is generally not achievable, it may be quite close to the achievable region. This also bounds the gap between the achievable region and the inner bound proposed by Gray and Wyner, and bounds the performance penalty associated with using separate decoders rather than joint decoders. We then construct such source codes using entropy-constrained dithered quantizers. The resulting implementation has low complexity and performance close to the theoretical optimum. In particular, the gap between its performance and the theoretical optimum can be bounded from above by constants for Gaussian sources.
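    To illustrate the outer bound for the Gaussian case (a sketch under illustrative parameters, not the paper's code construction): the marginal Gaussian rate-distortion function under MSE has the closed form R(D) = (1/2) log2(sigma^2 / D), so two of the three constraints can be evaluated directly; the joint term RX,Y(D1, D2) requires a separate computation and is omitted here.

```python
import math

# Evaluate two of the three outer-bound constraints for Gaussian marginals
# with MSE distortion (illustrative variances and target distortions).
def gaussian_rd(var, dist):
    # Rate-distortion function of a Gaussian source under MSE.
    return 0.5 * math.log2(var / dist) if dist < var else 0.0

var_x, var_y = 1.0, 1.0    # illustrative source variances
d1, d2 = 0.1, 0.05         # illustrative target distortions

print(f"R0 + R1 >= R_X(D1) = {gaussian_rd(var_x, d1):.3f} bits/sample")
print(f"R0 + R2 >= R_Y(D2) = {gaussian_rd(var_y, d2):.3f} bits/sample")
```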

    Multiple-Description Coding by Dithered Delta-Sigma Quantization

    We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise-shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and the order of the noise-shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and MSE fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise-shaping filter must be minimum phase and have a piecewise-flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is therefore no need for source splitting. Comment: revised, restructured, significantly shortened, and minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory.
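    The "simple linear-additive noise model" the abstract relies on comes from subtractive dithering: with dither uniform over one quantization cell, the error of a uniform scalar quantizer is uniformly distributed and independent of the input. A minimal numerical check of that property is sketched below (an arbitrary step size and Gaussian toy source; not the MD Delta-Sigma scheme itself).

```python
import numpy as np

# Numerical check of the additive-noise model for subtractive dithering:
# the quantization error should be ~uniform on [-step/2, step/2], zero
# mean, variance step^2/12, and uncorrelated with the input.
rng = np.random.default_rng(0)
step = 0.5
x = rng.normal(size=100_000)                        # memoryless Gaussian source
d = rng.uniform(-step / 2, step / 2, size=x.size)   # subtractive dither

y = step * np.round((x + d) / step) - d             # dithered quantizer output
err = y - x

print(f"error mean      : {err.mean():+.4f}  (expect ~0)")
print(f"error variance  : {err.var():.4f}  (expect step^2/12 = {step**2 / 12:.4f})")
print(f"corr(err, input): {np.corrcoef(err, x)[0, 1]:+.4f}  (expect ~0)")
```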

    Quantization Design for Distributed Optimization

    We consider the problem of solving a distributed optimization problem on a distributed computing platform where communication in the network is limited: each node can only communicate with its neighbours, and the channel has a limited data rate. A common technique to address the latter limitation is to apply quantization to the exchanged information. We propose two distributed optimization algorithms with an iteratively refining quantization design, based on the inexact proximal gradient method and its accelerated variant. We show that if the parameters of the quantizers, i.e., the number of bits and the initial quantization intervals, satisfy certain conditions, then the quantization error is bounded by a linearly decreasing function and the convergence of the distributed algorithms is guaranteed. Furthermore, we prove that after imposing the quantization scheme, the distributed algorithms still exhibit a linear convergence rate, and we show complexity upper bounds on the number of iterations needed to achieve a given accuracy. Finally, we demonstrate the performance of the proposed algorithms and the theoretical findings by solving a distributed optimal control problem.
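    A toy sketch of the iteratively refining idea (illustrative assumptions throughout; this is plain quantized gradient descent on a small least-squares problem, not the paper's inexact proximal gradient algorithm): the number of bits per exchanged value stays fixed while the quantization range shrinks geometrically, so the quantization error is bounded by a geometrically decreasing function, in the spirit of the conditions described above.

```python
import numpy as np

# Fixed bit budget, geometrically shrinking quantization range.
def quantize(v, lo, hi, bits):
    # Uniform quantizer: maps v onto 2**bits cell midpoints in [lo, hi].
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((v - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
x = np.zeros(5)

lr, bits, radius, shrink = 0.01, 4, 10.0, 0.97
for k in range(200):
    grad = A.T @ (A @ x - b)                       # local gradient
    g_q = quantize(grad, -radius, radius, bits)    # what would be communicated
    x -= lr * g_q
    radius *= shrink                               # refine the quantizer range

print(f"final objective        : {0.5 * np.linalg.norm(A @ x - b) ** 2:.4f}")
print(f"final quantization step: {2 * radius / 2 ** bits:.2e}")
```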

    Multiresolution source coding using entropy constrained dithered scalar quantization

    In this paper, we build multiresolution source codes using entropy-constrained dithered scalar quantizers. We demonstrate that for n-dimensional random vectors, dithering followed by uniform scalar quantization and then entropy coding achieves performance close to the n-dimensional optimum for a multiresolution source code. Based on this result, we propose a practical code design algorithm and compare its performance with that of the set partitioning in hierarchical trees (SPIHT) algorithm on natural images.
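    A sketch of the basic building block (illustrative parameters and a Gaussian toy source; not the paper's multiresolution design): subtractive-dither uniform scalar quantization followed by an estimate of the empirical entropy of the quantizer indices, compared against the Gaussian rate-distortion bound evaluated at the measured MSE.

```python
import numpy as np

# One dithered uniform scalar quantization stage with an ideal-entropy-coder
# rate estimate, compared to the Gaussian R(D) at the achieved distortion.
rng = np.random.default_rng(0)
x = rng.normal(size=200_000)                        # unit-variance Gaussian source
step = 0.3
d = rng.uniform(-step / 2, step / 2, size=x.size)   # subtractive dither

idx = np.round((x + d) / step).astype(int)          # indices to be entropy coded
x_hat = idx * step - d                              # reconstruction
mse = np.mean((x_hat - x) ** 2)

_, counts = np.unique(idx, return_counts=True)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()                   # empirical index entropy

rd_bound = 0.5 * np.log2(1.0 / mse)                 # Gaussian R(D) at this MSE
print(f"rate : {entropy:.3f} bits/sample")
print(f"R(D) : {rd_bound:.3f} bits/sample  (gap ~{entropy - rd_bound:.3f})")
```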