    A vector quantization approach to universal noiseless coding and quantization

    A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first stage code can be regarded as a vector quantizer that “quantizes” the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2) n^{-1} log n, when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^{-1}) when the universe of sources is countable, and as O(n^{-1+ϵ}) when the universe of sources is infinite-dimensional, under appropriate conditions.
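
    A minimal sketch of the generalized Lloyd iteration mentioned above, assuming squared-error distortion and ordinary reproduction-vector codewords; in the paper the same two-step iteration is applied to a first-stage quantizer whose "codewords" are entire block codes, with induced rate and distortion entering the cost. All names (generalized_lloyd, train, K) are illustrative.

        import numpy as np

        def generalized_lloyd(train, K, iters=50, seed=0):
            """train: (N, n) array of training blocks; K: codebook size."""
            rng = np.random.default_rng(seed)
            codebook = train[rng.choice(len(train), size=K, replace=False)].astype(float)
            for _ in range(iters):
                # Step 1: nearest-neighbor assignment (minimum distortion).
                d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
                assign = d.argmin(axis=1)
                # Step 2: centroid update (minimizes distortion in each cell).
                for k in range(K):
                    cell = train[assign == k]
                    if len(cell) > 0:
                        codebook[k] = cell.mean(axis=0)
            return codebook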

    LDPC coded transmissions over the Gaussian broadcast channel with confidential messages

    We design and assess some practical low-density parity-check (LDPC) coded transmission schemes for the Gaussian broadcast channel with confidential messages (BCC). This channel model differs from the classical wiretap channel model in that the unauthorized receiver (Eve) must be able to decode some part of the information. Hence, the reliability and security targets are different from those of the wiretap channel. In order to design and assess practical coding schemes, we use the error rate as a metric of the performance achieved by the authorized receiver (Bob) and the unauthorized receiver (Eve). We study the system feasibility, and show that two different levels of protection against noise are required on the public and the secret messages. This can be achieved in two ways: i) by using LDPC codes with unequal error protection (UEP) of the transmitted information bits, or ii) by using two classical non-UEP LDPC codes with different rates. We compare these two approaches and show that, for the considered examples, the solution exploiting UEP LDPC codes is more efficient than that using non-UEP LDPC codes.
    Comment: 5 pages, 5 figures, to be presented at IEEE ICT 201
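
    As a rough illustration of approach ii), the two messages could be encoded with separate LDPC codes of different rates. The sketch below assumes Eve's channel is the noisier one; ldpc_encode is a hypothetical encoder interface, not a specific library call, and the rates are made up.

        def transmit(public_bits, secret_bits, ldpc_encode):
            # Public message: both Bob and Eve must decode it, so it gets
            # the stronger protection (lower rate; value is illustrative).
            cw_public = ldpc_encode(public_bits, rate=0.5)
            # Secret message: only Bob must decode it; weaker protection
            # (higher rate) saves redundancy, and Eve's noisier channel
            # keeps her from decoding it.
            cw_secret = ldpc_encode(secret_bits, rate=0.75)
            return cw_public + cw_secret  # concatenated for transmission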

    On a Low-Rate TLDPC Code Ensemble and the Necessary Condition on the Linear Minimum Distance for Sparse-Graph Codes

    This paper addresses the design of low-rate sparse-graph codes whose minimum distance grows linearly in the blocklength. First, we define a necessary condition which needs to be satisfied when the linear minimum distance is to be ensured. The condition is formulated in terms of degree-1 and degree-2 variable nodes and of low-weight codewords of the underlying code, and it generalizes results known for turbo codes [8] and LDPC codes. Then, we present a new ensemble of low-rate codes, which is itself a subclass of TLDPC codes [4], [5], and which is designed under this necessary condition. The asymptotic analysis of the ensemble shows that its iterative threshold lies close to the Shannon limit. In addition to the linear minimum distance property, it has a simple structure and enjoys low decoding complexity and fast convergence.
    Comment: submitted to IEEE Trans. on Communication
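
    For context, a commonly cited condition of this kind for plain LDPC ensembles (not the paper's generalized condition) is that, absent degree-1 variable nodes, the minimum distance grows linearly with high probability when λ'(0)ρ'(1) < 1, where λ and ρ are the edge-perspective variable and check degree distributions. A small sketch of that check, with illustrative inputs:

        def linear_dmin_condition(lam, rho):
            """lam[i] / rho[i]: fraction of edges incident to variable /
            check nodes of degree i + 1 (edge perspective)."""
            if lam[0] > 0:      # degree-1 variable nodes preclude linear d_min
                return False
            lam_p0 = lam[1]     # lambda'(0): edge fraction on degree-2 nodes
            rho_p1 = sum(i * r for i, r in enumerate(rho))  # rho'(1)
            return lam_p0 * rho_p1 < 1

        # Regular (3,6) ensemble: lambda(x) = x^2, rho(x) = x^5 -> True.
        print(linear_dmin_condition([0, 0, 1], [0, 0, 0, 0, 0, 1]))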

    Block-Fading Channels with Delayed CSIT at Finite Blocklength

    In many wireless systems, the channel state information at the transmitter (CSIT) cannot be learned until after a transmission has taken place and is therefore outdated. In this paper, we study the benefits of delayed CSIT on a block-fading channel at finite blocklength. First, we characterize the achievable rates of a family of codes that allows the number of codewords to expand during transmission, based on delayed CSIT. Both fixed-length and variable-length characterizations of the rates are provided, using the dependency testing bound and the variable-length setting introduced by Polyanskiy et al. Next, a communication protocol based on codes with an expandable message space is put forth, and it is shown numerically that higher rates are achievable compared to coding strategies that do not benefit from delayed CSIT.
    Comment: Extended version of a paper submitted to ISIT'1
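
    A toy sketch of the expandable-message-space idea, under assumptions that are ours, not the paper's: the fading gain of each block is revealed to the transmitter one block late, and the codebook is retroactively expanded in proportion to the rate that block supported. The log2(1 + g) proxy and all names are illustrative; the paper's actual characterization uses the dependency testing bound.

        import math

        def expandable_rate(gains, n_per_block):
            """gains: realized per-block fading gains (learned with one-block
            delay); returns the per-symbol rate of the expanded codebook."""
            log_M = 0.0
            for g in gains:
                # On learning g_b, grow the number of codewords by the
                # capacity proxy of that block (illustrative only).
                log_M += n_per_block * math.log2(1 + g)
            return log_M / (n_per_block * len(gains))

        print(expandable_rate([0.2, 1.5, 0.7], n_per_block=100))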

    TOFEC: Achieving Optimal Throughput-Delay Trade-off of Cloud Storage Using Erasure Codes

    This paper presents solutions that combine erasure coding, parallel connections to the storage cloud, and limited chunking (i.e., dividing an object into a few smaller segments) to significantly improve the delay performance of uploading data to and downloading data from cloud storage. TOFEC is a strategy that helps a front-end proxy adapt to the level of workload by treating scalable cloud storage (e.g., Amazon S3) as a shared resource requiring admission control. Under light workloads, TOFEC splits each file into more, smaller chunks and uses more parallel connections per file, minimizing service delay. Under heavy workloads, TOFEC automatically reduces the level of chunking (fewer chunks of larger size) and uses fewer parallel connections to reduce overhead, resulting in higher throughput and preventing queueing delay. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal delay-throughput trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers 2.5x lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC can scale to support over 3x as many requests.
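
    A sketch of the shape of this adaptation policy, with made-up thresholds (illustrative of the policy described above, not the paper's actual algorithm): lighter load means more, smaller chunks and more parallel connections; heavier load means fewer chunks to cut overhead.

        def choose_chunking(queued_requests, max_chunks=8):
            """Map the backlog at the front-end proxy to a chunking level;
            one parallel connection is opened per chunk."""
            if queued_requests < 4:       # light load: minimize delay
                return max_chunks
            elif queued_requests < 16:    # moderate load: balance overhead
                return max_chunks // 2
            else:                         # heavy load: maximize throughput
                return 1

        for q in (0, 8, 32):
            print(q, "->", choose_chunking(q), "chunks")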