
    Capacity-achieving ensembles for the binary erasure channel with bounded complexity

    We present two sequences of ensembles of non-systematic irregular repeat-accumulate codes which asymptotically (as their block length tends to infinity) achieve capacity on the binary erasure channel (BEC) with bounded complexity per information bit. This is in contrast to all previous constructions of capacity-achieving sequences of ensembles, whose complexity grows at least like the log of the inverse of the gap (in rate) to capacity. The new bounded-complexity result is achieved by puncturing bits, allowing in this way a sufficient number of state nodes in the Tanner graph representing the codes. We also derive an information-theoretic lower bound on the decoding complexity of randomly punctured codes on graphs. The bound holds for every memoryless binary-input output-symmetric channel and is refined for the BEC. Comment: 47 pages, 9 figures. Submitted to IEEE Transactions on Information Theory.
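    The notion of a decoding threshold on the BEC that underlies this capacity-approaching claim is easy to illustrate numerically. Below is a minimal density-evolution sketch for a standard irregular LDPC ensemble on the BEC; it is textbook background, not the paper's IRA construction, and the (3,6)-regular degree distributions are chosen purely for illustration.

```python
# Minimal density-evolution sketch for an LDPC ensemble on the binary
# erasure channel (standard recursion; NOT the paper's IRA construction).
# lam/rho are edge-perspective degree distributions, illustrative only.

def de_bec(eps, lam, rho, iters=1000, tol=1e-12):
    """Track the erasure probability x_l of a variable-to-check message:
       x_{l+1} = eps * lambda(1 - rho(1 - x_l))."""
    poly = lambda coeffs, x: sum(c * x**i for i, c in enumerate(coeffs))
    x = eps
    for _ in range(iters):
        x_new = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Example: (3,6)-regular ensemble, lambda(z) = z^2, rho(z) = z^5.
lam = [0, 0, 1.0]            # coefficient of z^2
rho = [0, 0, 0, 0, 0, 1.0]   # coefficient of z^5
for eps in (0.40, 0.43, 0.45):
    print(eps, de_bec(eps, lam, rho))
```

    Below this ensemble's threshold (about 0.429) the erasure probability is driven to zero; above it, the iteration stalls at a nonzero fixed point.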

    HITECH Revisited

    Assesses the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act, which offers incentives to adopt and meaningfully use electronic health records. Recommendations include revised criteria, incremental approaches, and targeted policies.

    A Rate-Distortion Exponent Approach to Multiple Decoding Attempts for Reed-Solomon Codes

    Algorithms based on multiple decoding attempts of Reed-Solomon (RS) codes have recently attracted new attention. Choosing decoding candidates based on rate-distortion (R-D) theory, as proposed previously by the authors, currently provides the best performance-versus-complexity trade-off. In this paper, an analysis based on the rate-distortion exponent (RDE) is used to directly minimize the exponential decay rate of the error probability. This enables rigorous bounds on the error probability for finite-length RS codes and leads to modest performance gains. As a byproduct, a numerical method is derived that computes the rate-distortion exponent for independent non-identical sources. Analytical results are given for errors/erasures decoding. Comment: accepted for presentation at the 2010 IEEE International Symposium on Information Theory (ISIT 2010), Austin, TX, USA.
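    The rate-distortion quantities underlying this approach are computable numerically. As background, here is the classical Blahut-Arimoto iteration for the rate-distortion function R(D) of a discrete memoryless source; it is a standard algorithm, not the paper's rate-distortion-exponent method, and the binary-source example at the end is only a sanity check against the known formula R(D) = h(p) - h(D).

```python
# Classical Blahut-Arimoto iteration for the rate-distortion function
# R(D) of a discrete memoryless source (standard background, not the
# paper's RDE computation).
import numpy as np

def blahut_arimoto(p_x, d, s, iters=500, tol=1e-10):
    """p_x: source distribution; d[i, j]: distortion(x_i, xhat_j);
       s < 0: slope parameter tracing out the R(D) curve.
       Returns (rate in bits, expected distortion)."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])  # reproduction marginal
    A = np.exp(s * d)
    for _ in range(iters):
        # conditional p(xhat | x) proportional to q(xhat) * exp(s * d)
        w = q * A
        w /= w.sum(axis=1, keepdims=True)
        q_new = p_x @ w
        if np.abs(q_new - q).max() < tol:
            q = q_new
            break
        q = q_new
    w = q * A
    w /= w.sum(axis=1, keepdims=True)
    D = float(np.sum(p_x[:, None] * w * d))
    R = float(np.sum(p_x[:, None] * w * np.log2(w / q[None, :])))
    return R, D

# Binary source with p = 0.2 and Hamming distortion; compare against
# R(D) = h(0.2) - h(D) for D <= 0.2.
p_x = np.array([0.8, 0.2])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto(p_x, d, s=-4.0))
```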

    On Multiple Decoding Attempts for Reed-Solomon Codes: A Rate-Distortion Approach

    One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is based on using multiple trials of a simple RS decoding algorithm in combination with erasing or flipping a set of symbols or bits in each trial. This paper presents a framework based on rate-distortion (RD) theory to analyze these multiple-decoding algorithms. By defining an appropriate distortion measure between an error pattern and an erasure pattern, the successful decoding condition, for a single errors-and-erasures decoding trial, becomes equivalent to the distortion being less than a fixed threshold. Finding the best set of erasure patterns also turns into a covering problem which can be solved asymptotically by rate-distortion theory. Thus, the proposed approach can be used to understand the asymptotic performance-versus-complexity trade-off of multiple errors-and-erasures decoding of RS codes. This initial result is then extended in a few directions. The rate-distortion exponent (RDE) is computed to give more precise results for moderate blocklengths. Multiple trials of algebraic soft-decision (ASD) decoding are analyzed using this framework. Analytical and numerical computations of the RD and RDE functions are also presented. Finally, simulation results show that sets of erasure patterns designed using the proposed methods outperform other algorithms with the same number of decoding trials. Comment: to appear in the IEEE Transactions on Information Theory (Special Issue on Facets of Coding Theory: From Algorithms to Networks).
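    The fixed threshold the abstract refers to can be made concrete with the classical errors-and-erasures bound for an (n, k) RS code with minimum distance d = n - k + 1: a trial that erases a set E succeeds iff 2 * |errors outside E| + |E| < d. The sketch below checks this condition directly; the code parameters and error/erasure patterns are illustrative assumptions, not taken from the paper.

```python
# Toy check of the classical errors-and-erasures success condition for
# an (n, k) RS code with minimum distance d = n - k + 1. Positions and
# parameters are illustrative only.

def trial_succeeds(n, k, error_pos, erase_pos):
    """A single errors-and-erasures trial that erases `erase_pos`
       succeeds iff 2 * |errors outside erasures| + |erasures| < d."""
    d_min = n - k + 1
    residual_errors = len(set(error_pos) - set(erase_pos))
    return 2 * residual_errors + len(erase_pos) < d_min

# (255, 239) RS code, d_min = 17: nine errors defeat errors-only
# decoding (2 * 9 = 18 >= 17), but a trial whose six erasures happen to
# cover four true errors succeeds (2 * 5 + 6 = 16 < 17).
errors = list(range(9))
erasures = [0, 1, 2, 3, 100, 101]
print(trial_succeeds(255, 239, errors, erasures))  # True
print(trial_succeeds(255, 239, errors, []))        # False
```

    In the paper's framing, the left-hand side of this inequality plays the role of the distortion between the error pattern and the erasure pattern.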

    Approaching Capacity at High-Rates with Iterative Hard-Decision Decoding

    A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we show that one can approach capacity at high rates using iterative hard-decision decoding (HDD) of generalized product codes. Specifically, a class of spatially-coupled generalized LDPC (GLDPC) codes with BCH component codes is considered, and it is observed that, in the high-rate regime, they can approach capacity under the proposed iterative HDD. These codes can be seen as generalized product codes and are closely related to braided block codes. An iterative HDD algorithm is proposed that enables one to analyze the performance of these codes via density evolution (DE). Comment: 22 pages, this version accepted to the IEEE Transactions on Information Theory.
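    The flavor of the density-evolution analysis can be sketched in a simplified, miscorrection-free model: track the probability that a bit is still in error when it enters a t-error-correcting component decoder. The recursion below is a generic high-rate approximation for iterative bounded-distance HDD of product-like codes, not the paper's exact spatially-coupled analysis; the BCH length and correction radius are assumptions chosen for illustration.

```python
# Simplified, miscorrection-free density-evolution recursion for
# iterative hard-decision decoding of a product-like code whose
# component code corrects up to t errors (a generic stand-in for the
# paper's spatially-coupled analysis).
from math import comb

def residual_error_rate(p, nc, t, iters=100):
    """x: prob. a bit is still in error entering a component decoder.
       Bounded-distance decoding fails for that bit if at least t of
       the other nc - 1 bits in its codeword are also in error."""
    x = p
    for _ in range(iters):
        ok = sum(comb(nc - 1, j) * x**j * (1 - x)**(nc - 1 - j)
                 for j in range(t))          # P(< t other errors)
        x = p * max(0.0, 1.0 - ok)           # guard tiny negative rounding
    return x

# Length-1023, t = 3 BCH components: small channel error rates are
# driven to ~0, while larger ones get stuck at a nonzero fixed point.
for p in (0.002, 0.004, 0.008):
    print(p, residual_error_rate(p, 1023, 3))
```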

    Bright tripartite entanglement in triply concurrent parametric oscillation

    We show that a novel optical parametric oscillator, based on concurrent χ(2) nonlinearities, can produce, above threshold, bright output beams of macroscopic intensities which exhibit strong tripartite continuous-variable entanglement. We also show that there are two ways that the system can exhibit a new three-mode form of the Einstein-Podolsky-Rosen paradox, and calculate the extra-cavity fluctuation spectra that may be measured to verify our predictions. Comment: title change, expanded intro and discussion of experimental aspects, 1 new figure. Conclusions unaltered.

    On the Queueing Behavior of Random Codes over a Gilbert-Elliot Erasure Channel

    This paper considers the queueing performance of a system that transmits coded data over a time-varying erasure channel. In our model, the queue length and channel state together form a Markov chain that depends on the system parameters. This gives a framework that allows a rigorous analysis of the queue as a function of the code rate. Most prior work in this area either ignores block-length (e.g., fluid models) or assumes error-free communication using finite codes. This work enables one to determine when such assumptions provide good, or bad, approximations of true behavior. Moreover, it offers a new approach to optimize parameters and evaluate performance. This can be valuable for delay-sensitive systems that employ short block lengths. Comment: 5 pages, 4 figures, conference.
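    The queue/channel Markov chain described here is straightforward to explore by simulation. The sketch below pairs a two-state Gilbert-Elliott erasure channel with fixed (n, k) coded blocks and a FIFO queue of information bits; every parameter value is an illustrative assumption, not one taken from the paper.

```python
# Monte-Carlo sketch of a queue fed by fixed-size coded blocks sent over
# a two-state Gilbert-Elliott erasure channel. All parameter values are
# illustrative assumptions.
import random

def simulate(n=100, k=80, eps=(0.05, 0.40), stay=(0.99, 0.90),
             arrival_rate=60, steps=20_000, seed=1):
    """Each step: `arrival_rate` info bits arrive; if at least k bits
       are queued, one (n, k) block is sent and decodes iff the number
       of erased symbols is at most n - k. Returns mean queue length."""
    rng = random.Random(seed)
    state, queue, total = 0, 0, 0   # state 0 = good, 1 = bad
    for _ in range(steps):
        queue += arrival_rate
        if queue >= k:
            erasures = sum(rng.random() < eps[state] for _ in range(n))
            if erasures <= n - k:
                queue -= k          # block decoded: k info bits served
        if rng.random() > stay[state]:
            state ^= 1              # two-state Markov channel transition
        total += queue
    return total / steps

print(simulate())
```

    In this toy setting the queue drains in the good state and builds up during bad-state bursts, which is exactly the coupling between code rate, block length, and delay that the paper analyzes rigorously.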