19 research outputs found

    Faulty Successive Cancellation Decoding of Polar Codes for the Binary Erasure Channel

    We study faulty successive cancellation decoding of polar codes for the binary erasure channel. To this end, we introduce a simple erasure-based fault model and show that, under this model, polarization does not happen, meaning that fully reliable communication is not possible at any rate. Moreover, we provide numerical results for the frame erasure rate and the bit erasure rate, and we study an unequal error protection scheme that can significantly improve the performance of the faulty successive cancellation decoder with negligible overhead. Comment: As presented at ISITA 201
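
    To make the claim concrete, the following minimal sketch (an illustration under assumptions, not the authors' code) tracks the bit-channel erasure probabilities of a BEC under the standard polar recursion, with a hypothetical per-computation erasure probability delta injected at every decoder node. With delta > 0 no bit-channel becomes arbitrarily reliable, matching the abstract's "polarization does not happen"; the parameter values and the exact way delta enters the recursion are assumptions.

    def bit_channel_erasure_probs(eps, n, delta=0.0):
        """Erasure probabilities of the 2**n polarized bit-channels of BEC(eps).

        delta is a hypothetical per-computation erasure (fault) probability;
        delta = 0 recovers the standard noiseless recursion.
        """
        probs = [eps]
        for _ in range(n):
            nxt = []
            for z in probs:
                minus = 2 * z - z * z   # "bad" channel: erased if either copy is erased
                plus = z * z            # "good" channel: erased only if both copies are
                # each faulty computation additionally erases its output w.p. delta
                nxt.append(1 - (1 - minus) * (1 - delta))
                nxt.append(1 - (1 - plus) * (1 - delta))
            probs = nxt
        return probs

    if __name__ == "__main__":
        for delta in (0.0, 1e-3):
            z = bit_channel_erasure_probs(eps=0.5, n=10, delta=delta)
            good = sum(1 for p in z if p < 1e-6) / len(z)
            print(f"delta={delta}: fraction of near-noiseless channels = {good:.3f}, "
                  f"best channel erasure probability = {min(z):.2e}")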

    Coding with Encoding Uncertainty

    We study the channel coding problem when errors and uncertainty occur in the encoding process. For simplicity, we assume the channel between the encoder and the decoder is perfect. Focusing on linear block codes, we model the encoding uncertainty as erasures on the edges of the factor graph of the encoder's generator matrix. We first take a worst-case approach and find the maximum tolerable number of erasures for perfect error correction. Next, we take a probabilistic approach and derive a sufficient condition on the rate of a set of codes such that the decoding error probability vanishes as the blocklength tends to infinity. In both scenarios, due to the inherent asymmetry of the problem, we derive the results from first principles, which indicates that robustness to encoding errors requires code properties different from the classical ones. Comment: 12 pages; a shorter version of this work will appear in the proceedings of ISIT 201
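
    The erased-edge model lends itself to a quick worst-case experiment. The sketch below is a simplified reading of the abstract, under my own assumptions: erasing an edge (i, j) makes codeword coordinate j unreliable, and the message remains recoverable iff the columns of G at the remaining coordinates still have full row rank over GF(2). The [7,4] Hamming generator matrix is used purely as an example.

    import random

    def gf2_row_rank(rows, width):
        """Rank over GF(2) of a list of row bitmasks of the given width."""
        pivot = [0] * width              # pivot[i] = stored row whose leading bit is i
        rank = 0
        for row in rows:
            cur = row
            for i in reversed(range(width)):
                if not (cur >> i) & 1:
                    continue
                if pivot[i]:
                    cur ^= pivot[i]
                else:
                    pivot[i] = cur
                    rank += 1
                    break
        return rank

    def message_recoverable(G, erased_edges):
        """Simplified model: an erased edge (i, j) makes codeword coordinate j
        unreliable; the k message bits remain determined iff the reliable
        columns of G still have full row rank over GF(2)."""
        k, n = len(G), len(G[0])
        unreliable = {j for (_, j) in erased_edges}
        reliable = [j for j in range(n) if j not in unreliable]
        rows = []
        for i in range(k):
            mask = 0
            for bit, j in enumerate(reliable):
                if G[i][j]:
                    mask |= 1 << bit
            rows.append(mask)
        return gf2_row_rank(rows, len(reliable)) == k

    if __name__ == "__main__":
        # Generator matrix of the [7,4] Hamming code, used only as an example.
        G = [[1, 0, 0, 0, 1, 1, 0],
             [0, 1, 0, 0, 1, 0, 1],
             [0, 0, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]
        edges = [(i, j) for i in range(len(G)) for j in range(len(G[0])) if G[i][j]]
        random.seed(0)
        for t in range(5):
            erased = random.sample(edges, t)
            ok = message_recoverable(G, erased)
            print(f"{t} erased edges -> {'recoverable' if ok else 'not recoverable'}")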

    Faulty Successive Cancellation Decoding of Polar Codes for the Binary Erasure Channel

    In this paper, faulty successive cancellation (SC) decoding of polar codes for the binary erasure channel is studied. To this end, a simple erasure-based fault model is introduced to represent errors in the decoder, and it is shown that, under this model, polarization does not happen, meaning that fully reliable communication is not possible at any rate. Furthermore, a lower bound on the frame error rate of polar codes under faulty SC decoding is provided, which is then used, along with a well-known upper bound, to choose a blocklength that minimizes the erasure probability under faulty decoding. Finally, an unequal error protection scheme is proposed that can re-enable asymptotically erasure-free transmission at a small rate loss by protecting only a constant fraction of the decoder. The same scheme is also shown to significantly improve the finite-length performance of the faulty SC decoder by protecting as little as 1.5% of the decoder. Comment: Accepted for publication in the IEEE Transactions on Communication
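
    Under the same toy fault recursion as in the sketch above, one can also illustrate the blocklength trade-off this abstract describes: a simple union bound on the frame erasure rate first improves with the blocklength (better polarization) and then degrades (more faulty computations per frame), so some finite blocklength minimizes the bound. The channel, rate, and fault parameter below are arbitrary assumptions, and the bound is only the usual union bound, not the paper's lower bound.

    def faulty_bit_channels(eps, n, delta):
        """Toy recursion: BEC polarization with an assumed extra erasure
        probability delta injected at every decoder computation."""
        probs = [eps]
        for _ in range(n):
            nxt = []
            for z in probs:
                for f in (2 * z - z * z, z * z):           # minus / plus transform
                    nxt.append(1 - (1 - f) * (1 - delta))  # decoder-induced erasure
            probs = nxt
        return probs

    if __name__ == "__main__":
        eps, delta, rate = 0.5, 1e-4, 0.25
        for n in range(4, 15):
            z = sorted(faulty_bit_channels(eps, n, delta))
            k = int(rate * (1 << n))
            fer_union = min(1.0, sum(z[:k]))   # union bound over the k best channels
            print(f"N = 2**{n:2d}: union-bound frame erasure rate <= {fer_union:.3e}")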

    Analysis and Design of Finite Alphabet Iterative Decoders Robust to Faulty Hardware

    This paper addresses the problem of designing LDPC decoders robust to transient errors introduced by faulty hardware. We assume that the faulty hardware introduces errors during the message-passing updates, and we propose a general framework for defining faulty message-update functions. Within this framework, we define symmetry conditions for the faulty functions and derive two simple error models used in the analysis. With this analysis, we propose a new interpretation of the previously introduced functional Density Evolution threshold and show its limitations in the case of highly unreliable hardware. However, we show that under restricted decoder noise conditions, the functional threshold can be used to predict the convergence behavior of finite alphabet iterative decoders (FAIDs) under faulty hardware. In particular, we reveal the existence of robust and non-robust FAIDs and propose a framework for the design of robust decoders. Finally, we illustrate the behavior of robust and non-robust decoders on finite-length codes using Monte Carlo simulations. Comment: 30 pages, submitted to IEEE Transactions on Communication
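
    As a rough rendering of what a "faulty message-update function" can look like (my own toy model, not the paper's framework), the sketch below composes an ideal finite-alphabet variable-node rule with a transient error that replaces the outgoing message by a uniformly random alphabet value with some probability. The alphabet, the clipping rule, and the fault probability are all illustrative assumptions.

    import random

    ALPHABET_LEVELS = 3  # messages take values in {-3, ..., +3} (assumed alphabet)

    def ideal_vn_update(channel_value, incoming):
        """Toy variable-node update: clipped sum of the channel value and the
        incoming check-to-variable messages (one simple finite-alphabet rule)."""
        s = channel_value + sum(incoming)
        return max(-ALPHABET_LEVELS, min(ALPHABET_LEVELS, s))

    def faulty(update_fn, p_fault, rng=random):
        """Wrap an update function with a transient-error model: with
        probability p_fault the output is replaced by a random alphabet value,
        independently of its sign (a symmetric error model)."""
        def wrapped(*args):
            out = update_fn(*args)
            if rng.random() < p_fault:
                return rng.randint(-ALPHABET_LEVELS, ALPHABET_LEVELS)  # corrupted output
            return out
        return wrapped

    if __name__ == "__main__":
        random.seed(1)
        noisy_update = faulty(ideal_vn_update, p_fault=0.05)
        msgs = [noisy_update(+1, [+2, -1]) for _ in range(10)]
        print(msgs)   # the noiseless value 2, except when a transient fault strikes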

    Density Evolution and Functional Threshold for the Noisy Min-Sum Decoder

    This paper investigates the behavior of the Min-Sum decoder running on noisy devices. The aim is to evaluate the robustness of the decoder in the presence of computation noise, e.g. due to faulty logic in the processing units, which represents a new source of errors that may occur during the decoding process. To this end, we first introduce probabilistic models for the arithmetic and logic units of the finite-precision Min-Sum decoder, and then carry out the density evolution analysis of the noisy Min-Sum decoder. We show that in some particular cases, the noise introduced by the device can help the Min-Sum decoder escape from fixed-point attractors, and may actually result in an increased correction capacity with respect to the noiseless decoder. We also reveal the existence of a specific threshold phenomenon, referred to as the functional threshold. The behavior of the noisy decoder is demonstrated in the asymptotic limit of the code length -- by using "noisy" density evolution equations -- and is also verified in the finite-length case by Monte Carlo simulation. Comment: 46 pages (draft version); extended version of the paper with the same title, submitted to IEEE Transactions on Communication
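
    The sketch below shows one plausible (assumed, not the paper's exact) probabilistic model of a noisy finite-precision Min-Sum check-node unit: the noiseless rule, i.e. the product of the signs and the minimum of the magnitudes, followed by independent bit flips in the sign-magnitude output register. The register width and flip probability are illustrative choices.

    import random

    Q_BITS = 4                          # finite-precision message width (assumed)
    MAX_MAG = (1 << (Q_BITS - 1)) - 1   # largest representable magnitude

    def min_sum_check_update(incoming):
        """Noiseless Min-Sum check-node rule on integer messages."""
        sign = 1
        for m in incoming:
            if m < 0:
                sign = -sign
        mag = min(abs(m) for m in incoming)
        return sign * min(mag, MAX_MAG)

    def noisy(value, p_flip, rng=random):
        """Flip each bit of the sign-magnitude representation with prob. p_flip."""
        neg = value < 0
        mag = abs(value)
        if rng.random() < p_flip:
            neg = not neg
        for b in range(Q_BITS - 1):
            if rng.random() < p_flip:
                mag ^= 1 << b
        mag = min(mag, MAX_MAG)
        return -mag if neg else mag

    if __name__ == "__main__":
        random.seed(0)
        incoming = [3, -5, 2, 7]
        outs = [noisy(min_sum_check_update(incoming), p_flip=0.02) for _ in range(8)]
        print(outs)   # typically -2; a bit flip occasionally corrupts a message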

    Noise facilitation in associative memories of exponential capacity

    Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively, such as the hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize their performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise actually improves the performance of the recall phase while the pattern retrieval capacity remains intact, i.e., the number of stored patterns does not decrease with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
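
    For intuition only, here is a toy recall loop with noisy internal computations. It is not the paper's model (which relies on structured pattern sets and graph-based inference and stores exponentially many patterns); it only shows where an internal-noise parameter enters a neuron's update, using a simple Hebbian/Hopfield-style example, and it makes no claim about the noise-helps effect reported in the abstract.

    import random

    def noisy_recall(W, probe, internal_noise, iters=64, rng=random):
        """Iterative bipolar recall with additive internal noise per neuron."""
        x = list(probe)
        n = len(x)
        for _ in range(iters):
            i = rng.randrange(n)                                  # asynchronous update
            s = sum(W[i][j] * x[j] for j in range(n))
            s += rng.uniform(-internal_noise, internal_noise)     # faulty computation
            x[i] = 1 if s >= 0 else -1
        return x

    if __name__ == "__main__":
        random.seed(2)
        # one stored pattern via a simple outer-product (Hebbian) weight matrix
        p = [random.choice([-1, 1]) for _ in range(16)]
        W = [[(p[i] * p[j]) if i != j else 0 for j in range(16)] for i in range(16)]
        probe = p[:]                      # corrupt a few entries ("external errors")
        for k in (0, 3, 7):
            probe[k] = -probe[k]
        out = noisy_recall(W, probe, internal_noise=0.5)
        print("recall errors after noisy updates:", sum(a != b for a, b in zip(out, p)))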