1,866 research outputs found

    Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the Guruswami-Sudan Algorithm

    Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes is based on Bounded Minimum Distance (BMD) decoders with an erasure option. Such decoders have error/erasure tradeoff factor L = 2, which means that an error is twice as expensive as an erasure in terms of the code's minimum distance. The Guruswami-Sudan (GS) list decoder can be considered the state of the art in algebraic decoding of RS codes. Besides an erasure option, it allows L to be adjusted to values in the range 1 < L <= 2. Based on previous work, we provide formulae that allow the erasure option of decoders with arbitrary L to be exploited optimally (in terms of residual codeword error probability) when the decoder can be used z >= 1 times. We show that BMD decoders with z_BMD decoding trials can result in lower residual codeword error probability than GS decoders with z_GS trials if z_BMD is only slightly larger than z_GS. This is of practical interest since BMD decoders generally have lower computational complexity than GS decoders.
    Comment: Accepted for the 2011 IEEE International Symposium on Information Theory, St. Petersburg, Russia, July 31 - August 05, 2011. 5 pages, 2 figures.
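
    The following minimal sketch (not from the paper) illustrates the per-trial condition implied by the tradeoff factor: with e unerased errors, t erasures, and minimum distance d, a trial is assumed to succeed when L*e + t <= d - 1, which reduces to the classical BMD condition for L = 2. The function name and example values are illustrative only.

```python
def trial_succeeds(e: int, t: int, d: int, L: float = 2.0) -> bool:
    """Error/erasure trial condition L*e + t <= d - 1 (illustrative).

    e: number of symbol errors outside the erased positions
    t: number of erased symbols
    d: minimum distance of the RS code
    L: error/erasure tradeoff factor (L = 2 for BMD, 1 < L <= 2 for GS)
    """
    return L * e + t <= d - 1


# Example: with d = 17, a decoder with L = 1.5 tolerates an error/erasure
# combination that a BMD decoder (L = 2) cannot handle.
assert trial_succeeds(e=9, t=2, d=17, L=1.5)
assert not trial_succeeds(e=9, t=2, d=17, L=2.0)
```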

    On Multiple Decoding Attempts for Reed-Solomon Codes: A Rate-Distortion Approach

    One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is based on using multiple trials of a simple RS decoding algorithm in combination with erasing or flipping a set of symbols or bits in each trial. This paper presents a framework based on rate-distortion (RD) theory to analyze these multiple-decoding algorithms. By defining an appropriate distortion measure between an error pattern and an erasure pattern, the successful decoding condition for a single errors-and-erasures decoding trial becomes equivalent to the distortion being less than a fixed threshold. Finding the best set of erasure patterns then turns into a covering problem which can be solved asymptotically by rate-distortion theory. Thus, the proposed approach can be used to understand the asymptotic performance-versus-complexity trade-off of multiple errors-and-erasures decoding of RS codes. This initial result is also extended in a few directions. The rate-distortion exponent (RDE) is computed to give more precise results for moderate blocklengths. Multiple trials of algebraic soft-decision (ASD) decoding are analyzed using this framework. Analytical and numerical computations of the RD and RDE functions are also presented. Finally, simulation results show that sets of erasure patterns designed using the proposed methods outperform other algorithms with the same number of decoding trials.
    Comment: To appear in the IEEE Transactions on Information Theory (Special Issue on Facets of Coding Theory: From Algorithms to Networks).
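
    As a rough illustration of the framework (an assumption-laden sketch, not the paper's exact definitions), one distortion measure consistent with classical errors-and-erasures decoding charges 2 for an unerased error and 1 for each erasure; a single trial then succeeds exactly when the total distortion stays below the minimum distance.

```python
def distortion(error_pattern, erasure_pattern):
    """Total distortion between an error pattern and an erasure pattern.

    Illustrative per-symbol costs: an unerased error costs 2, an erased
    position costs 1, everything else costs 0. (The paper defines its own
    measure; this choice merely reproduces the condition 2e + t <= d - 1.)
    """
    total = 0
    for in_error, erased in zip(error_pattern, erasure_pattern):
        if erased:
            total += 1
        elif in_error:
            total += 2
    return total


def trial_succeeds(error_pattern, erasure_pattern, d_min):
    # Successful errors-and-erasures decoding <=> distortion below a threshold.
    return distortion(error_pattern, erasure_pattern) <= d_min - 1
```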

    Optimal Thresholds for GMD Decoding with (L+1)/L-extended Bounded Distance Decoders

    We investigate threshold-based multi-trial decoding of concatenated codes with an inner Maximum-Likelihood decoder and an outer error/erasure (L+1)/L-extended Bounded Distance decoder, i.e. a decoder which corrects e errors and t erasures if e(L+1)/L + t <= d - 1, where d is the minimum distance of the outer code and L is a positive integer. This is a generalization of Forney's GMD decoding, which was considered only for L = 1, i.e. outer Bounded Minimum Distance decoding. One important example of (L+1)/L-extended Bounded Distance decoding is the decoding of L-Interleaved Reed-Solomon codes. Our main contribution is a threshold location formula, which allows unreliable inner decoding results to be erased optimally for a given number of decoding trials and parameter L. Here, optimal means that the residual codeword error probability of the concatenated code is minimized. We give an estimate of this probability for any number of decoding trials.
    Comment: Accepted for the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, June 13 - 18, 2010. 5 pages, 2 figures.
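
    A small illustrative sketch of the trial structure (assumed for this example; the paper's contribution is the optimal threshold location formula, which is not reproduced here): each trial erases a growing set of the least reliable inner decoding results and checks the (L+1)/L-extended Bounded Distance condition.

```python
def extended_bd_success(e: int, t: int, d: int, L: int) -> bool:
    """(L+1)/L-extended Bounded Distance condition: e*(L+1)/L + t <= d - 1."""
    return e * (L + 1) / L + t <= d - 1


def gmd_style_trials(reliabilities, d, L, erase_counts, true_error_positions):
    """Hypothetical simulation loop: trial j erases the erase_counts[j] least
    reliable inner results and tests the condition above.

    reliabilities: per-position reliability of the inner decoding results
    true_error_positions: set of outer positions that are actually wrong
    Returns the index of the first successful trial, or None.
    """
    order = sorted(range(len(reliabilities)), key=lambda p: reliabilities[p])
    for j, count in enumerate(erase_counts):
        erased = set(order[:count])
        e = len(true_error_positions - erased)  # unerased outer errors
        t = len(erased)                         # erased outer positions
        if extended_bd_success(e, t, d, L):
            return j
    return None
```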

    A Rate-Distortion Exponent Approach to Multiple Decoding Attempts for Reed-Solomon Codes

    Algorithms based on multiple decoding attempts of Reed-Solomon (RS) codes have recently attracted new attention. Choosing decoding candidates based on rate-distortion (R-D) theory, as proposed previously by the authors, currently provides the best performance-versus-complexity trade-off. In this paper, an analysis based on the rate-distortion exponent (RDE) is used to directly minimize the exponential decay rate of the error probability. This enables rigorous bounds on the error probability for finite-length RS codes and leads to modest performance gains. As a byproduct, a numerical method is derived that computes the rate-distortion exponent for independent non-identical sources. Analytical results are given for errors/erasures decoding.
    Comment: Accepted for presentation at the 2010 IEEE International Symposium on Information Theory (ISIT 2010), Austin, TX, USA.

    A New Chase-type Soft-decision Decoding Algorithm for Reed-Solomon Codes

    This paper addresses three relevant issues arising in designing Chase-type algorithms for Reed-Solomon codes: 1) how to choose the set of testing patterns; 2) given the set of testing patterns, what is the optimal testing order in the sense that the most-likely codeword is expected to appear earlier; and 3) how to identify the most-likely codeword. A new Chase-type soft-decision decoding algorithm is proposed, referred to as the tree-based Chase-type algorithm. The proposed algorithm takes the set of all vectors as the set of testing patterns, and hence definitely delivers the most-likely codeword provided that sufficient computational resources are available. All the testing patterns are arranged in an ordered rooted tree according to the likelihood bounds of the possibly generated codewords. While performing the algorithm, the ordered rooted tree is constructed progressively by adding at most two leaves at each trial. The ordered tree naturally induces a sufficient condition for the most-likely codeword; that is, whenever the proposed algorithm exits before a preset maximum number of trials is reached, the output codeword must be the most-likely one. When the proposed algorithm is combined with the Guruswami-Sudan (GS) algorithm, each trial can be implemented in an extremely simple way by removing one old point and interpolating one new point. Simulation results show that the proposed algorithm performs better than the recently proposed Chase-type algorithm by Bellorado et al. with fewer trials, given the same maximum number of trials. Also proposed are simulation-based performance bounds on the MLD algorithm, which are utilized to illustrate the near-optimality of the proposed algorithm in the high-SNR region. In addition, the proposed algorithm admits decoding with a likelihood threshold, which searches for the most-likely codeword within a Euclidean sphere rather than a Hamming sphere.
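
    The ordered-tree idea can be pictured with a generic best-first enumeration of test patterns (a sketch under simplifying assumptions, not the paper's construction, which orders patterns by likelihood bounds on the generated codewords and re-interpolates one point per trial): each popped node spawns at most two children, and patterns come out in nondecreasing reliability cost.

```python
import heapq


def ordered_test_patterns(reliabilities, max_patterns):
    """Enumerate flip patterns (positions to invert in the hard decision) in
    nondecreasing total reliability cost, growing a rooted tree in which each
    popped node spawns at most two children.  Generic sketch only.
    """
    # Sort positions so that cheaper (less reliable) flips come first.
    order = sorted(range(len(reliabilities)), key=lambda p: reliabilities[p])
    cost = [reliabilities[p] for p in order]

    heap = [(0.0, ())]  # (total cost, strictly increasing tuple of indices)
    patterns = []
    while heap and len(patterns) < max_patterns:
        c, idx = heapq.heappop(heap)
        patterns.append({order[i] for i in idx})
        if not idx:
            if cost:
                heapq.heappush(heap, (cost[0], (0,)))  # root's single child
            continue
        j = idx[-1]
        if j + 1 < len(cost):
            # Child 1: slide the largest flipped index one step to the right.
            heapq.heappush(heap, (c - cost[j] + cost[j + 1], idx[:-1] + (j + 1,)))
            # Child 2: additionally flip the next position.
            heapq.heappush(heap, (c + cost[j + 1], idx + (j + 1,)))
    return patterns
```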

    Communication Efficient Secret Sharing

    A secret sharing scheme is a method to store information securely and reliably. Particularly, in a threshold secret sharing scheme, a secret is encoded into n shares, such that any set of at least t_1 shares suffices to decode the secret, and any set of at most t_2 < t_1 shares reveals no information about the secret. Assuming that each party holds a share and a user wishes to decode the secret by receiving information from a set of parties, the question we study is how to minimize the amount of communication between the user and the parties. We show that the necessary amount of communication, termed the "decoding bandwidth", decreases as the number of parties that participate in decoding increases. We prove a tight lower bound on the decoding bandwidth and construct secret sharing schemes achieving the bound. In particular, we design a scheme that achieves the optimal decoding bandwidth when d parties participate in decoding, universally for all t_1 <= d <= n. The scheme is based on Shamir's secret sharing scheme and preserves its simplicity and efficiency. In addition, we consider secure distributed storage, where the proposed communication-efficient secret sharing schemes further improve disk access complexity during decoding.
    Comment: Submitted to the IEEE Transactions on Information Theory. New references and a new construction added.
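
    Since the construction builds on Shamir's scheme, a minimal sketch of plain (t, n) Shamir secret sharing over a prime field may help fix ideas; the communication-efficient layer (reduced per-party download when d >= t_1 parties respond) is the paper's contribution and is not shown. PRIME, share, and reconstruct are illustrative names.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for this toy example


def share(secret: int, n: int, t: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = share(secret=42, n=5, t=3)
assert reconstruct(shares[:3]) == 42
```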

    Optimal Iris Fuzzy Sketches

    Fuzzy sketches, introduced as a link between biometry and cryptography, are a way of handling biometric data matching as an error correction issue. We focus here on iris biometrics and look for the best error-correcting code in that respect. We show that two-dimensional iterative min-sum decoding leads to results near the theoretical limits. In particular, we evaluate our techniques on the Iris Challenge Evaluation (ICE) database and validate our findings.
    Comment: 9 pages. Submitted to the IEEE Conference on Biometrics: Theory, Applications and Systems, 2007, Washington D.C.
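
    For intuition, a toy code-offset fuzzy sketch with a repetition code is shown below (an illustrative construction only, assuming the standard code-offset template; the paper works with iris codes and two-dimensional iterative min-sum decoding of a much stronger code).

```python
import random

R = 7  # repetition factor; biometric length is assumed to be a multiple of R


def encode(bits):
    return [b for b in bits for _ in range(R)]


def decode(bits):
    # Majority vote per block of R repeated bits.
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]


def sketch(biometric):
    """Store biometric XOR random-codeword; the sketch alone hides the
    biometric up to the properties of the code."""
    message = [random.randint(0, 1) for _ in range(len(biometric) // R)]
    codeword = encode(message)
    return [b ^ c for b, c in zip(biometric, codeword)]


def recover(sketch_bits, noisy_biometric):
    """A fresh, noisy reading recovers the original biometric if the number of
    differing bits stays within the code's correction capability."""
    noisy_codeword = [b ^ s for b, s in zip(noisy_biometric, sketch_bits)]
    codeword = encode(decode(noisy_codeword))
    return [s ^ c for s, c in zip(sketch_bits, codeword)]
```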