
    New Set of Codes for the Maximum-Likelihood Decoding Problem

    The maximum-likelihood decoding problem is known to be NP-hard for general linear codes and for Reed-Solomon codes. In this paper, we introduce the notion of A-covered codes, that is, codes that can be decoded through a polynomial-time algorithm A whose decoding bound is beyond the covering radius. For these codes, we show that the maximum-likelihood decoding problem is solvable in time polynomial in the code parameters. Focusing on binary BCH codes, we were able to find several examples of A-covered codes, including two codes for which the maximum-likelihood decoding problem can be solved in quasi-quadratic time. (Comment: in Yet Another Conference on Cryptography, Porquerolles, France, 2010.)
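    As a toy illustration of the covering-radius condition behind A-covered codes (our own sketch, not the paper's algorithm A), the Python snippet below brute-forces the covering radius of a small binary code; once a decoder's radius t reaches the covering radius, the nearest codeword to any received word is within the decoder's reach, so decoding up to radius t returns a maximum-likelihood codeword.

```python
# A minimal sketch, assuming a toy [3,1] repetition code; not the paper's
# construction. The covering radius is the largest distance from any word
# of {0,1}^n to its nearest codeword.
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def covering_radius(code, n):
    return max(min(hamming(w, c) for c in code)
               for w in product((0, 1), repeat=n))

code = [(0, 0, 0), (1, 1, 1)]    # [3,1] binary repetition code
rho = covering_radius(code, 3)   # rho = 1 for this code
t = 1                            # decoding radius of a toy decoder
# If t >= rho, every received word has its nearest codeword within
# distance t, so a radius-t decoder can return an ML codeword.
print(rho, t >= rho)             # -> 1 True
```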

    Linear-time list recovery of high-rate expander codes

    We show that expander codes, when properly instantiated, are high-rate list-recoverable codes with linear-time list-recovery algorithms. List-recoverable codes have been useful recently in constructing efficiently list-decodable codes, as well as explicit constructions of matrices for compressive sensing and group testing. Previous list-recoverable codes with linear-time decoding algorithms all had rate at most 1/2; in contrast, our codes can have rate $1 - \epsilon$ for any $\epsilon > 0$. We can plug our high-rate codes into a construction of Meir (2014) to obtain linear-time list-recoverable codes of arbitrary rates, which approach the optimal trade-off between the number of non-trivial lists provided and the rate of the code. While list recovery is interesting in its own right, our primary motivation is applications to list decoding. A slight strengthening of our result would imply linear-time and optimally list-decodable codes for all rates, and our work is a step toward solving this important problem.
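    For readers new to the list-recovery task itself, here is a brute-force Python sketch of the definition (our illustration only; the paper's contribution is achieving this in linear time on high-rate expander codes, which naive search does not):

```python
# A minimal sketch, assuming a toy binary code of length 4; not the paper's
# algorithm. List recovery (with no errors): given a candidate list S_i at
# each position, return every codeword consistent with all the lists.
def list_recover(code, lists):
    return [c for c in code
            if all(sym in S for sym, S in zip(c, lists))]

code = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 0, 1, 0)]
lists = [{0, 1}, {0}, {0, 1}, {0}]
print(list_recover(code, lists))   # -> [(0, 0, 0, 0), (1, 0, 1, 0)]
```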

    Re-encoding reformulation and application to Welch-Berlekamp algorithm

    The main decoding algorithms for Reed-Solomon codes are based on a bivariate interpolation step, which is expensive in time complexity. Many interpolation methods have been proposed to decrease the complexity of this procedure, but they remain expensive. Koetter, Ma and Vardy therefore proposed in 2010 a technique, called re-encoding, which reduces the practical running time. However, this trick applies only to the Koetter interpolation algorithm. We propose a reformulation of re-encoding that holds for any interpolation method; the assumption underlying this reformulation, however, restricts its application to the Welch-Berlekamp algorithm.
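    Since the reformulation is applied to the Welch-Berlekamp algorithm, a compact sketch of classical Welch-Berlekamp decoding over a toy prime field may help; this is our illustration of the base algorithm only (field size, code parameters, and test data are arbitrary choices), and the re-encoding speed-up itself is not implemented:

```python
# A minimal Welch-Berlekamp sketch over GF(13); illustrative only.
# Find Q (deg <= e+k-1) and monic E (deg e) with Q(x_i) = y_i * E(x_i)
# for all points; with at most e errors, the message polynomial is Q / E.
p = 13

def solve_mod(A, b):
    """Gaussian elimination mod p; returns one solution (free vars = 0)."""
    m, n = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    r, piv = 0, []
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c] % p), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [v * inv % p for v in M[r]]
        for i in range(m):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(u - f * w) % p for u, w in zip(M[i], M[r])]
        piv.append(c)
        r += 1
    x = [0] * n
    for i, c in enumerate(piv):
        x[c] = M[i][n]
    return x

def poly_div(Q, E):
    """Exact division by a monic E, coefficients low-to-high, mod p."""
    Q, d = Q[:], len(E) - 1
    out = [0] * (len(Q) - d)
    for i in range(len(Q) - 1, d - 1, -1):
        c = Q[i]
        out[i - d] = c
        for j, ej in enumerate(E):
            Q[i - d + j] = (Q[i - d + j] - c * ej) % p
    return out

def welch_berlekamp(xs, ys, k):
    e = (len(xs) - k) // 2
    # unknowns: q_0..q_{e+k-1} and E_0..E_{e-1} (E is monic of degree e)
    A = [[pow(x, j, p) for j in range(e + k)] +
         [(-y * pow(x, j, p)) % p for j in range(e)]
         for x, y in zip(xs, ys)]
    b = [y * pow(x, e, p) % p for x, y in zip(xs, ys)]
    sol = solve_mod(A, b)
    Q, E = sol[:e + k], sol[e + k:] + [1]
    return poly_div(Q, E)             # the message polynomial f = Q / E

# f(x) = 2 + 3x evaluated at six points, then corrupted at positions 1 and 4
xs = [0, 1, 2, 3, 4, 5]
ys = [2, 7, 8, 11, 9, 4]              # error-free values: [2, 5, 8, 11, 1, 4]
print(welch_berlekamp(xs, ys, k=2))   # -> [2, 3]
```

    The re-encoding idea, in essence, is to subtract a codeword that matches the received word on k positions, so that many y_i become zero and the interpolation problem shrinks.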

    It'll probably work out: improved list-decoding through random operations

    In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. There are two main points to this. First, our goal is to advance our (combinatorial) understanding of list-decodability, by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding: (1) We show the existence of binary codes that are combinatorially list-decodable from a $1/2-\epsilon$ fraction of errors with optimal rate $\Omega(\epsilon^2)$ that can be encoded in linear time. (2) We show that any code with $\Omega(1)$ relative distance, when randomly folded (see the sketch after this abstract), is combinatorially list-decodable from a $1-\epsilon$ fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list-decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes. (3) We show that any code which is list-decodable with suboptimal list sizes has many subcodes which have near-optimal list sizes, while retaining the error-correcting capabilities of the original code. This generalizes recent results where subspace-evasive sets have been used to reduce list sizes of codes that achieve list-decoding capacity.
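    As a concrete picture of the folding operation in item (2) (our sketch; the function name random_fold and the block length m are our own illustrative choices), random folding permutes the coordinates of a codeword and then groups m consecutive symbols into a single symbol over the larger alphabet:

```python
# A minimal sketch of random folding; illustrative only, not the paper's
# exact construction. A permutation shuffles the coordinates, and blocks of
# m symbols become single symbols over the alphabet Sigma^m.
import random

def random_fold(codeword, m, perm=None):
    if perm is None:
        perm = random.sample(range(len(codeword)), len(codeword))
    shuffled = [codeword[i] for i in perm]
    return [tuple(shuffled[i:i + m]) for i in range(0, len(shuffled), m)]

c = [0, 1, 1, 0, 1, 0, 0, 1]
print(random_fold(c, m=2))   # e.g. [(1, 0), (0, 1), (0, 0), (1, 1)]
```

    One corrupted folded symbol accounts for up to m errors in the original coordinates, which is the usual intuition for why folding can improve list-decodability.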

    List-Decoding of Binary Goppa Codes up to the Binary Johnson Bound

    We study the list-decoding problem for alternant codes (which obviously includes that of classical Goppa codes). The major consideration here is to take into account the (small) size of the alphabet. This amounts to comparing the generic Johnson bound to the q-ary Johnson bound. The most favourable case is q = 2, for which the decoding radius is greatly improved. Even though the announced result, namely the list-decoding radius of binary Goppa codes, is new, we acknowledge that it can be assembled from separate previous sources, which may be little known and in which binary Goppa codes were apparently not considered; only D. J. Bernstein has treated the case of binary Goppa codes, in a preprint. References are given in the introduction. We propose a self-contained and simplified treatment, together with a complexity analysis of the studied algorithm, which is quadratic in the blocklength n when decoding away from the maximum relative decoding radius.
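    The two bounds being compared are the standard Johnson radii; for a code of relative distance $\delta$ over a $q$-ary alphabet they read (our transcription of the standard formulas):

```latex
\begin{align*}
  J(\delta)   &= 1 - \sqrt{1 - \delta}
      && \text{(generic Johnson radius)}\\
  J_q(\delta) &= \Bigl(1 - \tfrac{1}{q}\Bigr)
                 \Bigl(1 - \sqrt{1 - \tfrac{q\delta}{q - 1}}\Bigr)
      && \text{($q$-ary Johnson radius)}\\
  J_2(\delta) &= \tfrac{1}{2}\Bigl(1 - \sqrt{1 - 2\delta}\Bigr)
      && \text{(binary case, $q = 2$)}
\end{align*}
```

    For every $0 < \delta < 1/2$ one has $J_2(\delta) > J(\delta)$, which is exactly the improvement for $q = 2$ referred to above.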

    An algorithm for list decoding number field codes

    We present an algorithm for list decoding codewords of algebraic number field codes in polynomial time. This is the first explicit procedure for decoding number field codes, whose construction was previously described by Lenstra [12] and Guruswami [8]. We rely on a new algorithm for computing the Hermite normal form of the basis of an OK-module, due to Biasse and Fieker [2], where OK is the ring of integers of a number field K.
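    Since the decoder leans on Hermite normal form computations, a toy HNF over the plain integers may help fix ideas (our simplification: the algorithm of Biasse and Fieker [2] computes HNFs of bases of OK-modules, which this integer-only version does not attempt):

```python
# A minimal row-style Hermite normal form sketch over Z; illustrative only.
# Result: upper triangular, positive pivots, entries above each pivot
# reduced modulo that pivot.
def hnf(A):
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        # gcd-style row operations: leave one nonzero entry in column c
        # among rows r..m-1
        while True:
            nz = [i for i in range(r, m) if A[i][c]]
            if len(nz) <= 1:
                break
            i0 = min(nz, key=lambda i: abs(A[i][c]))
            A[r], A[i0] = A[i0], A[r]
            for i in range(r + 1, m):
                q = A[i][c] // A[r][c]
                A[i] = [a - q * b for a, b in zip(A[i], A[r])]
        nz = [i for i in range(r, m) if A[i][c]]
        if not nz:
            continue                    # no pivot in this column
        A[r], A[nz[0]] = A[nz[0]], A[r]
        if A[r][c] < 0:                 # make the pivot positive
            A[r] = [-a for a in A[r]]
        for i in range(r):              # reduce entries above the pivot
            q = A[i][c] // A[r][c]
            A[i] = [a - q * b for a, b in zip(A[i], A[r])]
        r += 1
    return A

print(hnf([[2, 4, 4], [-6, 6, 12], [10, 4, 16]]))
# -> [[2, 0, 120], [0, 2, 20], [0, 0, 156]]
```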

    Hamming Approximation of NP Witnesses

    Given a satisfiable 3-SAT formula, how hard is it to find an assignment to the variables that has Hamming distance at most n/2 to a satisfying assignment? More generally, consider any polynomial-time verifier for any NP-complete language. A d(n)-Hamming-approximation algorithm for the verifier is one that, given any member x of the language, outputs in polynomial time a string a with Hamming distance at most d(n) to some witness w such that (x,w) is accepted by the verifier. Previous results have shown that, if P != NP, then every NP-complete language has a verifier for which there is no (n/2-n^(2/3+d))-Hamming-approximation algorithm, for various constants d > 0. Our main result is that, if P != NP, then every paddable NP-complete language has a verifier that admits no (n/2+O(sqrt(n log n)))-Hamming-approximation algorithm. That is, one cannot get even half the bits right. We also consider natural verifiers for various well-known NP-complete problems. They do have n/2-Hamming-approximation algorithms but, if P != NP, no (n/2-n^epsilon)-Hamming-approximation algorithms for any constant epsilon > 0. We show similar results for randomized algorithms.
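    The baseline against which these lower bounds bite is worth making explicit (our sketch): a uniformly random string has expected Hamming distance exactly n/2 to any fixed witness, and of the two constant strings 0^n and 1^n at least one is always within n/2 of any witness.

```python
# A minimal sketch of the trivial n/2 baseline; illustrative only. A uniform
# random guess sits at expected distance n/2 from any fixed witness, since
# each bit disagrees independently with probability 1/2.
import random

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

n = 1000
witness = [random.randrange(2) for _ in range(n)]  # stand-in hidden witness
guess = [random.randrange(2) for _ in range(n)]
print(hamming(guess, witness))        # concentrates around n/2 = 500
# Note: dist(0^n, w) + dist(1^n, w) = n, so one of the two constant
# strings is always within n/2 of w.
```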