2,190 research outputs found

    Algebraic List-decoding of Subspace Codes

    Subspace codes were introduced in order to correct errors and erasures for randomized network coding, in the case where the network topology is unknown (the noncoherent case). Subspace codes are collections of subspaces of a certain vector space over a finite field. The Koetter-Kschischang construction of subspace codes is similar to Reed-Solomon codes in that codewords are obtained by evaluating certain (linearized) polynomials. In this paper, we consider the problem of list-decoding the Koetter-Kschischang subspace codes. In a sense, we are able to achieve for these codes what Sudan was able to achieve for Reed-Solomon codes. In order to do so, we have to modify and generalize the original Koetter-Kschischang construction in many important respects. The end result is this: for any integer L, our list-L decoder guarantees successful recovery of the message subspace provided that the normalized dimension of the error is at most L - \frac{L(L+1)}{2}R, where R is the normalized packet rate. Just as in the case of Sudan's list-decoding algorithm, this exceeds the previously best known error-correction radius 1-R, demonstrated by Koetter and Kschischang, for low rates R.
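
    The toy sketch below is our own illustration, not code from the paper: it evaluates a linearized polynomial over GF(2^4) (with an arbitrarily chosen field modulus and example polynomial) and checks that it defines an F_2-linear map. This linearity is the property that lets the Koetter-Kschischang construction send a message polynomial f to the subspace spanned by the points (x_i, f(x_i)).

```python
# Toy sketch (not the paper's algorithm): evaluate a linearized polynomial
# over GF(2^4) and verify that it acts F_2-linearly.
# The field modulus x^4 + x + 1 is an arbitrary choice for illustration.

MOD = 0b10011  # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^4), reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:   # degree reached 4: reduce
            a ^= MOD
    return r

def frob(a, i):
    """Apply the Frobenius map i times: a -> a^(2^i)."""
    for _ in range(i):
        a = gf_mul(a, a)
    return a

def lin_eval(coeffs, x):
    """Evaluate the linearized polynomial f(X) = sum_i coeffs[i] * X^(2^i) at x."""
    acc = 0
    for i, c in enumerate(coeffs):
        acc ^= gf_mul(c, frob(x, i))
    return acc

if __name__ == "__main__":
    f = [0b0011, 0b0101, 0b0001]   # toy message: f(X) = 3*X + 5*X^2 + 1*X^4
    for x in range(16):
        for y in range(16):
            # F_2-linearity: f(x + y) = f(x) + f(y); addition in GF(2^4) is XOR
            assert lin_eval(f, x ^ y) == lin_eval(f, x) ^ lin_eval(f, y)
    print("f is an F_2-linear map, as every linearized polynomial must be")
```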

    Evading Subspaces Over Large Fields and Explicit List-decodable Rank-metric Codes

    We construct an explicit family of linear rank-metric codes over any field F that enables efficient list decoding up to a fraction rho of errors in the rank metric with a rate of 1-rho-eps, for any desired rho in (0,1) and eps > 0. Previously, a Monte Carlo construction of such codes was known, but this is in fact the first explicit construction of positive-rate rank-metric codes for list decoding beyond the unique decoding radius. Our codes are explicit subcodes of the well-known Gabidulin codes, which encode linearized polynomials of low degree via their values at a collection of linearly independent points. The subcode is picked by restricting the message polynomials to an F-subspace that evades certain structured subspaces over an extension field of F. These structured spaces arise from the linear-algebraic list decoder for Gabidulin codes due to Guruswami and Xing (STOC'13). Our construction is obtained by combining subspace designs constructed by Guruswami and Kopparty (FOCS'13) with subspace-evasive varieties due to Dvir and Lovett (STOC'12). We establish a similar result for subspace codes, which are collections of subspaces, every pair of which has low-dimensional intersection, and which have received much attention recently in the context of network coding. We also give explicit subcodes of folded Reed-Solomon (RS) codes with small folding order that are list-decodable (in the Hamming metric) with optimal redundancy, motivated by the fact that list decoding RS codes reduces to list decoding such folded RS codes. However, as we only list decode a subcode of these codes, the Johnson radius continues to be the best known error fraction for list decoding RS codes.
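
    As a small illustration of the rank metric itself (a toy sketch under our own conventions, not the paper's construction), the snippet below expands two codewords over GF(2^4) into GF(2)-coordinate matrices and computes the GF(2)-rank of their difference, which is the distance measure that Gabidulin codes and the list decoder above operate in. The codewords and the basis convention are made up for illustration.

```python
# Toy sketch: rank distance between two vectors over GF(2^4), whose symbols
# are integers 0..15 encoding coordinates over the GF(2)-basis {1, x, x^2, x^3}.

def rank_gf2(matrix_rows):
    """Rank over GF(2) of a matrix given as a list of row bitmasks."""
    rank = 0
    rows = [r for r in matrix_rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        top = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> top) & 1 else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def rank_distance(c1, c2, m=4):
    """Rank distance: GF(2)-rank of the m x n coordinate expansion of c1 - c2."""
    diff = [a ^ b for a, b in zip(c1, c2)]   # subtraction in characteristic 2
    # Row i of the expansion collects the i-th bit of every symbol.
    rows = [sum(((sym >> i) & 1) << j for j, sym in enumerate(diff))
            for i in range(m)]
    return rank_gf2(rows)

if __name__ == "__main__":
    c1 = [0b0001, 0b0010, 0b0100, 0b1000, 0b0011]   # toy vectors, n = 5
    c2 = [0b0001, 0b0010, 0b0110, 0b1000, 0b0011]
    # The difference is nonzero in one position and spans a 1-dimensional
    # GF(2)-space, so the "error" has rank 1.
    print("rank distance:", rank_distance(c1, c2))   # -> 1
```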

    Linear-algebraic list decoding of folded Reed-Solomon codes

    Folded Reed-Solomon codes are an explicit family of codes that achieve the optimal trade-off between rate and error-correction capability: specifically, for any \eps > 0, the author and Rudra (2006,08) presented an n^{O(1/\eps)} time algorithm to list decode appropriate folded RS codes of rate R from a fraction 1-R-\eps of errors. The algorithm is based on multivariate polynomial interpolation and root-finding over extension fields. It was noted by Vadhan that interpolating a linear polynomial suffices if one settles for a smaller decoding radius (but still enough for a statement of the above form). Here we give a simple linear-algebra based analysis of this variant that eliminates the need for the computationally expensive root-finding step over extension fields (and indeed any mention of extension fields). The entire list decoding algorithm is linear-algebraic, solving one linear system for the interpolation step, and another linear system to find a small subspace of candidate solutions. Except for the step of pruning this subspace, the algorithm can be implemented to run in {\em quadratic} time. The theoretical drawback of folded RS codes is that both the decoding complexity and the proven worst-case list-size bound are n^{\Omega(1/\eps)}. By combining the above idea with a pseudorandom subset of all polynomials as messages, we get a Monte Carlo construction achieving a list-size bound of O(1/\eps^2), which is quite close to the existential O(1/\eps) bound (however, the decoding complexity remains n^{\Omega(1/\eps)}). Our work highlights that constructing an explicit {\em subspace-evasive} subset that has small intersection with low-dimensional subspaces could lead to explicit codes with better list-decoding guarantees. Comment: 16 pages. Extended abstract in Proc. of IEEE Conference on Computational Complexity (CCC), 201
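
    To make the folding operation concrete, here is a toy sketch (our own illustration; the field GF(13), the generator 2, and the folding parameter s = 3 are arbitrary choices, not parameters from the paper): a Reed-Solomon codeword evaluated at consecutive powers of the generator is regrouped into blocks of s symbols, so each folded symbol lies in GF(13)^3.

```python
# Toy sketch of folding a Reed-Solomon codeword over the prime field GF(13).

p = 13        # toy field size (assumption for illustration)
gamma = 2     # 2 generates the multiplicative group of GF(13)
s = 3         # folding parameter
n = 12        # code length before folding (= p - 1, all nonzero points)

def rs_encode(msg):
    """Evaluate the message polynomial at gamma^0, ..., gamma^(n-1)."""
    points = [pow(gamma, i, p) for i in range(n)]
    return [sum(c * pow(x, j, p) for j, c in enumerate(msg)) % p for x in points]

def fold(codeword, s):
    """Group s consecutive RS symbols into one folded symbol."""
    return [tuple(codeword[i:i + s]) for i in range(0, len(codeword), s)]

if __name__ == "__main__":
    msg = [5, 1, 7, 0]                    # toy message polynomial of degree < 4
    c = rs_encode(msg)
    print("RS codeword:      ", c)
    print("3-folded codeword:", fold(c, s))
    # An error in one folded symbol corrupts at most s of the underlying
    # RS symbols, which is what makes the larger decoding radius possible.
```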

    It'll probably work out: improved list-decoding through random operations

    In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. There are two main points to this. First, our goal is to advance our (combinatorial) understanding of list-decodability, by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding: (1) We show the existence of binary codes that are combinatorially list-decodable from a 1/2-\epsilon fraction of errors with optimal rate \Omega(\epsilon^2) that can be encoded in linear time. (2) We show that any code with \Omega(1) relative distance, when randomly folded, is combinatorially list-decodable from a 1-\epsilon fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list-decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes. (3) We show that any code which is list-decodable with suboptimal list sizes has many subcodes which have near-optimal list sizes, while retaining the error-correcting capabilities of the original code. This generalizes recent results where subspace-evasive sets have been used to reduce list sizes of codes that achieve list-decoding capacity.
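
    The matrix view used above is easy to make concrete. The sketch below (a toy illustration with made-up data, not code from the paper) stacks the codewords of a small binary code as columns and shows how puncturing deletes rows, folding groups rows into tuples, and passing to a subcode keeps a subset of columns.

```python
# Toy sketch of the "codewords as columns" matrix view and three operations
# on it: puncturing (drop rows), folding (group rows), subcode (drop columns).

import random

def code_matrix(codewords):
    """Columns are codewords; entry [i][j] is symbol i of codeword j."""
    n = len(codewords[0])
    return [[cw[i] for cw in codewords] for i in range(n)]

def puncture(matrix, keep_rows):
    """Keep only the coordinate positions (rows) in keep_rows."""
    return [matrix[i] for i in keep_rows]

def fold_rows(matrix, s):
    """Group s consecutive rows into one row of s-tuples (random folding
    would first apply a random permutation to the rows)."""
    return [[tuple(col) for col in zip(*matrix[i:i + s])]
            for i in range(0, len(matrix), s)]

def subcode(matrix, keep_cols):
    """Keep only a subset of the codewords (columns)."""
    return [[row[j] for j in keep_cols] for row in matrix]

if __name__ == "__main__":
    random.seed(0)
    # A toy "code": 4 codewords of length 6 over the alphabet {0, 1}.
    codewords = [[random.randint(0, 1) for _ in range(6)] for _ in range(4)]
    M = code_matrix(codewords)
    print("punctured:", puncture(M, [0, 2, 4]))
    print("2-folded: ", fold_rows(M, 2))
    print("subcode:  ", subcode(M, [0, 3]))
```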