
    On the List-Decodability of Random Linear Rank-Metric Codes

    The list-decodability of random linear rank-metric codes is shown to match that of random rank-metric codes. Specifically, an $\mathbb{F}_q$-linear rank-metric code over $\mathbb{F}_q^{m \times n}$ of rate $R = (1-\rho)(1-\frac{n}{m}\rho)-\varepsilon$ is shown to be (with high probability) list-decodable up to fractional radius $\rho \in (0,1)$ with lists of size at most $\frac{C_{\rho,q}}{\varepsilon}$, where $C_{\rho,q}$ is a constant depending only on $\rho$ and $q$. This matches the bound for random rank-metric codes (up to constant factors). The proof adapts the approach of Guruswami, Håstad, and Kopparty (STOC 2010), who established a similar result for the Hamming metric case, to the rank-metric setting.
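    As a quick sanity check on the trade-off stated above, the following sketch plugs sample parameters into the rate expression $R = (1-\rho)(1-\frac{n}{m}\rho)-\varepsilon$ and the list-size bound $\frac{C_{\rho,q}}{\varepsilon}$. The parameters and the numeric value chosen for $C_{\rho,q}$ are placeholders for illustration, not constants from the paper.

```python
# Illustrative only: evaluates the rate/list-size trade-off stated in the
# abstract above for example parameters. The constant C_{rho,q} is left
# symbolic in the paper; the value used here is a placeholder.

def achievable_rate(rho: float, n: int, m: int, eps: float) -> float:
    """Rate R = (1 - rho) * (1 - (n/m) * rho) - eps for F_q-linear
    rank-metric codes in F_q^{m x n}, list-decodable to radius rho."""
    assert 0 < rho < 1 and eps > 0 and n <= m
    return (1 - rho) * (1 - (n / m) * rho) - eps

def list_size_bound(c_rho_q: float, eps: float) -> float:
    """List size bound C_{rho,q} / eps (C_{rho,q} depends only on rho and q)."""
    return c_rho_q / eps

if __name__ == "__main__":
    rho, n, m, eps = 0.3, 32, 64, 0.01
    print(f"rate  ~ {achievable_rate(rho, n, m, eps):.3f}")
    print(f"lists <= {list_size_bound(8.0, eps):.0f}  (placeholder C = 8.0)")
```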

    Evading Subspaces Over Large Fields and Explicit List-decodable Rank-metric Codes

    We construct an explicit family of linear rank-metric codes over any field $\mathbb{F}$ that enables efficient list decoding up to a fraction $\rho$ of errors in the rank metric with rate $1-\rho-\varepsilon$, for any desired $\rho \in (0,1)$ and $\varepsilon > 0$. Previously, a Monte Carlo construction of such codes was known, but this is in fact the first explicit construction of positive-rate rank-metric codes for list decoding beyond the unique decoding radius. Our codes are explicit subcodes of the well-known Gabidulin codes, which encode linearized polynomials of low degree via their values at a collection of linearly independent points. The subcode is picked by restricting the message polynomials to an $\mathbb{F}$-subspace that evades certain structured subspaces over an extension field of $\mathbb{F}$. These structured spaces arise from the linear-algebraic list decoder for Gabidulin codes due to Guruswami and Xing (STOC 2013). Our construction is obtained by combining subspace designs constructed by Guruswami and Kopparty (FOCS 2013) with subspace-evasive varieties due to Dvir and Lovett (STOC 2012). We establish a similar result for subspace codes, which are collections of subspaces, every pair of which has low-dimensional intersection, and which have received much attention recently in the context of network coding. We also give explicit subcodes of folded Reed-Solomon (RS) codes with small folding order that are list-decodable (in the Hamming metric) with optimal redundancy, motivated by the fact that list decoding RS codes reduces to list decoding such folded RS codes. However, as we only list decode a subcode of these codes, the Johnson radius continues to be the best known error fraction for list decoding RS codes.
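    To make the Gabidulin encoding step concrete, here is a small, self-contained Python sketch (an illustration under arbitrary parameter choices, not the paper's construction): it implements arithmetic in $\mathbb{F}_{2^4}$ with the irreducible polynomial $x^4+x+1$ and encodes a message $(f_0,\dots,f_{k-1})$ as the evaluations of the linearized polynomial $f(x)=\sum_i f_i x^{2^i}$ at $\mathbb{F}_2$-linearly independent points. The subspace-evasive restriction of the message space described above is not shown.

```python
# Minimal sketch of Gabidulin encoding over GF(2^4) (modulus x^4 + x + 1).
# A message is the coefficient list of a linearized polynomial
# f(x) = sum_i f_i * x^(2^i); the codeword is (f(g_1), ..., f(g_n)) at
# F_2-linearly independent points g_j. Field size and points are arbitrary.

MOD = 0b10011  # x^4 + x + 1, irreducible over GF(2)
M = 4          # extension degree: field GF(2^M)

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication of field elements, reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def gf_pow2k(a: int, k: int) -> int:
    """Frobenius power a^(2^k), computed by squaring k times."""
    for _ in range(k):
        a = gf_mul(a, a)
    return a

def xor_sum(values) -> int:
    acc = 0
    for v in values:
        acc ^= v
    return acc

def gabidulin_encode(msg: list, points: list) -> list:
    """Evaluate f(x) = sum_i msg[i] * x^(2^i) at each evaluation point."""
    return [
        xor_sum(gf_mul(fi, gf_pow2k(g, i)) for i, fi in enumerate(msg))
        for g in points
    ]

if __name__ == "__main__":
    # k = 2 message symbols, n = 4 evaluation points (a basis of GF(2^4) over
    # GF(2)), giving minimum rank distance n - k + 1 = 3.
    message = [0b0011, 0b0101]
    points = [0b0001, 0b0010, 0b0100, 0b1000]  # 1, x, x^2, x^3
    print(gabidulin_encode(message, points))
```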

    Bounds on List Decoding of Rank-Metric Codes

    So far, there is no polynomial-time list decoding algorithm (beyond half the minimum distance) for Gabidulin codes, which can be seen as the rank-metric analogue of Reed-Solomon codes. In this paper, we provide bounds on the list size of rank-metric codes in order to understand whether polynomial-time list decoding is possible or whether it works only with exponential time complexity. Three bounds on the list size are proven. The first is an exponential lower bound for Gabidulin codes, showing that for these codes no polynomial-time list decoding beyond the Johnson radius exists. Second, an exponential upper bound is derived, which holds for any rank-metric code of length $n$ and minimum rank distance $d$. The third bound proves that there exists a rank-metric code over $\mathbb{F}_{q^m}$ of length $n \leq m$ such that the list size is exponential in the length for any radius greater than half the minimum rank distance. This implies that there cannot exist a polynomial upper bound depending only on $n$ and $d$, analogous to the Johnson bound in the Hamming metric. All three rank-metric bounds reveal significant differences from the corresponding bounds in the Hamming metric.
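    For reference, the sketch below (an illustration, not part of the paper) evaluates the alphabet-independent Hamming-metric Johnson radius $J(n,d) = n-\sqrt{n(n-d)}$, the radius referred to above, alongside the unique decoding radius $\lfloor (d-1)/2 \rfloor$; the example parameters are arbitrary.

```python
# Illustration of the Johnson radius J(n, d) = n - sqrt(n * (n - d)),
# beyond which the first bound above rules out polynomial list sizes for
# Gabidulin codes. Any code is list-decodable with polynomial list size up
# to J(n, d) in the Hamming metric; the paper shows no comparable bound in
# n and d can hold in the rank metric. Example parameters are arbitrary.

from math import sqrt, floor

def half_min_distance(d: int) -> int:
    return (d - 1) // 2

def johnson_radius(n: int, d: int) -> float:
    return n - sqrt(n * (n - d))

if __name__ == "__main__":
    n, d = 64, 33
    print(f"unique decoding radius: {half_min_distance(d)}")          # 16
    print(f"Johnson radius:         {floor(johnson_radius(n, d))}")   # 19
```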

    It'll probably work out: improved list-decoding through random operations

    In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. There are two main points to this. First, our goal is to advance our (combinatorial) understanding of list-decodability, by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding: (1) We show the existence of binary codes that are combinatorially list-decodable from a $1/2-\epsilon$ fraction of errors with optimal rate $\Omega(\epsilon^2)$ that can be encoded in linear time. (2) We show that any code with $\Omega(1)$ relative distance, when randomly folded, is combinatorially list-decodable from a $1-\epsilon$ fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list-decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes. (3) We show that any code which is list-decodable with suboptimal list sizes has many subcodes which have near-optimal list sizes, while retaining the error-correcting capabilities of the original code. This generalizes recent results where subspace-evasive sets have been used to reduce list sizes of codes that achieve list-decoding capacity.
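    To illustrate the kind of operation the framework considers, here is a small Python sketch (an illustration, not the paper's construction) of random folding: every codeword's coordinates are shuffled by one shared random permutation and then grouped into blocks of size $s$, producing a code over the larger alphabet $\Sigma^s$ in which errors affect whole blocks at once.

```python
# Illustrative sketch of "random folding": apply one shared random permutation
# to every codeword's coordinates, then bundle each block of s consecutive
# symbols into one symbol over the larger alphabet Sigma^s.
# The toy code and parameters below are arbitrary choices, not the paper's.

import random
from typing import List, Tuple

def random_fold(code: List[List[int]], s: int, seed: int = 0) -> List[List[Tuple[int, ...]]]:
    """Fold every codeword with the same random coordinate permutation."""
    n = len(code[0])
    assert n % s == 0, "block size must divide the code length"
    perm = list(range(n))
    random.Random(seed).shuffle(perm)  # one permutation shared by all codewords
    folded = []
    for c in code:
        shuffled = [c[perm[i]] for i in range(n)]
        folded.append([tuple(shuffled[i:i + s]) for i in range(0, n, s)])
    return folded

if __name__ == "__main__":
    # Toy binary code with 4 codewords of length 8, folded with s = 2.
    toy_code = [
        [0, 0, 0, 0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0, 0, 0, 0],
        [0, 0, 1, 1, 1, 1, 0, 0],
        [1, 1, 0, 0, 1, 1, 1, 1],
    ]
    for folded_codeword in random_fold(toy_code, s=2, seed=42):
        print(folded_codeword)
```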