    List Decoding of Locally Repairable Codes

    We show that locally repairable codes (LRCs) can be list decoded efficiently beyond the Johnson radius for a large range of parameters by exploiting their local error-correction capabilities. The new decoding radius is derived and its asymptotic behavior is analyzed. We give a general list decoding algorithm for LRCs that achieves this radius, along with an explicit realization for a class of LRCs based on Reed-Solomon codes (Tamo-Barg LRCs). Further, a low-complexity probabilistic algorithm for unique decoding is given, and its success probability is analyzed.
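
    For reference, the Johnson radius that the paper decodes beyond can be computed directly from a code's relative distance. The sketch below is a minimal baseline illustration, not the paper's algorithm; it uses the standard large-alphabet form $J(\delta) = 1 - \sqrt{1-\delta}$ of the fractional Johnson radius for a code with relative distance $\delta$.

        import math

        def johnson_radius(delta: float) -> float:
            """Fractional Johnson radius 1 - sqrt(1 - delta) for a code with
            relative minimum distance delta (large-alphabet form). Any code is
            combinatorially list-decodable up to this radius; the paper's point
            is that LRC locality allows decoding beyond it."""
            if not 0.0 <= delta <= 1.0:
                raise ValueError("relative distance must lie in [0, 1]")
            return 1.0 - math.sqrt(1.0 - delta)

        # Example: a code with relative distance 0.5 is list-decodable up to
        # a ~0.293 fraction of errors by the Johnson bound alone.
        print(johnson_radius(0.5))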

    On the List Recoverability of Randomly Punctured Codes

    We show that a random puncturing of a code with good distance is list recoverable beyond the Johnson bound. In particular, this implies that there are Reed-Solomon codes that are list recoverable beyond the Johnson bound. It was previously known that there are Reed-Solomon codes that do not have this property. As an immediate corollary to our main theorem, we obtain better degree bounds on unbalanced expanders that come from Reed-Solomon codes.
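
    Random puncturing itself is a simple operation: keep a uniformly random subset of the coordinates and restrict every codeword to it. The sketch below illustrates the operation under assumed conventions (codewords as equal-length tuples over a finite alphabet); it is an illustration of the object studied, not the paper's proof.

        import random

        def randomly_puncture(code, m, seed=None):
            """Restrict every codeword of `code` (a list of equal-length tuples)
            to a uniformly random set of m coordinate positions, giving a
            randomly punctured code of length m."""
            rng = random.Random(seed)
            n = len(code[0])
            positions = sorted(rng.sample(range(n), m))
            return [tuple(c[i] for i in positions) for c in code]

        # Example: puncture a tiny length-6 binary code down to 3 coordinates.
        code = [(0, 1, 1, 0, 1, 0), (1, 1, 0, 0, 0, 1)]
        print(randomly_puncture(code, 3, seed=7))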

    On the List-Decodability of Random Linear Rank-Metric Codes

    The list-decodability of random linear rank-metric codes is shown to match that of random rank-metric codes. Specifically, an $\mathbb{F}_q$-linear rank-metric code over $\mathbb{F}_q^{m \times n}$ of rate $R = (1-\rho)(1-\frac{n}{m}\rho)-\varepsilon$ is shown to be (with high probability) list-decodable up to fractional radius $\rho \in (0,1)$ with lists of size at most $\frac{C_{\rho,q}}{\varepsilon}$, where $C_{\rho,q}$ is a constant depending only on $\rho$ and $q$. This matches the bound for random rank-metric codes (up to constant factors). The proof adapts the approach of Guruswami, Håstad, and Kopparty (STOC 2010), who established a similar result for the Hamming metric, to the rank-metric setting.
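
    To make the stated trade-off concrete, the following sketch (a minimal illustration, not from the paper) evaluates the rate $R = (1-\rho)(1-\frac{n}{m}\rho)-\varepsilon$ and the list-size bound $C_{\rho,q}/\varepsilon$ for sample parameters. The constant $C_{\rho,q}$ is treated as an opaque input, since the abstract does not specify it, and $n \le m$ is assumed as is conventional.

        def rank_metric_rate(rho: float, n: int, m: int, eps: float) -> float:
            """Rate R = (1 - rho) * (1 - (n / m) * rho) - eps achievable by a
            random linear rank-metric code over F_q^{m x n} that is
            list-decodable up to fractional rank radius rho (assumes n <= m)."""
            assert 0.0 < rho < 1.0 and n <= m and eps > 0.0
            return (1.0 - rho) * (1.0 - (n / m) * rho) - eps

        def list_size_bound(C_rho_q: float, eps: float) -> float:
            """List-size bound C_{rho,q} / eps, with C_{rho,q} depending only
            on rho and q."""
            return C_rho_q / eps

        # Example: square matrices (n = m), radius rho = 0.3, slack eps = 0.05.
        print(rank_metric_rate(0.3, 64, 64, 0.05))  # 0.44
        print(list_size_bound(10.0, 0.05))          # at most 200 codewords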

    It'll probably work out: improved list-decoding through random operations

    In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. This serves two purposes. First, our goal is to advance our (combinatorial) understanding of list-decodability by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding: (1) We show the existence of binary codes that are combinatorially list-decodable from a $1/2-\epsilon$ fraction of errors with optimal rate $\Omega(\epsilon^2)$ that can be encoded in linear time. (2) We show that any code with $\Omega(1)$ relative distance, when randomly folded, is combinatorially list-decodable from a $1-\epsilon$ fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list-decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes. (3) We show that any code which is list-decodable with suboptimal list sizes has many subcodes with near-optimal list sizes, while retaining the error-correcting capabilities of the original code. This generalizes recent results where subspace evasive sets have been used to reduce list sizes of codes that achieve list-decoding capacity.
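
    Folding, one of the operations mentioned above, groups coordinates of each codeword into blocks, yielding a shorter code over a larger alphabet; random folding first applies a shared random permutation to the coordinates. The sketch below is a minimal illustration of the operation under assumed conventions, not the paper's construction or analysis.

        import random

        def randomly_fold(code, block, seed=None):
            """Randomly fold a code: permute every codeword's coordinates with
            one shared random permutation, then group them into blocks of size
            `block`; each block becomes one symbol over the alphabet Sigma^block."""
            rng = random.Random(seed)
            n = len(code[0])
            assert n % block == 0, "block size must divide the code length"
            perm = list(range(n))
            rng.shuffle(perm)
            folded = []
            for c in code:
                permuted = [c[i] for i in perm]
                folded.append(tuple(tuple(permuted[j:j + block])
                                    for j in range(0, n, block)))
            return folded

        # Example: fold a length-6 binary code into length 3 over {0,1}^2.
        code = [(0, 1, 1, 0, 1, 0), (1, 1, 0, 0, 0, 1)]
        print(randomly_fold(code, 2, seed=3))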

    Randomly punctured Reed--Solomon codes achieve list-decoding capacity over linear-sized fields

    Reed--Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed--Solomon codes, a fundamental question in coding theory is determining whether Reed--Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed--Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly punctured Reed--Solomon codes over an exponentially large field size $2^{O(n)}$, where $n$ is the block length of the code. A natural question is whether Reed--Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed--Solomon codes are list-decodable to capacity with field size $O(n^2)$. We show that Reed--Solomon codes are list-decodable to capacity with linear field size $O(n)$, which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size $q$ and the code length $n$ cannot be bounded by an absolute constant. Our proof is based on the proof of Guo and Zhang, and additionally exploits symmetries of reduced intersection matrices. With our proof, which maintains a hypergraph perspective of the list-decoding problem, we include an alternate presentation of the ideas of Brakensiek, Gopi, and Makam that more directly connects the list-decoding problem to the GM-MDS theorem via a hypergraph orientation theorem.
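
    A randomly punctured Reed--Solomon code, as discussed above, is an RS code whose evaluation points are a random subset of the field. The sketch below builds an encoder for one over a prime field; it is a minimal illustration under assumed conventions (prime field size q, hypothetical helper names), not the construction analyzed in the paper.

        import random

        def random_rs_code(q, n, k, seed=None):
            """Encoder for a dimension-k Reed--Solomon code punctured to n
            uniformly random distinct evaluation points in the prime field F_q
            (q prime, q >= n, k <= n). Returns (points, encode)."""
            assert k <= n <= q
            rng = random.Random(seed)
            points = rng.sample(range(q), n)

            def encode(message):
                """Evaluate the degree-< k polynomial whose coefficients are
                `message` at every chosen point, via Horner's rule mod q."""
                assert len(message) == k
                out = []
                for x in points:
                    acc = 0
                    for coeff in reversed(message):
                        acc = (acc * x + coeff) % q
                    out.append(acc)
                return tuple(out)

            return points, encode

        # Example: an [8, 3] RS code over F_257 on random evaluation points.
        points, encode = random_rs_code(257, 8, 3, seed=1)
        print(points)
        print(encode([5, 0, 2]))  # codeword of the message polynomial 5 + 2x^2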

    On Generalization Bounds for Projective Clustering

    Given a set of points, clustering consists of partitioning the point set into $k$ clusters such that each point is as close as possible to the center it is assigned to. Most commonly, centers are points themselves, which leads to the famous $k$-median and $k$-means objectives. One may also choose centers to be $j$-dimensional subspaces, which gives rise to subspace clustering. In this paper, we consider learning bounds for these problems. That is, given a set of $n$ samples $P$ drawn independently from some unknown but fixed distribution $\mathcal{D}$, how quickly does a solution computed on $P$ converge to the optimal clustering of $\mathcal{D}$? We give several near-optimal results. In particular: For center-based objectives, we show a convergence rate of $\tilde{O}(\sqrt{k/n})$. This matches the known optimal bounds of [Fefferman, Mitter, and Narayanan, Journal of the American Mathematical Society 2016] and [Bartlett, Linder, and Lugosi, IEEE Trans. Inf. Theory 1998] for $k$-means and extends them to other important objectives such as $k$-median. For subspace clustering with $j$-dimensional subspaces, we show a convergence rate of $\tilde{O}(\sqrt{kj^2/n})$. These are the first provable bounds for most of these problems. For the specific case of projective clustering, which generalizes $k$-means, we show that a convergence rate of $\Omega(\sqrt{kj/n})$ is necessary, thereby proving that the bounds of [Fefferman, Mitter, and Narayanan, Journal of the American Mathematical Society 2016] are essentially optimal.
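
    The learning question above compares the cost of an empirically computed clustering on the sample with its cost under the true distribution. The sketch below (a toy experiment with hypothetical helper names, not the paper's analysis) evaluates centers chosen on a sample of size n against a large fresh sample, so the printed gap can be watched shrinking as n grows.

        import random
        import statistics

        def kmeans_cost(points, centers):
            """Empirical k-means objective: mean squared distance from each
            point to its nearest center (1-D points for simplicity)."""
            return statistics.fmean(
                min((p - c) ** 2 for c in centers) for p in points
            )

        def best_sample_centers(points, k, tries, rng):
            """Naive stand-in for a k-means solver: among `tries` random
            k-subsets of the sample, keep the cheapest one."""
            best_cost, best = float("inf"), None
            for _ in range(tries):
                centers = rng.sample(points, k)
                cost = kmeans_cost(points, centers)
                if cost < best_cost:
                    best_cost, best = cost, centers
            return best

        rng = random.Random(0)
        draw = lambda: rng.gauss(rng.choice([-2.0, 2.0]), 0.5)  # two-mode D

        for n in (50, 500, 5000):
            sample = [draw() for _ in range(n)]
            centers = best_sample_centers(sample, k=2, tries=50, rng=rng)
            empirical = kmeans_cost(sample, centers)
            population = kmeans_cost([draw() for _ in range(100_000)], centers)
            print(n, round(population - empirical, 4))  # gap on a sqrt(k/n) scale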