310 research outputs found

    Optimal rate list decoding via derivative codes

    The classical family of $[n,k]_q$ Reed-Solomon codes over a field $\mathbb{F}_q$ consists of the evaluations of polynomials $f \in \mathbb{F}_q[X]$ of degree $< k$ at $n$ distinct field elements. In this work, we consider a closely related family of codes, called (order $m$) {\em derivative codes} and defined over fields of large characteristic, which consist of the evaluations of $f$ as well as its first $m-1$ formal derivatives at $n$ distinct field elements. For large enough $m$, we show that these codes can be list-decoded in polynomial time from an error fraction approaching $1-R$, where $R = k/(nm)$ is the rate of the code. This gives an alternate construction to folded Reed-Solomon codes for achieving the optimal trade-off between rate and list error-correction radius. Our decoding algorithm is linear-algebraic, and involves solving a linear system to interpolate a multivariate polynomial, and then solving another structured linear system to retrieve the list of candidate polynomials $f$. The algorithm for derivative codes offers some advantages compared to a similar one for folded Reed-Solomon codes in terms of efficient unique decoding in the presence of side information.
    Comment: 11 pages
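
As an illustration of the encoding map described above (not of the paper's decoder), here is a minimal Python sketch of an order-$m$ derivative-code encoder over a prime field: each codeword symbol collects the values of $f$ and its first $m-1$ formal derivatives at one evaluation point. The field size, parameters, and helper names below are illustrative choices, not taken from the paper.

```python
# Minimal sketch of an (order-m) derivative-code encoder over a prime field F_p,
# assuming p is large relative to k (the paper requires large characteristic).
# Parameter names are illustrative, not taken from the paper.

p = 1_000_003          # a prime; the field F_p
m = 3                  # order: f and its first m-1 formal derivatives
k = 9                  # message length = number of coefficients of f (deg f < k)
n = 20                 # number of evaluation points; rate R = k / (n * m)

def formal_derivative(coeffs):
    """Formal derivative of f(X) = sum coeffs[i] * X^i over F_p."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:] or [0]

def evaluate(coeffs, x):
    """Horner evaluation of a polynomial at x over F_p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def encode(message, points):
    """Each codeword symbol is the column (f(a), f'(a), ..., f^(m-1)(a))."""
    derivs = [list(message)]
    for _ in range(m - 1):
        derivs.append(formal_derivative(derivs[-1]))
    return [tuple(evaluate(d, a) for d in derivs) for a in points]

message = list(range(1, k + 1))        # coefficients of f, deg f < k
points = list(range(1, n + 1))         # n distinct evaluation points
codeword = encode(message, points)
print(len(codeword), codeword[0])      # n symbols, each an m-tuple over F_p
```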

    Linear-algebraic list decoding of folded Reed-Solomon codes

    Folded Reed-Solomon codes are an explicit family of codes that achieve the optimal trade-off between rate and error-correction capability: specifically, for any $\epsilon > 0$, the author and Rudra (2006, 2008) presented an $n^{O(1/\epsilon)}$-time algorithm to list decode appropriate folded RS codes of rate $R$ from a fraction $1-R-\epsilon$ of errors. The algorithm is based on multivariate polynomial interpolation and root-finding over extension fields. It was noted by Vadhan that interpolating a linear polynomial suffices if one settles for a smaller decoding radius (but still enough for a statement of the above form). Here we give a simple linear-algebra-based analysis of this variant that eliminates the need for the computationally expensive root-finding step over extension fields (and indeed any mention of extension fields). The entire list decoding algorithm is linear-algebraic, solving one linear system for the interpolation step, and another linear system to find a small subspace of candidate solutions. Except for the step of pruning this subspace, the algorithm can be implemented to run in {\em quadratic} time. The theoretical drawback of folded RS codes is that both the decoding complexity and the proven worst-case list-size bound are $n^{\Omega(1/\epsilon)}$. By combining the above idea with a pseudorandom subset of all polynomials as messages, we get a Monte Carlo construction achieving a list-size bound of $O(1/\epsilon^2)$, which is quite close to the existential $O(1/\epsilon)$ bound (however, the decoding complexity remains $n^{\Omega(1/\epsilon)}$). Our work highlights that constructing an explicit {\em subspace-evasive} subset that has small intersection with low-dimensional subspaces could lead to explicit codes with better list-decoding guarantees.
    Comment: 16 pages. Extended abstract in Proc. of IEEE Conference on Computational Complexity (CCC), 201
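
For readers unfamiliar with the folding operation itself, the following minimal Python sketch shows how a folded RS codeword is formed by bundling $m$ consecutive evaluations of $f$ at powers of a generator $\gamma$ into one symbol. The field size, folding parameter, and variable names are illustrative assumptions; the paper's linear-algebraic list decoder is not reproduced.

```python
# Minimal sketch of folded Reed-Solomon encoding with folding parameter m:
# the evaluations f(1), f(gamma), f(gamma^2), ... are grouped into bundles of m
# consecutive values, each bundle forming one symbol of the folded code.

p = 257                # prime field F_p; 3 generates the multiplicative group mod 257
gamma = 3              # generator of F_p^*
m = 4                  # folding parameter
k = 16                 # deg f < k
N = 64                 # number of plain RS evaluation points (divisible by m)

def evaluate(coeffs, x):
    """Horner evaluation of a polynomial at x over F_p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def folded_rs_encode(message):
    """Return N/m symbols, each an m-tuple (f(g^(mi)), ..., f(g^(mi+m-1)))."""
    evals = [evaluate(message, pow(gamma, j, p)) for j in range(N)]
    return [tuple(evals[i:i + m]) for i in range(0, N, m)]

message = [5, 7, 11] + [0] * (k - 3)   # coefficients of f, deg f < k
codeword = folded_rs_encode(message)
print(len(codeword), codeword[0])      # N/m folded symbols, rate k/N
```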

    Optimal Column-Based Low-Rank Matrix Reconstruction

    We prove that for any real-valued matrix $X \in \mathbb{R}^{m \times n}$ and positive integers $r \ge k$, there is a subset of $r$ columns of $X$ such that projecting $X$ onto their span gives a $\sqrt{\frac{r+1}{r-k+1}}$-approximation to the best rank-$k$ approximation of $X$ in Frobenius norm. We show that the trade-off we achieve between the number of columns and the approximation ratio is optimal up to lower-order terms. Furthermore, there is a deterministic algorithm to find such a subset of columns that runs in $O(r n m^{\omega} \log m)$ arithmetic operations, where $\omega$ is the exponent of matrix multiplication. We also give a faster randomized algorithm that runs in $O(r n m^2)$ arithmetic operations.
    Comment: 8 pages
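
The quantity the theorem controls can be checked numerically. The NumPy sketch below compares the Frobenius error of projecting $X$ onto the span of $r$ columns against the best rank-$k$ error from the SVD, alongside the paper's $\sqrt{(r+1)/(r-k+1)}$ ratio. The column subset here is chosen at random purely for illustration; the paper's deterministic and randomized selection algorithms are not implemented.

```python
# Compare ||X - P_S X||_F (projection onto r chosen columns) with the best
# rank-k error ||X - X_k||_F, and print the paper's existential ratio bound.
import numpy as np

rng = np.random.default_rng(0)
m, n, k, r = 40, 60, 5, 10

# A matrix with decaying spectrum so low-rank approximation is meaningful.
U = rng.standard_normal((m, m))
V = rng.standard_normal((n, n))
s = np.array([2.0 ** -i for i in range(min(m, n))])
X = (U[:, :len(s)] * s) @ V[:len(s), :]

# Best rank-k error from the singular values.
sv = np.linalg.svd(X, compute_uv=False)
best_rank_k_err = np.sqrt((sv[k:] ** 2).sum())

# Error of projecting X onto the span of r randomly chosen columns.
cols = rng.choice(n, size=r, replace=False)
C = X[:, cols]
P = C @ np.linalg.pinv(C)          # orthogonal projector onto span(C)
col_err = np.linalg.norm(X - P @ X, "fro")

print("best rank-k error:   ", best_rank_k_err)
print("column-subset error: ", col_err)
print("existential ratio bound:", np.sqrt((r + 1) / (r - k + 1)))
```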

    On the List-Decodability of Random Linear Rank-Metric Codes

    The list-decodability of random linear rank-metric codes is shown to match that of random rank-metric codes. Specifically, an $\mathbb{F}_q$-linear rank-metric code over $\mathbb{F}_q^{m \times n}$ of rate $R = (1-\rho)(1-\frac{n}{m}\rho) - \varepsilon$ is shown to be (with high probability) list-decodable up to fractional radius $\rho \in (0,1)$ with lists of size at most $\frac{C_{\rho,q}}{\varepsilon}$, where $C_{\rho,q}$ is a constant depending only on $\rho$ and $q$. This matches the bound for random rank-metric codes (up to constant factors). The proof adapts the approach of Guruswami, Håstad, and Kopparty (STOC 2010), who established a similar result for the Hamming-metric case, to the rank-metric setting.
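
As background on the metric itself, here is a minimal sketch of the rank distance for $q = 2$: the distance between two matrices over $\mathbb{F}_2$ is the rank of their difference computed over $\mathbb{F}_2$ (not over the reals), and the fractional radius normalizes that rank by the maximum possible rank $\min(m, n)$. The helper names are illustrative.

```python
# Rank distance over F_2: d(A, B) = rank(A - B) computed over F_2.

def rank_gf2(M):
    """Rank of a 0/1 matrix over F_2 via Gaussian elimination (XOR row ops)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    rank, pivot_row = 0, 0
    for c in range(cols):
        pivot = next((r for r in range(pivot_row, rows) if M[r][c]), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        for r in range(rows):
            if r != pivot_row and M[r][c]:
                M[r] = [a ^ b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        rank += 1
    return rank

def rank_distance(A, B):
    """Rank-metric distance between two 0/1 matrices of the same shape."""
    diff = [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    return rank_gf2(diff)

A = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
B = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
print(rank_distance(A, B))   # rank of A - B over F_2 (here: 2)
```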

    Combinatorial limitations of average-radius list-decoding

    We study certain combinatorial aspects of list-decoding, motivated by the exponential gap between the known upper bound (of $O(1/\gamma)$) and lower bound (of $\Omega_p(\log (1/\gamma))$) on the list-size needed to decode up to radius $p$ with rate $\gamma$ away from capacity, i.e., rate $1-h(p)-\gamma$ (here $p \in (0,1/2)$ and $\gamma > 0$). Our main result is the following: we prove that in any binary code $C \subseteq \{0,1\}^n$ of rate $1-h(p)-\gamma$, there must exist a set $\mathcal{L} \subset C$ of $\Omega_p(1/\sqrt{\gamma})$ codewords such that the average distance of the points in $\mathcal{L}$ from their centroid is at most $pn$. In other words, there must exist $\Omega_p(1/\sqrt{\gamma})$ codewords with low "average radius." The standard notion of list-decoding corresponds to working with the maximum distance of a collection of codewords from a center instead of the average distance. The average-radius form is in itself quite natural and is implied by the classical Johnson bound. The remaining results concern the standard notion of list-decoding and help clarify the combinatorial landscape of list-decoding:
    1. We give a short, simple proof, over all fixed alphabets, of the above-mentioned $\Omega_p(\log (1/\gamma))$ lower bound. Earlier, this bound followed from a complicated, more general result of Blinovsky.
    2. We show that one {\em cannot} improve the $\Omega_p(\log (1/\gamma))$ lower bound via techniques based on identifying the zero-rate regime for list decoding of constant-weight codes.
    3. We show a "reverse connection": constant-weight codes for list decoding imply general codes for list decoding with higher rate.
    4. We give simple second-moment-based proofs of tight (up to constant factors) lower bounds on the list-size needed for list decoding random codes and random linear codes, from errors as well as erasures.
    Comment: 28 pages. Extended abstract in RANDOM 201
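
To make the "average radius" concrete, the sketch below computes the average Hamming distance of a set of binary codewords from a binary center; the coordinate-wise majority vote minimizes this average over all binary centers, which is one natural way to instantiate the centroid in the statement above (the paper's exact formulation may differ in details). The codewords shown are arbitrary illustrative vectors.

```python
# Average radius of a collection of binary codewords: the average Hamming
# distance from the center that minimizes it (coordinate-wise majority vote).

def hamming(x, y):
    """Hamming distance between two equal-length binary vectors."""
    return sum(a != b for a, b in zip(x, y))

def majority_center(codewords):
    """Coordinate-wise majority vote (ties broken toward 1)."""
    n = len(codewords[0])
    return [int(2 * sum(c[i] for c in codewords) >= len(codewords)) for i in range(n)]

def average_radius(codewords):
    """Average distance of the codewords from their majority-vote center."""
    center = majority_center(codewords)
    return sum(hamming(c, center) for c in codewords) / len(codewords)

L = [
    [0, 0, 1, 1, 0, 1],
    [0, 1, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 1],
]
print(average_radius(L))   # compare against p*n for a target radius p
```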