
    Subspace Evasive Sets

    In this work we describe an explicit, simple construction of large subsets of $\mathbb{F}^n$, where $\mathbb{F}$ is a finite field, that have small intersection with every $k$-dimensional affine subspace. Interest in the explicit construction of such sets, termed subspace-evasive sets, started in the work of Pudlak and Rodl (2004), who showed how such constructions over the binary field can be used to construct explicit Ramsey graphs. More recently, Guruswami (2011) showed that, over large finite fields (of size polynomial in $n$), subspace evasive sets can be used to obtain explicit list-decodable codes with optimal rate and constant list-size. In this work we construct subspace evasive sets over large fields and use them to reduce the list size of folded Reed-Solomon codes from poly($n$) to a constant.
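    For reference, the notion being constructed can be stated as follows; the parameter names $k$ and $c$ are the conventional ones from this line of work, not quoted from the abstract:

        A set $S \subseteq \mathbb{F}^n$ is $(k, c)$-subspace evasive if
        $|S \cap H| \leq c$ for every $k$-dimensional affine subspace $H \subseteq \mathbb{F}^n$.

    The constructions above aim for $|S|$ close to $|\mathbb{F}|^n$ while keeping $c$ small (ideally a constant depending only on $k$).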

    Linear-algebraic list decoding of folded Reed-Solomon codes

    Folded Reed-Solomon codes are an explicit family of codes that achieve the optimal trade-off between rate and error-correction capability: specifically, for any $\eps > 0$, the author and Rudra (2006, 2008) presented an $n^{O(1/\eps)}$ time algorithm to list decode appropriate folded RS codes of rate $R$ from a fraction $1-R-\eps$ of errors. The algorithm is based on multivariate polynomial interpolation and root-finding over extension fields. It was noted by Vadhan that interpolating a linear polynomial suffices if one settles for a smaller decoding radius (but still enough for a statement of the above form). Here we give a simple linear-algebra based analysis of this variant that eliminates the need for the computationally expensive root-finding step over extension fields (and indeed any mention of extension fields). The entire list decoding algorithm is linear-algebraic, solving one linear system for the interpolation step, and another linear system to find a small subspace of candidate solutions. Except for the step of pruning this subspace, the algorithm can be implemented to run in {\em quadratic} time. The theoretical drawbacks of folded RS codes are that both the decoding complexity and the proven worst-case list-size bound are $n^{\Omega(1/\eps)}$. By combining the above idea with a pseudorandom subset of all polynomials as messages, we get a Monte Carlo construction achieving a list-size bound of $O(1/\eps^2)$, which is quite close to the existential $O(1/\eps)$ bound (however, the decoding complexity remains $n^{\Omega(1/\eps)}$). Our work highlights that constructing an explicit {\em subspace-evasive} subset that has small intersection with low-dimensional subspaces could lead to explicit codes with better list-decoding guarantees.
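    A schematic of the two linear-algebraic steps described above, in the notation that is standard for folded RS codes (folding with a primitive element $\gamma$ and $s$ interpolation variables; the symbols are the conventional ones, not quoted from the abstract):

        Step 1 (interpolation): solve a linear system to find a nonzero polynomial
        $Q(X, Y_1, \ldots, Y_s) = A_0(X) + A_1(X) Y_1 + \cdots + A_s(X) Y_s$
        with suitable degree bounds that vanishes on all points derived from the received word.

        Step 2 (retrieval): every candidate message $f$ of degree $< k$ with sufficient agreement satisfies
        $A_0(X) + A_1(X) f(X) + A_2(X) f(\gamma X) + \cdots + A_s(X) f(\gamma^{s-1} X) \equiv 0$,
        which is a linear system in the coefficients of $f$; its solution set is an affine subspace of dimension at most $s-1$, which is then pruned against the received word.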

    Static Data Structure Lower Bounds Imply Rigidity

    We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of $t \geq \omega(\log^2 n)$ on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space ($s = (1+\varepsilon)n$), would already imply a semi-explicit ($\mathbf{P}^{NP}$) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy and Yekhanin, 2009). Our results further assert that polynomial ($t \geq n^{\delta}$) data structure lower bounds against near-optimal space would imply super-linear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime ($s = n + o(n)$), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our results rely on a new connection between the "inner" and "outer" dimensions of a matrix (Paturi and Pudlak, 2006), and on a new reduction from worst-case to average-case rigidity, which is of independent interest.
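    For context, the rigidity function referred to above is the standard one (this definition is not part of the abstract):

        The rigidity of a matrix $M$ over a field $\mathbb{F}$ at target rank $r$ is
        $\mathcal{R}_M(r) = \min \{\, \mathrm{wt}(M - A) : \mathrm{rank}_{\mathbb{F}}(A) \leq r \,\}$,
        i.e. the minimum number of entries of $M$ that must be changed to bring its rank down to $r$, where $\mathrm{wt}(\cdot)$ counts nonzero entries.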

    Optimal rate list decoding via derivative codes

    The classical family of $[n,k]_q$ Reed-Solomon codes over a field $\mathbb{F}_q$ consists of the evaluations of polynomials $f \in \mathbb{F}_q[X]$ of degree $< k$ at $n$ distinct field elements. In this work, we consider a closely related family of codes, called (order $m$) {\em derivative codes} and defined over fields of large characteristic, which consist of the evaluations of $f$ as well as its first $m-1$ formal derivatives at $n$ distinct field elements. For large enough $m$, we show that these codes can be list-decoded in polynomial time from an error fraction approaching $1-R$, where $R = k/(nm)$ is the rate of the code. This gives an alternate construction to folded Reed-Solomon codes for achieving the optimal trade-off between rate and list error-correction radius. Our decoding algorithm is linear-algebraic, and involves solving a linear system to interpolate a multivariate polynomial, and then solving another structured linear system to retrieve the list of candidate polynomials $f$. The algorithm for derivative codes offers some advantages compared to a similar one for folded Reed-Solomon codes in terms of efficient unique decoding in the presence of side information.
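    A minimal sketch of the encoding map described above, over a prime field $\mathbb{F}_p$ with $p$ larger than the degree of $f$; the field size, evaluation points, and parameter names here are illustrative, not taken from the paper:

```python
# Sketch: encoding an order-m derivative code over F_p (p prime, p > deg f).
# The codeword symbol at evaluation point a is the m-tuple (f(a), f'(a), ..., f^(m-1)(a)).
# Parameters and field choice are illustrative only.

def formal_derivative(coeffs, p):
    """Formal derivative of a polynomial given by coefficients (low degree first), mod p."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:] or [0]

def evaluate(coeffs, x, p):
    """Horner evaluation of the polynomial at x, mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def encode_derivative_code(coeffs, points, m, p):
    """Encode message polynomial f (coeffs, low degree first) into len(points) symbols,
    each symbol being the m-tuple (f(a), f'(a), ..., f^(m-1)(a))."""
    derivs = [coeffs]
    for _ in range(m - 1):
        derivs.append(formal_derivative(derivs[-1], p))
    return [tuple(evaluate(d, a, p) for d in derivs) for a in points]

# Example: order-2 derivative code for f(X) = 1 + 2X + 3X^2 over F_101 at points 0..4.
if __name__ == "__main__":
    p, m = 101, 2
    f = [1, 2, 3]  # f(X) = 1 + 2X + 3X^2, so k = 3; rate R = k/(nm) = 3/10 here
    codeword = encode_derivative_code(f, list(range(5)), m, p)
    print(codeword)  # [(1, 2), (6, 8), (17, 14), (34, 20), (57, 26)]
```

    The decoder described in the abstract would interpolate a multivariate polynomial through these $(a, f(a), \ldots, f^{(m-1)}(a))$ tuples and then solve a structured linear system to recover the candidate messages $f$; the sketch above covers only the encoding side.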