
    Subspace Evasive Sets

    In this work we describe an explicit, simple construction of large subsets of F^n, where F is a finite field, that have small intersection with every k-dimensional affine subspace. Interest in the explicit construction of such sets, termed subspace-evasive sets, started in the work of Pudlak and Rodl (2004), who showed how such constructions over the binary field can be used to construct explicit Ramsey graphs. More recently, Guruswami (2011) showed that, over large finite fields (of size polynomial in n), subspace-evasive sets can be used to obtain explicit list-decodable codes with optimal rate and constant list size. In this work we construct subspace-evasive sets over large fields and use them to reduce the list size of folded Reed-Solomon codes from poly(n) to a constant. Comment: 16 pages
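    As a purely illustrative aid (not the paper's construction), the brute-force check below spells out the definition in the simplest case k = 1: a set S in F_q^n is evasive for lines if every 1-dimensional affine subspace meets it in few points. The field, the example set and the bound are toy choices.

```python
# Brute-force check of subspace evasiveness in the simplest case k = 1:
# every 1-dimensional affine subspace (line) of F_q^n should meet the set S
# in at most c points.  Toy parameters; not the paper's construction.
from itertools import product

q, n, c = 5, 2, 3                 # field size, dimension, claimed bound
# Hypothetical example set: the degree-3 curve {(x, x^3)}; any line meets it
# in at most 3 points because x^3 - (a*x + b) has at most 3 roots.
S = {(x, pow(x, 3, q)) for x in range(q)}

def lines(q, n):
    """Yield every 1-dimensional affine subspace of F_q^n as a frozenset of points."""
    points = list(product(range(q), repeat=n))
    seen = set()
    for base in points:
        for direction in points:
            if not any(direction):
                continue
            line = frozenset(tuple((base[i] + t * direction[i]) % q for i in range(n))
                             for t in range(q))
            if line not in seen:
                seen.add(line)
                yield line

worst = max(len(S & line) for line in lines(q, n))
print("largest intersection with a line:", worst)
assert worst <= c
```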

    Efficient and Robust Compressed Sensing Using Optimized Expander Graphs

    Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper, we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and the simplicity of recovery. Finally, we will show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal which is close to the best k-term approximation of the original signal.
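    A minimal sketch of the kind of voting-based recovery loop described here, assuming a 0/1 measurement matrix given by a left-d-regular bipartite graph; the parameters, the majority-vote rule and all names are illustrative choices, not the paper's optimized algorithm.

```python
# Simplified sketch of expander-based sparse recovery: the measurement matrix
# is the 0/1 adjacency matrix of a left-d-regular bipartite graph, and each
# sweep fixes one signal coordinate by a majority vote over the residuals of
# its d neighboring measurements.  Illustrative only.
import random
from collections import Counter

random.seed(1)
n, m, d, k = 50, 30, 7, 3            # signal length, measurements, degree, sparsity

# left-d-regular bipartite graph: neighbors[j] = measurement nodes of signal node j
neighbors = [random.sample(range(m), d) for _ in range(n)]

# k-sparse test signal and its measurements y = A x (A has 0/1 entries)
x = [0] * n
for j in random.sample(range(n), k):
    x[j] = random.randint(1, 9)

def measure(v):
    y = [0] * m
    for j, nbrs in enumerate(neighbors):
        for i in nbrs:
            y[i] += v[j]
    return y

y = measure(x)

# iterative recovery: while some residual is nonzero, find a coordinate whose
# neighboring residuals mostly agree on a common nonzero "gap" and apply it
x_hat = [0] * n
for it in range(10 * k):
    r = [yi - zi for yi, zi in zip(y, measure(x_hat))]    # residual y - A x_hat
    if not any(r):
        break
    for j in range(n):
        gap, votes = Counter(r[i] for i in neighbors[j]).most_common(1)[0]
        if gap != 0 and votes > d // 2:                   # majority of neighbors agree
            x_hat[j] += gap
            break
    else:
        break                                             # no confident update found

print("recovered exactly:", x_hat == x, "after", it + 1, "sweeps")
```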

    Fast Computation of Minimal Interpolation Bases in Popov Form for Arbitrary Shifts

    We compute minimal bases of solutions for a general interpolation problem, which encompasses Hermite-Pad\'e approximation and constrained multivariate interpolation, and has applications in coding theory and security. This problem asks to find univariate polynomial relations between $m$ vectors of size $\sigma$; these relations should have small degree with respect to an input degree shift. For an arbitrary shift, we propose an algorithm for the computation of an interpolation basis in shifted Popov normal form with a cost of $\mathcal{O}\tilde{~}(m^{\omega-1} \sigma)$ field operations, where $\omega$ is the exponent of matrix multiplication and the notation $\mathcal{O}\tilde{~}(\cdot)$ indicates that logarithmic terms are omitted. Earlier works, in the case of Hermite-Pad\'e approximation and in the general interpolation case, compute non-normalized bases. Since for arbitrary shifts such bases may have size $\Theta(m^2 \sigma)$, the cost bound $\mathcal{O}\tilde{~}(m^{\omega-1} \sigma)$ was feasible only with restrictive assumptions on the shift that ensure small output sizes. The question of handling arbitrary shifts with the same complexity bound was left open. To obtain the target cost for any shift, we strengthen the properties of the output bases, and of those obtained during the course of the algorithm: all the bases are computed in shifted Popov form, whose size is always $\mathcal{O}(m \sigma)$. Then, we design a divide-and-conquer scheme. We recursively reduce the initial interpolation problem to sub-problems with more convenient shifts by first computing information on the degrees of the intermediate bases. Comment: 8 pages, sig-alternate class, 4 figures (problems and algorithms)
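    As a small concrete instance of the interpolation problem (not the paper's fast algorithm), the sketch below solves a Pade-style Hermite-Pade relation q*f = p (mod x^sigma) by a plain nullspace computation in sympy; the series f, the truncation order sigma and the degree bounds are illustrative choices.

```python
# Minimal sketch of the m = 2 case of the interpolation problem: a Pade-style
# Hermite-Pade approximation q(x)*f(x) - p(x) = 0 (mod x^sigma), solved here by
# plain nullspace computation rather than a fast structured algorithm.
from sympy import Matrix, Rational, factorial

sigma, deg_p, deg_q = 5, 2, 2
f = [Rational(1, factorial(i)) for i in range(sigma)]      # exp(x) mod x^sigma

# Unknowns (q_0..q_deg_q, p_0..p_deg_p); row t is the coefficient of x^t in q*f - p.
rows = []
for t in range(sigma):
    row = [f[t - i] if 0 <= t - i < sigma else 0 for i in range(deg_q + 1)]   # q*f part
    row += [Rational(-1) if t == i else 0 for i in range(deg_p + 1)]          # -p part
    rows.append(row)

kernel = Matrix(rows).nullspace()[0]                        # one nonzero relation
q_coeffs = list(kernel[: deg_q + 1])
p_coeffs = list(kernel[deg_q + 1:])
print("q coefficients (low to high):", q_coeffs)
print("p coefficients (low to high):", p_coeffs)

# check q*f = p (mod x^sigma)
conv = [sum(q_coeffs[i] * f[t - i] for i in range(deg_q + 1) if 0 <= t - i < len(f))
        for t in range(sigma)]
print("relation holds:", conv == p_coeffs + [0] * (sigma - len(p_coeffs)))
```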

    Some remarks on multiplicity codes

    Multiplicity codes are algebraic error-correcting codes generalizing classical polynomial evaluation codes, and are based on evaluating polynomials and their derivatives. This small augmentation confers upon them better local decoding, list-decoding and local list-decoding algorithms than their classical counterparts. We survey what is known about these codes, present some variations and improvements, and finally list some interesting open problems. Comment: 21 pages, in Discrete Geometry and Algebraic Combinatorics, AMS Contemporary Mathematics Series, 201
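    A hedged sketch of the basic encoding map the abstract alludes to, for a univariate multiplicity code of multiplicity 2 over a prime field; the field size, dimension and message below are toy choices, not taken from the survey.

```python
# Illustrative encoder for a univariate multiplicity code of order s = 2 over a
# prime field F_p: each codeword symbol is the pair (f(a), f'(a)) for a in F_p,
# where f is the degree-< k message polynomial and f' its formal derivative.
p, k = 13, 4
message = [3, 1, 4, 1]                       # coefficients of f, low degree first

def evaluate(coeffs, a, p):
    """Horner evaluation of a polynomial (low-to-high coefficients) at a mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * a + c) % p
    return acc

def derivative(coeffs, p):
    """Formal derivative: d/dx sum c_i x^i = sum i*c_i x^(i-1) (mod p)."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]

f, df = message, derivative(message, p)
codeword = [(evaluate(f, a, p), evaluate(df, a, p)) for a in range(p)]
print(codeword)
# Two distinct polynomials of degree < k can share both value and derivative on
# at most (k-1)//2 points: their difference has degree <= k-1 and vanishes with
# multiplicity 2 at every such point.  This is the gain the codes exploit.
```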

    Subspace polynomials and list decoding of Reed-Solomon codes

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2007. Includes bibliographical references (p. 29-31). We show combinatorial limitations on efficient list decoding of Reed-Solomon codes beyond the Johnson and Guruswami-Sudan bounds [Joh62, Joh63, GS99]. In particular, we show that for any ... , there exist arbitrarily large fields ... * Existence: there exists a received word ... that agrees with a super-polynomial number of distinct degree K polynomials on ... points each; * Explicit: there exists a polynomial time constructible received word ... that agrees with a super-polynomial number of distinct degree K polynomials, on ... points each. In both cases, our results improve upon the previous state of the art, which was NM/6 for the existence case [JH01], and a ... for the explicit one [GR05b]. Furthermore, for ή close to 1 our bound approaches the Guruswami-Sudan bound (which is ... ) and rules out the possibility of extending their efficient RS list decoding algorithm to any significantly larger decoding radius. Our proof method is surprisingly simple. We work with polynomials that vanish on subspaces of an extension field viewed as a vector space over the base field. (cont.) These subspace polynomials are a subclass of linearized polynomials that were studied by Ore [Ore33, Ore34] in the 1930s and by coding theorists. For us their main attraction is their sparsity and abundance of roots. We also complement our negative results by giving a list decoding algorithm for linearized polynomials beyond the Johnson-Guruswami-Sudan bounds. by Swastik Kopparty. S.M.
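    To make the central object concrete, here is a toy computation (not from the thesis) of a subspace polynomial: for an F_2-linear subspace V of GF(2^4), the polynomial P_V(X) = prod over v in V of (X - v) vanishes exactly on V and is sparse, with monomials only at degrees that are powers of 2. The field modulus and the subspace are illustrative choices.

```python
# Toy computation of a subspace polynomial: over GF(2^4) with modulus
# x^4 + x + 1, the annihilator P_V(X) = prod_{v in V} (X - v) of an F_2-linear
# subspace V is linearized -- its monomials sit only at degrees 2^i -- and its
# roots are exactly V.  Modulus and subspace are illustrative choices.
MOD = 0b10011                                    # x^4 + x + 1, irreducible over F_2

def gf_mul(a, b):
    """Multiply two GF(16) elements written as 4-bit integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:                          # degree reached 4: reduce
            a ^= MOD
    return r

def poly_mul(p, q):
    """Multiply polynomials over GF(16); coefficient lists are low degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

def poly_eval(p, a):
    """Horner evaluation over GF(16)."""
    acc = 0
    for c in reversed(p):
        acc = gf_mul(acc, a) ^ c
    return acc

V = [0, 1, 2, 3]                                 # the F_2-span of the elements 1 and x
P = [1]
for v in V:
    P = poly_mul(P, [v, 1])                      # multiply by (X + v); char 2, so -v = v

print("coefficients of P_V, degree 0..4:", P)
print("monomials only at degrees 1, 2, 4:", all(d in (1, 2, 4) for d, c in enumerate(P) if c))
print("roots are exactly V:", [a for a in range(16) if poly_eval(P, a) == 0] == V)
```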

    Using Reed-Solomon codes in the (U | U + V) construction and an application to cryptography

    In this paper we present a modification of Reed-Solomon codes that beats the Guruswami-Sudan 1 − √R decoding radius of Reed-Solomon codes at low rates R. The idea is to choose Reed-Solomon codes U and V with appropriate rates in a (U | U + V) construction and to decode them with the Koetter-Vardy soft-information decoder. We suggest using a slightly more general version of these codes (which has the same decoding performance as the (U | U + V) construction) in code-based cryptography, namely to build a McEliece scheme. The point here is that these codes not only perform nearly as well as Reed-Solomon codes (or even better in the low-rate regime), but also that their structure seems to avoid the Sidelnikov-Shestakov attack, which broke a previous McEliece proposal based on generalized Reed-Solomon codes.
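    For concreteness, a minimal sketch of the plain (U | U + V) construction with Reed-Solomon component codes over a small prime field; it covers only the encoding map, not the Koetter-Vardy decoding or the cryptographic scheme, and all parameters are toy values.

```python
# Minimal sketch of the (U | U + V) construction with Reed-Solomon component
# codes over a prime field F_p: codewords are (u | u + v) for u in U, v in V.
p, n = 17, 8                     # evaluation points 0..n-1 in F_p
k_u, k_v = 4, 2                  # dimensions of the RS codes U and V

def rs_encode(msg, n, p):
    """Evaluate the polynomial with coefficients msg at the points 0..n-1 mod p."""
    return [sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p for a in range(n)]

def uuv_encode(msg_u, msg_v, n, p):
    """(U | U + V): concatenate u with the coordinate-wise sum u + v."""
    u, v = rs_encode(msg_u, n, p), rs_encode(msg_v, n, p)
    return u + [(ui + vi) % p for ui, vi in zip(u, v)]

codeword = uuv_encode([1, 2, 3, 4], [5, 6], n, p)
print(codeword)                  # length 2n; the resulting code has dimension k_u + k_v
```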

    Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers

    We extend the "method of multiplicities" to get the following results, of interest in combinatorics and randomness extraction. (A) We show that every Kakeya set (a set of points that contains a line in every direction) in $\mathbb{F}_q^n$ must be of size at least $q^n/2^n$. This bound is tight to within a $2 + o(1)$ factor for every $n$ as $q \to \infty$, compared to previous bounds that were off by exponential factors in $n$. (B) We give improved randomness extractors and "randomness mergers". Mergers are seeded functions that take as input $\Lambda$ (possibly correlated) random variables in $\{0,1\}^N$ and a short random seed and output a single random variable in $\{0,1\}^N$ that is statistically close to having entropy $(1-\delta) \cdot N$ when one of the $\Lambda$ input variables is distributed uniformly. The seed we require is only $(1/\delta)\cdot \log \Lambda$ bits long, which significantly improves upon previous constructions of mergers. (C) Using our new mergers, we show how to construct randomness extractors that use logarithmic length seeds while extracting a $1 - o(1)$ fraction of the min-entropy of the source. The "method of multiplicities", as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat low degree interpolating polynomials that vanish on every point in the subset {\em with high multiplicity}. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then used simple bounds on the number of zeroes to complete the analysis. Our augmentation to this technique is that we prove, under appropriate conditions, that the interpolating polynomial vanishes {\em with high multiplicity} outside the set. This novelty leads to significantly tighter analyses. Comment: 26 pages, now includes extractors with sublinear entropy loss
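    To illustrate the notion of vanishing with high multiplicity that drives the method, the sketch below computes the multiplicity of a bivariate polynomial at a point via Hasse derivatives over a small prime field; the polynomial, the field and the helper names are illustrative choices, not taken from the paper.

```python
# Illustration of "vanishing with multiplicity": the multiplicity of f at a
# point is the smallest total order of a Hasse derivative of f that does not
# vanish there.  Bivariate toy example over a prime field F_p.
from math import comb

p = 7
# f(x, y) = (x - 1)^2 * (y - 2), stored as {(i, j): coefficient of x^i y^j} mod p
f = {(2, 1): 1, (2, 0): 5, (1, 1): 5, (1, 0): 4, (0, 1): 1, (0, 0): 5}

def hasse_derivative(f, a, b, p):
    """Hasse derivative H^(a,b) f = sum C(i,a) C(j,b) f_ij x^(i-a) y^(j-b) mod p."""
    out = {}
    for (i, j), c in f.items():
        if i >= a and j >= b:
            out[(i - a, j - b)] = (out.get((i - a, j - b), 0)
                                   + comb(i, a) * comb(j, b) * c) % p
    return out

def evaluate(f, x, y, p):
    return sum(c * pow(x, i, p) * pow(y, j, p) for (i, j), c in f.items()) % p

def multiplicity(f, x, y, p, max_order=6):
    """Smallest m such that some order-m Hasse derivative of f is nonzero at (x, y)."""
    for m in range(max_order + 1):
        for a in range(m + 1):
            if evaluate(hasse_derivative(f, a, m - a, p), x, y, p) != 0:
                return m
    return max_order + 1

print(multiplicity(f, 1, 2, p))   # 3: f vanishes at (1, 2) with multiplicity 3
print(multiplicity(f, 1, 0, p))   # 2: only the (x - 1)^2 factor vanishes at (1, 0)
```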