
    Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers

    We extend the "method of multiplicities" to get the following results, of interest in combinatorics and randomness extraction. (A) We show that every Kakeya set (a set of points that contains a line in every direction) in $\mathbb{F}_q^n$ must be of size at least $q^n/2^n$. This bound is tight to within a $2 + o(1)$ factor for every $n$ as $q \to \infty$, compared to previous bounds that were off by exponential factors in $n$. (B) We give improved randomness extractors and "randomness mergers". Mergers are seeded functions that take as input $\Lambda$ (possibly correlated) random variables in $\{0,1\}^N$ and a short random seed, and output a single random variable in $\{0,1\}^N$ that is statistically close to having entropy $(1-\delta) \cdot N$ when one of the $\Lambda$ input variables is distributed uniformly. The seed we require is only $(1/\delta) \cdot \log \Lambda$ bits long, which significantly improves upon previous constructions of mergers. (C) Using our new mergers, we show how to construct randomness extractors that use logarithmic-length seeds while extracting a $1 - o(1)$ fraction of the min-entropy of the source. The "method of multiplicities", as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat low-degree interpolating polynomials that vanish on every point in the subset {\em with high multiplicity}. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then using simple bounds on the number of zeroes to complete the analysis. Our augmentation to this technique is that we prove, under appropriate conditions, that the interpolating polynomial vanishes {\em with high multiplicity} outside the set. This novelty leads to significantly tighter analyses.
    Comment: 26 pages; now includes extractors with sublinear entropy loss.
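
    The key technical notion is a polynomial vanishing at a point "with multiplicity": all Hasse derivatives below a given order vanish there or, equivalently, the polynomial shifted to that point contains no monomial of low total degree. The following is a minimal sketch (an illustration, not code from the paper) that computes this vanishing multiplicity for a toy bivariate polynomial over a small prime field; the dictionary representation and function names are our own.

        # Multiplicity of vanishing via the shift P(x+a, y+b): the multiplicity at (a, b)
        # is the least total degree of a monomial with nonzero coefficient in the shifted
        # polynomial, i.e. the smallest order of a non-vanishing Hasse derivative.
        from itertools import product
        from math import comb

        def shift_poly(coeffs, point, p):
            """coeffs maps exponent pairs (i, j) to coefficients mod p; returns P(x+a, y+b)."""
            a, b = point
            shifted = {}
            for (i, j), c in coeffs.items():
                # Expand (x+a)^i (y+b)^j by the binomial theorem.
                for s, t in product(range(i + 1), range(j + 1)):
                    term = c * comb(i, s) * pow(a, i - s, p) * comb(j, t) * pow(b, j - t, p)
                    shifted[(s, t)] = (shifted.get((s, t), 0) + term) % p
            return shifted

        def vanishing_multiplicity(coeffs, point, p):
            shifted = shift_poly(coeffs, point, p)
            degrees = [i + j for (i, j), c in shifted.items() if c % p != 0]
            return min(degrees) if degrees else float("inf")

        # P(x, y) = (x - y)^2 over F_5 vanishes with multiplicity 2 on the line x = y.
        p = 5
        P = {(2, 0): 1, (1, 1): -2 % p, (0, 2): 1}
        print(vanishing_multiplicity(P, (3, 3), p))  # -> 2
        print(vanishing_multiplicity(P, (1, 0), p))  # -> 0 (P does not vanish at (1, 0))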

    Better lossless condensers through derandomized curve samplers

    Lossless condensers are unbalanced expander graphs, with expansion close to optimal. Equivalently, they may be viewed as functions that use a short random seed to map a source on $n$ bits to a source on many fewer bits while preserving all of the min-entropy. It is known how to build lossless condensers when the graphs are slightly unbalanced, from the work of M. Capalbo et al. (2002). The highly unbalanced case is also important, but the only known construction does not condense the source well. We give explicit constructions of lossless condensers that condense close to optimally and use near-optimal seed length. Our main technical contribution is a randomness-efficient method for sampling $\mathbb{F}^D$ (where $\mathbb{F}$ is a field) with low-degree curves. This problem was addressed before in the works of E. Ben-Sasson et al. (2003) and D. Moshkovitz and R. Raz (2006), but those solutions apply only to degree-one curves, i.e., lines. Our technique is new and elegant. We use sub-sampling and obtain our curve samplers by composing a sequence of low-degree manifolds, starting with high-dimension, low-degree manifolds and proceeding through lower- and lower-dimension manifolds with (moderately) growing degrees, until we finish with dimension-one, low-degree manifolds, i.e., curves. The technique may be of independent interest.
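
    For orientation, the object being derandomized is a "curve sampler": pick a low-degree curve in $\mathbb{F}^D$ and use its $|\mathbb{F}|$ points as a sample of the space. The sketch below is a naive, fully randomized version over a small prime field (our illustration, not the derandomized construction of the paper); the parameter choices and function names are our own.

        # A degree-t curve in F_p^D is a tuple of D univariate polynomials of degree <= t;
        # its sample is the set of points obtained by evaluating the curve at every field
        # element. Lines are the special case t = 1.
        import random

        def random_curve(p, D, t, rng):
            """curve[i][j] is the coefficient of x^j in the i-th coordinate polynomial."""
            return [[rng.randrange(p) for _ in range(t + 1)] for _ in range(D)]

        def eval_curve(curve, x, p):
            return tuple(sum(c * pow(x, j, p) for j, c in enumerate(coord)) % p
                         for coord in curve)

        def curve_sample(p, D, t, rng):
            curve = random_curve(p, D, t, rng)
            return [eval_curve(curve, x, p) for x in range(p)]

        # Estimate the density of a test subset of F_17^3 from a single degree-3 curve.
        rng = random.Random(0)
        p, D, t = 17, 3, 3
        A = {(x, y, z) for x in range(p) for y in range(p) for z in range(p)
             if (x + y + z) % p < p // 2}            # arbitrary test set of density about 1/2
        sample = curve_sample(p, D, t, rng)
        print(sum(pt in A for pt in sample) / len(sample))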

    Unbalanced Expanders from Multiplicity Codes

    In 2007, Guruswami, Umans and Vadhan gave an explicit construction of a lossless condenser based on Parvaresh-Vardy codes. This lossless condenser is a basic building block in many constructions and, in particular, is behind the state-of-the-art extractor constructions. We give an alternative construction that is based on multiplicity codes. While the bottom-line result is similar to the GUV result, the analysis is very different. In GUV (and in Parvaresh-Vardy codes) the polynomial ring is reduced modulo an irreducible polynomial to obtain a finite field, and every polynomial is associated with related elements of that finite field. In our construction a polynomial from the polynomial ring is associated with its iterated derivatives. Our analysis boils down to solving a differential equation over a finite field, and uses techniques introduced by Kopparty [Swastik Kopparty, 2015] for the list-decoding setting. We also observe that these (and more general) questions were studied in differential algebra, and we use the terminology and results developed there. We believe these techniques have the potential to yield better constructions and to resolve the current bottlenecks in the area.
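
    For concreteness, the sketch below (ours, not the paper's condenser) shows the basic univariate multiplicity-code map the construction builds on: a polynomial over $\mathbb{F}_p$ is associated with its iterated Hasse derivatives, so each field element contributes the evaluation of the polynomial together with its first $s-1$ derivatives. The toy parameters and function names are our own.

        # The order-j Hasse derivative of x^i is binomial(i, j) * x^(i-j); in positive
        # characteristic this is the appropriate replacement for the ordinary iterated derivative.
        from math import comb

        def hasse_derivative(coeffs, j, p):
            """coeffs[i] is the coefficient of x^i; returns the order-j Hasse derivative."""
            return [(comb(i, j) * c) % p for i, c in enumerate(coeffs)][j:]

        def evaluate(coeffs, a, p):
            return sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p

        def multiplicity_encode(coeffs, s, p):
            """Map f to the list, indexed by a in F_p, of (f(a), f^(1)(a), ..., f^(s-1)(a))."""
            derivs = [hasse_derivative(coeffs, j, p) for j in range(s)]
            return [tuple(evaluate(d, a, p) for d in derivs) for a in range(p)]

        # f(x) = 3 + x + 4x^2 over F_7, encoded with multiplicity s = 2.
        p, s = 7, 2
        f = [3, 1, 4]
        print(multiplicity_encode(f, s, p)[2])   # -> (0, 3), i.e. (f(2), f'(2)) mod 7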

    Noise-Resilient Group Testing: Limitations and Constructions

    We study combinatorial group testing schemes for learning $d$-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we take this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information-theoretic lower bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of $d$-sparse vectors of length $n$ via non-adaptive measurements, by a multiplicative factor $\tilde{\Omega}(d)$. Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with $m = O(d \log n)$ measurements, that allow efficient reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the presence of $\delta m$ false positives and $O(m/d)$ false negatives within the measurement outcomes, for any constant $\delta < 1$. We show that, information theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized trade-off but using $m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be sublinear in $n$ for sufficiently sparse vectors. The main tool used in our construction is the list-decoding view of randomness condensers and extractors.
    Comment: Full version. A preliminary summary of this work appears (under the same title) in the proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT 2009).
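
    To make the measurement model concrete, the following sketch (our illustration, not the paper's condenser-based scheme) simulates non-adaptive disjunctive group testing with a random measurement matrix and a simple threshold decoder; adversarial errors are idealized here as random flips of the test outcomes, and all parameter choices are our own.

        import random

        def random_measurements(m, n, d, rng):
            # Each of the n items joins each of the m tests independently with probability 1/d.
            return [[j for j in range(n) if rng.random() < 1.0 / d] for _ in range(m)]

        def observe(tests, support, flip, rng):
            # Disjunctive (OR) outcomes of the hidden support, each flipped with probability `flip`.
            return [(any(j in support for j in t) != (rng.random() < flip)) for t in tests]

        def threshold_decode(tests, outcomes, n, tau):
            # Report item j unless it appears in more than tau tests whose outcome is negative.
            negatives = [set(t) for t, o in zip(tests, outcomes) if not o]
            return {j for j in range(n) if sum(j in neg for neg in negatives) <= tau}

        rng = random.Random(0)
        n, d, m = 2000, 10, 600                      # m = O(d log n) up to constants
        support = set(rng.sample(range(n), d))       # the hidden d-sparse vector
        tests = random_measurements(m, n, d, rng)
        outcomes = observe(tests, support, flip=0.02, rng=rng)
        decoded = threshold_decode(tests, outcomes, n, tau=5)
        print(len(support - decoded), len(decoded - support))   # false negatives, false positives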

    Algebraic and Combinatorial Methods in Computational Complexity

    At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples. Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification, which yielded short PCPs with only a polylogarithmic length blowup (which had been the focus of significant research effort up to that point). We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.

    Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform

    For every fixed constant $\alpha > 0$, we design an algorithm for computing the $k$-sparse Walsh-Hadamard transform of an $N$-dimensional vector $x \in \mathbb{R}^N$ in time $k^{1+\alpha} (\log N)^{O(1)}$. Specifically, the algorithm is given query access to $x$ and computes a $k$-sparse $\tilde{x} \in \mathbb{R}^N$ satisfying $\|\tilde{x} - \hat{x}\|_1 \leq c \|\hat{x} - H_k(\hat{x})\|_1$ for an absolute constant $c > 0$, where $\hat{x}$ is the transform of $x$ and $H_k(\hat{x})$ is its best $k$-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to $x$ (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive $\ell_1/\ell_1$ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time $k^{1+\alpha} (\log N)^{O(1)}$ (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, and Strauss (Allerton 2008). Our methods use linear lossless condensers in a black-box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to $k (\log N)^{O(1)}$ reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter $\alpha$). Finally, by allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to $\tilde{O}(k \log^3 N)$.
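
    For reference, the two objects in the $\ell_1/\ell_1$ guarantee are the exact transform $\hat{x}$ and its best $k$-sparse approximation $H_k(\hat{x})$. The sketch below (ours, not the paper's sublinear-time algorithm) computes both exactly with the standard fast Walsh-Hadamard butterfly, reading all $N$ entries of $x$; the algorithm described above instead approximates $H_k(\hat{x})$ from roughly $k^{1+\alpha}$ non-adaptive queries.

        import random

        def fwht(x):
            """Fast Walsh-Hadamard transform (unnormalized, +-1 Hadamard matrix), N = 2^n."""
            x = list(x)
            h = 1
            while h < len(x):
                for i in range(0, len(x), 2 * h):
                    for j in range(i, i + h):
                        x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
                h *= 2
            return x

        def best_k_sparse(v, k):
            """H_k(v): keep the k largest-magnitude entries of v, zero out the rest."""
            keep = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:k]
            out = [0.0] * len(v)
            for i in keep:
                out[i] = v[i]
            return out

        # A signal whose spectrum is exactly 3-sparse: synthesize x from x_hat, then recover.
        rng = random.Random(1)
        N, k = 16, 3
        x_hat = [0.0] * N
        for i in rng.sample(range(N), k):
            x_hat[i] = rng.uniform(1, 5)
        x = [v / N for v in fwht(x_hat)]      # the WHT is self-inverse up to a factor of N
        recovered = best_k_sparse(fwht(x), k)
        print(sum(abs(a - b) for a, b in zip(recovered, x_hat)))   # ~0 (up to float error)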