33 research outputs found

    Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers

    We extend the "method of multiplicities" to get the following results, of interest in combinatorics and randomness extraction. (A) We show that every Kakeya set (a set of points that contains a line in every direction) in $\mathbb{F}_q^n$ must be of size at least $q^n/2^n$. This bound is tight to within a $2 + o(1)$ factor for every $n$ as $q \to \infty$, compared to previous bounds that were off by exponential factors in $n$. (B) We give improved randomness extractors and "randomness mergers". Mergers are seeded functions that take as input $\Lambda$ (possibly correlated) random variables in $\{0,1\}^N$ and a short random seed, and output a single random variable in $\{0,1\}^N$ that is statistically close to having entropy $(1-\delta)\cdot N$ when one of the $\Lambda$ input variables is distributed uniformly. The seed we require is only $(1/\delta)\cdot\log\Lambda$ bits long, which significantly improves upon previous constructions of mergers. (C) Using our new mergers, we show how to construct randomness extractors that use logarithmic-length seeds while extracting a $1 - o(1)$ fraction of the min-entropy of the source. The "method of multiplicities", as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat-low-degree interpolating polynomials that vanish on every point in the subset with high multiplicity. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then used simple bounds on the number of zeroes to complete the analysis. Our augmentation to this technique is that we prove, under appropriate conditions, that the interpolating polynomial vanishes with high multiplicity outside the set. This novelty leads to significantly tighter analyses. Comment: 26 pages; now includes extractors with sublinear entropy loss.
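    The bound in (A) can be sanity-checked by brute force in the smallest interesting case, the plane over $\mathbb{F}_3$: a minimal Kakeya set is a smallest union of one line per direction, so exhaustive search over all such unions finds its exact size and lets us compare it against $q^n/2^n$. A minimal sketch (the toy parameters and direction list are choices for this illustration, not from the paper):

```python
from itertools import product

# Toy case: the plane over F_3 (q = 3, n = 2), where exhaustive search is easy.
q, n = 3, 2
points = list(product(range(q), repeat=n))

# One representative direction per parallel class of lines in F_3^2.
directions = [(0, 1), (1, 0), (1, 1), (1, 2)]

def line(a, b):
    """The line {a + t*b : t in F_q} through point a with direction b."""
    return frozenset(tuple((a[i] + t * b[i]) % q for i in range(n))
                     for t in range(q))

# All distinct lines in each direction (q^(n-1) = 3 parallel lines per class).
lines_by_dir = [{line(a, b) for a in points} for b in directions]

# A minimal Kakeya set is a smallest union of one line per direction.
smallest = min(len(frozenset().union(*choice))
               for choice in product(*lines_by_dir))
print(smallest, ">=", q**n / 2**n)  # 7 >= 2.25, consistent with the bound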

    Kakeya sets and the method of multiplicities

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 51-53). We extend the "method of multiplicities" to get the following results, of interest in combinatorics and randomness extraction. 1. We show that every Kakeya set (a set of points that contains a line in every direction) in $\mathbb{F}_q^n$ must be of size at least $q^n/2^n$. This bound is tight to within a $2 + o(1)$ factor for every $n$ as $q \to \infty$, compared to previous bounds that were off by exponential factors in $n$. 2. We give improved randomness extractors and "randomness mergers". Mergers are seeded functions that take as input $\Lambda$ (possibly correlated) random variables in $\{0,1\}^N$ and a short random seed and output a single random variable in $\{0,1\}^N$ that is statistically close to having entropy $(1-\delta)\cdot N$ when one of the $\Lambda$ input variables is distributed uniformly. The seed we require is only $(1/\delta)\cdot\log\Lambda$ bits long, which significantly improves upon previous constructions of mergers. 3. Using our new mergers, we show how to construct randomness extractors that use logarithmic-length seeds while extracting a $1 - o(1)$ fraction of the min-entropy of the source. The "method of multiplicities", as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat-low-degree interpolating polynomials that vanish on every point in the subset with high multiplicity. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then used simple bounds on the number of zeroes to complete the analysis. Our augmentation to this technique is that we prove, under appropriate conditions, that the interpolating polynomial vanishes with high multiplicity outside the set. This novelty leads to significantly tighter analyses. by Shubhangi Saraf. S.M.

    Better short-seed quantum-proof extractors

    We construct a strong extractor against quantum storage that works for every min-entropy $k$, has logarithmic seed length, and outputs $\Omega(k)$ bits, provided that the quantum adversary has at most $\beta k$ qubits of memory, for any $\beta < 1/2$. The construction works by first condensing the source (with minimal entropy loss) and then applying an extractor that works well against quantum adversaries when the source is close to uniform. We also obtain an improved construction of a strong quantum-proof extractor in the high min-entropy regime. Specifically, we construct an extractor that uses a logarithmic seed length and extracts $\Omega(n)$ bits from any source over $\{0,1\}^n$, provided that the min-entropy of the source conditioned on the quantum adversary's state is at least $(1-\beta)n$, for any $\beta < 1/2$. Comment: 14 pages.

    Extracting Mergers and Projections of Partitions

    We study the problem of extracting randomness from somewhere-random sources, and related combinatorial phenomena: partition analogues of Shearer's lemma on projections. A somewhere-random source is a tuple $(X_1, \ldots, X_t)$ of (possibly correlated) $\{0,1\}^n$-valued random variables $X_i$ where for some unknown $i \in [t]$, $X_i$ is guaranteed to be uniformly distributed. An extracting merger is a seeded device that takes a somewhere-random source as input and outputs nearly uniform random bits. We study the seed length needed for extracting mergers with constant $t$ and constant error. We show:
    - Just like in the case of standard extractors, seedless extracting mergers with even just one output bit do not exist.
    - Unlike the case of standard extractors, it is possible to have extracting mergers that output a constant number of bits using only a constant-length seed. Furthermore, a random choice of merger does not work for this purpose!
    - Nevertheless, just like in the case of standard extractors, an extracting merger which gets most of the entropy out (namely, having $\Omega(n)$ output bits) must have an $\Omega(\log n)$ seed. This is the main technical result of our work, and is proved by a second-moment strengthening of the graph-theoretic approach of Radhakrishnan and Ta-Shma to extractors.
    In contrast, seed-length/output-length tradeoffs for condensing mergers (where the output is only required to have high min-entropy) can be fully explained by using standard condensers. Inspired by such considerations, we also formulate a new and basic class of problems in combinatorics: partition analogues of Shearer's lemma. We show basic results in this direction; in particular, we prove that in any partition of the 3-dimensional cube $[0,1]^3$ into two parts, one of the parts has an axis-parallel 2-dimensional projection of area at least $3/4$. Comment: Full version of the paper accepted to the International Conference on Randomization and Computation (RANDOM) 2023. 28 pages, 2 figures.
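    The closing $3/4$ projection statement has a natural discrete analogue that can be checked exhaustively in the smallest nontrivial case: partition the $2 \times 2 \times 2$ grid of cells into two parts and measure the largest axis-parallel 2-D shadow of either part. This is only a coarse illustrative discretization of the continuous theorem, not its proof:

```python
from itertools import product

# Cells of a 2x2x2 grid, a coarse discretization of the cube [0,1]^3.
k = 2
cells = list(product(range(k), repeat=3))

def max_projection(part):
    """Largest axis-parallel 2-D shadow of a part, counted in grid cells."""
    if not part:
        return 0
    return max(len({(c[i], c[j]) for c in part})
               for i, j in [(0, 1), (0, 2), (1, 2)])

# Over all 2-part partitions (encoded as bitmasks), find the one minimizing
# the larger of the two parts' shadows, as a fraction of the k*k face.
worst = min(
    max(max_projection([c for i, c in enumerate(cells) if ((mask >> i) & 1) == s])
        for s in (0, 1)) / k**2
    for mask in range(2 ** len(cells))
)
print(worst)  # 0.75: even the best-balanced partition leaves a shadow of area 3/4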

    The method of multiplicities

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 93-98). Polynomials have played a fundamental role in the construction of objects with interesting combinatorial properties, such as error-correcting codes, pseudorandom generators and randomness extractors. Somewhat strikingly, polynomials have also been found to be a powerful tool in the analysis of combinatorial parameters of objects that have some algebraic structure. This method of analysis has found applications in works on list-decoding of error-correcting codes, constructions of randomness extractors, and in obtaining strong bounds for the size of Kakeya sets. Remarkably, all these applications have relied on very simple and elementary properties of polynomials such as the sparsity of the zero sets of low-degree polynomials. In this thesis we improve on several of the results mentioned above by a more powerful application of polynomials that takes into account the information contained in the derivatives of the polynomials. We call this technique the method of multiplicities. The derivative polynomials encode information about the high-multiplicity zeroes of the original polynomial, and by taking this information into account, we are able to meaningfully reason about the zero sets of polynomials of degree much higher than the underlying field size. This freedom to use high-degree polynomials allows us to obtain new and improved constructions of error-correcting codes, and qualitatively improved analyses of Kakeya sets and randomness extractors. by Shubhangi Saraf. Ph.D.
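    The role of derivatives here can be made concrete with Hasse derivatives, the appropriate notion of derivative over finite fields (ordinary derivatives degenerate in small characteristic: over $\mathbb{F}_2$, the derivative of $x^2$ is $2x = 0$). A minimal univariate sketch, with the standard definition that the multiplicity of a zero $a$ of $p$ is the largest $m$ such that all Hasse derivatives of order below $m$ vanish at $a$ (the helper names are mine, not from the thesis):

```python
from math import comb

def hasse_derivative(coeffs, k, p):
    """k-th Hasse derivative over F_p of sum(coeffs[i] * x^i), constant term
    first: the coefficient of x^(i-k) is C(i, k) * coeffs[i]."""
    return [comb(i, k) * coeffs[i] % p for i in range(k, len(coeffs))]

def evaluate(coeffs, a, p):
    return sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p

def multiplicity(coeffs, a, p):
    """Largest m such that all Hasse derivatives of order < m vanish at a."""
    m = 0
    while m < len(coeffs) and evaluate(hasse_derivative(coeffs, m, p), a, p) == 0:
        m += 1
    return m

# p(x) = (x - 1)^3 = x^3 + 2x^2 + 3x + 4 over F_5 vanishes at 1 with multiplicity 3.
print(multiplicity([4, 3, 2, 1], 1, 5))  # 3

# Over F_2, x^2 has ordinary derivative 2x = 0 everywhere, which is
# uninformative; Hasse derivatives still detect the double zero at 0.
print(multiplicity([0, 0, 1], 0, 2))  # 2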

    Linear Hashing with $\ell_\infty$ guarantees and two-sided Kakeya bounds

    We show that a randomly chosen linear map over a finite field gives a good hash function in the $\ell_\infty$ sense. More concretely, consider a set $S \subset \mathbb{F}_q^n$ and a randomly chosen linear map $L : \mathbb{F}_q^n \to \mathbb{F}_q^t$ with $q^t$ taken to be sufficiently smaller than $|S|$. Let $U_S$ denote a random variable distributed uniformly on $S$. Our main theorem shows that, with high probability over the choice of $L$, the random variable $L(U_S)$ is close to uniform in the $\ell_\infty$ norm. In other words, every element in the range $\mathbb{F}_q^t$ has about the same number of elements in $S$ mapped to it. This complements the widely-used Leftover Hash Lemma (LHL), which proves the analogous statement under the statistical, or $\ell_1$, distance (for a richer class of functions), as well as prior work on the expected largest 'bucket size' in linear hash functions [ADMPT99]. Our proof leverages a connection between linear hashing and the finite field Kakeya problem and extends some of the tools developed in this area, in particular the polynomial method.
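    The statement is easy to observe empirically over $\mathbb{F}_2$: pick a random $t \times n$ bit matrix, hash a set $S$ with $|S| \gg 2^t$, and measure how far the bucket distribution is from uniform in $\ell_\infty$. A small sketch (the parameters, set size, and seed are arbitrary choices for this illustration):

```python
import random

random.seed(0)
n, t = 12, 4                     # hash 12-bit keys down to t = 4 bits
buckets = 2 ** t

# Random linear map L : F_2^12 -> F_2^4, stored as t rows of n-bit masks.
rows = [random.getrandbits(n) for _ in range(t)]

def L(x):
    # Matrix-vector product over F_2: output bit i is the parity of rows[i] & x.
    return sum((bin(rows[i] & x).count("1") & 1) << i for i in range(t))

# A set S with |S| much larger than 2^t, hashed into the 16 buckets.
S = random.sample(range(2 ** n), 2000)
counts = [0] * buckets
for x in S:
    counts[L(x)] += 1

# l_infinity distance of L(U_S) from the uniform distribution on F_2^t.
linf = max(abs(c / len(S) - 1 / buckets) for c in counts)
print(linf)  # typically far below 1/2^t = 0.0625: all buckets are near-even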

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.