
    Two Structural Results for Low Degree Polynomials and Applications

    In this paper, two structural results concerning low degree polynomials over finite fields are given. The first states that over any finite field $\mathbb{F}$, for any polynomial $f$ on $n$ variables with degree $d \le \log(n)/10$, there exists a subspace of $\mathbb{F}^n$ with dimension $\Omega(d \cdot n^{1/(d-1)})$ on which $f$ is constant. This result is shown to be tight. Stated differently, a degree $d$ polynomial cannot compute an affine disperser for dimension smaller than $\Omega(d \cdot n^{1/(d-1)})$. Using a recursive argument, we obtain our second structural result, showing that any degree $d$ polynomial $f$ induces a partition of $\mathbb{F}^n$ into affine subspaces of dimension $\Omega(n^{1/(d-1)!})$, such that $f$ is constant on each part. We extend both structural results to more than one polynomial. We further prove an analog of the first structural result for sparse polynomials (with no restriction on the degree) and for functions that are close to low degree polynomials. We also consider the algorithmic aspects of the two structural results. Our structural results have various applications, two of which are:
    * Dvir [CC 2012] introduced the notion of extractors for varieties, and gave explicit constructions of such extractors over large fields. We show that over any finite field, any affine extractor is also an extractor for varieties with related parameters. Our reduction also holds for dispersers, and we conclude that Shaltiel's affine disperser [FOCS 2011] is a disperser for varieties over $\mathbb{F}_2$.
    * Ben-Sasson and Kopparty [SIAM J. Comput. 2012] proved that any degree 3 affine disperser over a prime field is also an affine extractor with related parameters. Using our structural results, and based on the work of Kaufman and Lovett [FOCS 2008] and Haramaty and Shpilka [STOC 2010], we generalize this result to any constant degree.
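
    For a concrete sense of scale (a worked instantiation of the bounds exactly as stated above; the constants hidden in the $\Omega$-notation are not specified there), taking $d = 3$ gives
    \[
        \Omega\!\left(3 \cdot n^{1/(3-1)}\right) = \Omega\!\left(\sqrt{n}\right)
        \quad\text{for the single constant subspace, and}\quad
        \Omega\!\left(n^{1/(3-1)!}\right) = \Omega\!\left(\sqrt{n}\right)
        \quad\text{for each part of the affine partition,}
    \]
    so at degree 3 the two guarantees coincide up to constants, while for larger $d$ the partition bound $\Omega(n^{1/(d-1)!})$ decays much faster than the single-subspace bound $\Omega(d \cdot n^{1/(d-1)})$.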

    Affine extractors over large fields with exponential error

    We describe a construction of explicit affine extractors over large finite fields with exponentially small error and linear output length. Our construction relies on a deep theorem of Deligne giving tight estimates for exponential sums over smooth varieties in high dimensions. Comment: To appear in Comput. Complexity.
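
    For orientation, the shape of the Deligne estimate invoked here is the classical exponential-sum bound (stated in its standard hypersurface form; the paper may rely on a variant of it): for a nontrivial additive character $\psi$ of $\mathbb{F}_q$ and a degree-$d$ polynomial $f$ in $n$ variables whose top-degree homogeneous part defines a smooth projective hypersurface (with $d$ coprime to the characteristic),
    \[
        \left|\sum_{x \in \mathbb{F}_q^n} \psi\big(f(x)\big)\right| \;\le\; (d-1)^{n}\, q^{n/2}.
    \]
    Square-root cancellation of this kind is what makes an exponentially small extraction error possible over large fields.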

    Extractors for Polynomial Sources over $\mathbb{F}_2$

    We explicitly construct the first nontrivial extractors for degree $d \ge 2$ polynomial sources over $\mathbb{F}_2^n$. Our extractor requires min-entropy $k \geq n - \frac{\sqrt{\log n}}{(d \log\log n)^{d/2}}$. Previously, no constructions were known, even for min-entropy $k \geq n-1$. A key ingredient in our construction is an input reduction lemma, which allows us to assume that any polynomial source with min-entropy $k$ can be generated by $O(k)$ uniformly random bits. We also provide strong formal evidence that polynomial sources are unusually challenging to extract from, by showing that even our most powerful general purpose extractors cannot handle polynomial sources with min-entropy below $k \geq n - o(n)$. In more detail, we show that sumset extractors cannot even disperse from degree $2$ polynomial sources with min-entropy $k \geq n - O(n/\log\log n)$. In fact, this impossibility result even holds for a more specialized family of sources that we introduce, called polynomial non-oblivious bit-fixing (NOBF) sources. Polynomial NOBF sources are a natural new family of algebraic sources that lie at the intersection of polynomial and variety sources, and thus our impossibility result applies to both of these classical settings. This is especially surprising, since we do have variety extractors that slightly beat this barrier, implying that sumset extractors are not a panacea in the world of seedless extraction.
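
    To make the source model concrete, the sketch below (an illustration of the definition only, not the paper's construction; all names are ours) samples a degree-$d$ polynomial source over $\mathbb{F}_2^n$: each output bit is a uniformly chosen GF(2) polynomial of degree at most $d$ evaluated on $k$ uniformly random seed bits, so the seed length directly caps the min-entropy at $k$.

        # Illustrative sketch (not from the paper): sample a degree-d polynomial
        # source over F_2^n from k uniform seed bits. Each output coordinate is a
        # random GF(2) polynomial of degree <= d in the seed variables.
        import itertools
        import random

        def random_poly(k, d):
            """A random degree-<=d GF(2) polynomial in k variables, represented
            as a set of monomials (each monomial is a tuple of variable indices)."""
            monomials = []
            for deg in range(d + 1):
                monomials.extend(itertools.combinations(range(k), deg))
            return {m for m in monomials if random.random() < 0.5}

        def evaluate(poly, x):
            """Evaluate a GF(2) polynomial (set of monomials) on a 0/1 vector x."""
            return sum(all(x[i] for i in m) for m in poly) % 2

        def sample_polynomial_source(n, k, d, num_samples=5):
            """Fix n random degree-<=d output polynomials, then draw samples by
            feeding uniform k-bit seeds through all of them."""
            outputs = [random_poly(k, d) for _ in range(n)]
            samples = []
            for _ in range(num_samples):
                seed = [random.randint(0, 1) for _ in range(k)]
                samples.append([evaluate(p, seed) for p in outputs])
            return samples

        if __name__ == "__main__":
            for sample in sample_polynomial_source(n=8, k=4, d=2):
                print(sample)

    The abstract's input reduction lemma goes in the reverse direction: an arbitrary polynomial source of min-entropy $k$ may be assumed to be generated, as above, from only $O(k)$ uniform bits.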

    A composition theorem for parity kill number

    In this work, we study the parity complexity measures $\mathsf{C}^{\oplus}_{\min}[f]$ and $\mathsf{DT}^{\oplus}[f]$. $\mathsf{C}^{\oplus}_{\min}[f]$ is the \emph{parity kill number} of $f$, the minimum number of parities on the input variables one has to fix in order to "kill" $f$, i.e. to make it constant. $\mathsf{DT}^{\oplus}[f]$ is the depth of the shortest \emph{parity decision tree} which computes $f$. These complexity measures have in recent years become increasingly important in the fields of communication complexity \cite{ZS09, MO09, ZS10, TWXZ13} and pseudorandomness \cite{BK12, Sha11, CT13}. Our main result is a composition theorem for $\mathsf{C}^{\oplus}_{\min}$. The $k$-th power of $f$, denoted $f^{\circ k}$, is the function which results from composing $f$ with itself $k$ times. We prove that if $f$ is not a parity function, then $\mathsf{C}^{\oplus}_{\min}[f^{\circ k}] \geq \Omega(\mathsf{C}_{\min}[f]^{k})$. In other words, the parity kill number of $f$ is essentially supermultiplicative in the \emph{normal} kill number of $f$ (also known as the minimum certificate complexity). As an application of our composition theorem, we show lower bounds on the parity complexity measures of $\mathsf{Sort}^{\circ k}$ and $\mathsf{HI}^{\circ k}$. Here $\mathsf{Sort}$ is the sort function due to Ambainis \cite{Amb06}, and $\mathsf{HI}$ is Kushilevitz's hemi-icosahedron function \cite{NW95}. In doing so, we disprove a conjecture of Montanaro and Osborne \cite{MO09} which had applications to communication complexity and computational learning theory. In addition, we give new lower bounds for conjectures of \cite{MO09,ZS10} and \cite{TWXZ13}.
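
    As an illustration of the objects in the composition theorem (not of its proof; the function and variable names are ours), the sketch below brute-forces the ordinary minimum certificate complexity $\mathsf{C}_{\min}$ of a small Boolean function and builds the $k$-fold composition $f^{\circ k}$, on which the multiplicative growth of the kill number can be observed directly.

        # Illustrative sketch (names ours): brute-force C_min, the minimum number of
        # input variables one must fix to make a Boolean function constant, and build
        # the k-fold composition f^{o k}.
        import itertools

        def c_min(f, n):
            """Smallest |S| such that fixing the variables in S to some values
            makes f constant on all settings of the remaining variables."""
            for size in range(n + 1):
                for S in itertools.combinations(range(n), size):
                    for fixed in itertools.product([0, 1], repeat=size):
                        values = set()
                        for rest in itertools.product([0, 1], repeat=n - size):
                            x, it = [None] * n, iter(rest)
                            for i, v in zip(S, fixed):
                                x[i] = v
                            for i in range(n):
                                if x[i] is None:
                                    x[i] = next(it)
                            values.add(f(x))
                        if len(values) == 1:
                            return size
            return n

        def compose(f, n, k):
            """The k-th power f^{o k}: f composed with itself k times, on n**k inputs."""
            if k == 1:
                return f, n
            g, m = compose(f, n, k - 1)
            return (lambda x: f([g(x[i * m:(i + 1) * m]) for i in range(n)])), n * m

        if __name__ == "__main__":
            maj3 = lambda x: int(sum(x) >= 2)   # majority on 3 bits; not a parity
            print(c_min(maj3, 3))               # 2: fixing two bits to equal values kills MAJ3
            maj9, n9 = compose(maj3, 3, 2)
            print(c_min(maj9, n9))              # 4 = 2^2: the kill number compounds under composition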

    Two-Source Dispersers for Polylogarithmic Entropy and Improved Ramsey Graphs

    In his 1947 paper that inaugurated the probabilistic method, Erdős proved the existence of $2\log n$-Ramsey graphs on $n$ vertices. Matching Erdős' result with a constructive proof is a central problem in combinatorics that has gained significant attention in the literature. The state of the art result was obtained in the celebrated paper by Barak, Rao, Shaltiel and Wigderson [Ann. Math '12], who constructed a $2^{2^{(\log\log n)^{1-\alpha}}}$-Ramsey graph, for some small universal constant $\alpha > 0$. In this work, we significantly improve the result of Barak et al. and construct $2^{(\log\log n)^{c}}$-Ramsey graphs, for some universal constant $c$. In the language of theoretical computer science, our work resolves the problem of explicitly constructing two-source dispersers for polylogarithmic entropy.
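
    For reference, a graph on $n$ vertices is $K$-Ramsey if it contains neither a clique nor an independent set on $K$ vertices; the brute-force check below (an illustration of the definition only, names ours) makes the property concrete on a tiny example.

        # Illustrative sketch (names ours): check whether a small graph is K-Ramsey,
        # i.e. contains no clique and no independent set on K vertices.
        import itertools

        def is_k_ramsey(adj, K):
            """adj: symmetric 0/1 adjacency matrix given as a list of lists."""
            n = len(adj)
            for S in itertools.combinations(range(n), K):
                pairs = list(itertools.combinations(S, 2))
                if all(adj[u][v] for u, v in pairs):       # clique on S
                    return False
                if not any(adj[u][v] for u, v in pairs):   # independent set on S
                    return False
            return True

        if __name__ == "__main__":
            # The 5-cycle C5 is the classical witness that R(3, 3) > 5: it is 3-Ramsey.
            c5 = [[0] * 5 for _ in range(5)]
            for i in range(5):
                c5[i][(i + 1) % 5] = c5[(i + 1) % 5][i] = 1
            print(is_k_ramsey(c5, 3))   # True

    Erdős' argument shows such graphs exist non-explicitly already for $K = 2\log n$; the construction above achieves $K = 2^{(\log\log n)^{c}}$ explicitly.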

    Constructive Relationships Between Algebraic Thickness and Normality

    We study the relationship between two measures of Boolean functions: \emph{algebraic thickness} and \emph{normality}. For a function $f$, the algebraic thickness is a variant of the \emph{sparsity}, the number of nonzero coefficients in the unique GF(2) polynomial representing $f$, and the normality is the largest dimension of an affine subspace on which $f$ is constant. We show that for $0 < \epsilon < 2$, any function with algebraic thickness $n^{3-\epsilon}$ is constant on some affine subspace of dimension $\Omega(n^{\epsilon/2})$. Furthermore, we give an algorithm for finding such a subspace. We show that this is at most a factor of $\Theta(\sqrt{n})$ from the best guaranteed, and when restricted to the technique used, is at most a factor of $\Theta(\sqrt{\log n})$ from the best guaranteed. We also show that a concrete function, majority, has algebraic thickness $\Omega(2^{n^{1/6}})$. Comment: Final version published in FCT'201
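
    To make the sparsity underlying algebraic thickness concrete, the sketch below (names ours; an illustration of the definition, not of the paper's algorithm) computes the unique GF(2) polynomial of a Boolean function (its algebraic normal form) via the Möbius transform and counts the nonzero coefficients.

        # Illustrative sketch (names ours): compute the GF(2) polynomial (algebraic
        # normal form) of a Boolean function via the Moebius transform and count its
        # nonzero coefficients, i.e. the sparsity of which algebraic thickness is a variant.
        def anf(truth_table):
            """truth_table: list of 2**n bits, where index i encodes the input in binary.
            Returns ANF coefficients with the same indexing: entry m is the coefficient
            of the monomial whose variable set is the set of 1-bits of m."""
            coeffs = list(truth_table)
            n = len(coeffs).bit_length() - 1
            for i in range(n):                        # in-place Moebius transform mod 2
                for mask in range(len(coeffs)):
                    if mask & (1 << i):
                        coeffs[mask] ^= coeffs[mask ^ (1 << i)]
            return coeffs

        def sparsity(truth_table):
            return sum(anf(truth_table))

        if __name__ == "__main__":
            # Majority on 3 bits has ANF xy + yz + xz over GF(2), hence sparsity 3.
            maj3 = [1 if bin(i).count("1") >= 2 else 0 for i in range(8)]
            print(sparsity(maj3))   # 3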

    Three-Source Extractors for Polylogarithmic Min-Entropy

    We continue the study of constructing explicit extractors for independent general weak random sources. The ultimate goal is to give a construction that matches what is given by the probabilistic method: an extractor for two independent $n$-bit weak random sources with min-entropy as small as $\log n + O(1)$. Previously, the best known result in the two-source case was an extractor by Bourgain \cite{Bourgain05}, which works for min-entropy $0.49n$; and the best known result in the general case was an earlier work of the author \cite{Li13b}, which gives an extractor for a constant number of independent sources with min-entropy $\mathsf{polylog}(n)$. However, the constant in the construction of \cite{Li13b} depends on the hidden constant in the best known seeded extractor, and can be large; moreover, the error in that construction is only $1/\mathsf{poly}(n)$. In this paper, we make two important improvements over the result in \cite{Li13b}. First, we construct an explicit extractor for \emph{three} independent sources on $n$ bits with min-entropy $k \geq \mathsf{polylog}(n)$. In fact, our extractor works for one independent source with poly-logarithmic min-entropy and another independent block source with two blocks each having poly-logarithmic min-entropy. Thus, our result is nearly optimal, and the next step would be to break the $0.49n$ barrier in two-source extractors. Second, we improve the error of the extractor from $1/\mathsf{poly}(n)$ to $2^{-k^{\Omega(1)}}$, which is almost optimal and crucial for cryptographic applications. Some of the techniques developed here may be of independent interest.

    Deterministic Extractors for Additive Sources

    We propose a new model of a weakly random source that admits randomness extraction. Our model of additive sources includes such natural sources as uniform distributions on arithmetic progressions (APs), generalized arithmetic progressions (GAPs), and Bohr sets, each of which generalizes affine sources. We give an explicit extractor for additive sources with linear min-entropy over both $\mathbb{Z}_p$ and $\mathbb{Z}_p^n$, for large prime $p$, although our results over $\mathbb{Z}_p^n$ require that the source further satisfy a list-decodability condition. As a corollary, we obtain explicit extractors for APs, GAPs, and Bohr sources with linear min-entropy, although again our results over $\mathbb{Z}_p^n$ require the list-decodability condition. We further explore special cases of additive sources. We improve previous constructions of line sources (affine sources of dimension 1), requiring a field of size linear in $n$, rather than the $\Omega(n^2)$ required by Gabizon and Raz. This beats the non-explicit bound of $\Theta(n \log n)$ obtained by the probabilistic method. We then generalize this result to APs and GAPs.
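
    To illustrate the source model (not the extractor itself; all parameter names are ours), the sketch below samples from a generalized arithmetic progression in $\mathbb{Z}_p$: the output is $a_0 + \sum_i c_i a_i \bmod p$ for independent uniform coefficients $c_i$ in fixed ranges, so a rank-1 GAP is an ordinary AP.

        # Illustrative sketch (names ours): a generalized arithmetic progression (GAP)
        # source over Z_p, one of the additive sources handled above. The output is
        # a0 + c1*a1 + ... + cr*ar (mod p) with each ci uniform in [0, L_i).
        import random

        def sample_gap_source(p, a0, steps, lengths, num_samples=5):
            """steps = (a1, ..., ar), lengths = (L1, ..., Lr): a rank-r GAP in Z_p."""
            samples = []
            for _ in range(num_samples):
                x = a0
                for a_i, L_i in zip(steps, lengths):
                    x += a_i * random.randrange(L_i)
                samples.append(x % p)
            return samples

        if __name__ == "__main__":
            p = 1_000_003   # a prime modulus
            # A rank-2 GAP source: 42 + c1*17 + c2*9001 (mod p), c1 < 1000, c2 < 500.
            print(sample_gap_source(p, a0=42, steps=(17, 9001), lengths=(1000, 500)))

    When the coefficient tuples map injectively into $\mathbb{Z}_p$, such a source is uniform on a set of size $\prod_i L_i$ and so has min-entropy $\sum_i \log L_i$.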

    Algebraic and Combinatorial Methods in Computational Complexity

    At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples. Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification which yielded short PCPs with only a polylogarithmic length blowup (which had been the focus of significant research effort up to that point). We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings