
    Two Structural Results for Low Degree Polynomials and Applications

    In this paper, two structural results concerning low-degree polynomials over finite fields are given. The first states that over any finite field $\mathbb{F}$, for any polynomial $f$ on $n$ variables with degree $d \le \log(n)/10$, there exists a subspace of $\mathbb{F}^n$ with dimension $\Omega(d \cdot n^{1/(d-1)})$ on which $f$ is constant. This result is shown to be tight. Stated differently, a degree $d$ polynomial cannot compute an affine disperser for dimension smaller than $\Omega(d \cdot n^{1/(d-1)})$. Using a recursive argument, we obtain our second structural result, showing that any degree $d$ polynomial $f$ induces a partition of $\mathbb{F}^n$ into affine subspaces of dimension $\Omega(n^{1/(d-1)!})$, such that $f$ is constant on each part. We extend both structural results to more than one polynomial. We further prove an analog of the first structural result for sparse polynomials (with no restriction on the degree) and for functions that are close to low-degree polynomials. We also consider the algorithmic aspects of the two structural results. Our structural results have various applications, two of which are:
    * Dvir [CC 2012] introduced the notion of extractors for varieties, and gave explicit constructions of such extractors over large fields. We show that over any finite field, any affine extractor is also an extractor for varieties with related parameters. Our reduction also holds for dispersers, and we conclude that Shaltiel's affine disperser [FOCS 2011] is a disperser for varieties over $\mathbb{F}_2$.
    * Ben-Sasson and Kopparty [SIAM J. Comput. 2012] proved that any degree 3 affine disperser over a prime field is also an affine extractor with related parameters. Using our structural results, and based on the work of Kaufman and Lovett [FOCS 2008] and Haramaty and Shpilka [STOC 2010], we generalize this result to any constant degree.
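
    A formal reading of the two structural results, in the notation above (the hidden constants are not given in the abstract, and the quantifier structure below is a paraphrase rather than a quotation of the paper's theorems):

    \[
      \deg(f) = d \le \tfrac{\log n}{10} \;\Longrightarrow\; \exists\, U \subseteq \mathbb{F}^n \text{ affine},\ \dim(U) = \Omega\big(d \cdot n^{1/(d-1)}\big),\ f|_U \equiv \mathrm{const};
    \]
    \[
      \mathbb{F}^n = \bigsqcup_i A_i \ \text{ with each } A_i \text{ affine},\ \dim(A_i) = \Omega\big(n^{1/(d-1)!}\big),\ f|_{A_i} \equiv \mathrm{const}.
    \]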

    Three-Source Extractors for Polylogarithmic Min-Entropy

    We continue the study of constructing explicit extractors for independent general weak random sources. The ultimate goal is to give a construction that matches what is given by the probabilistic method --- an extractor for two independent $n$-bit weak random sources with min-entropy as small as $\log n + O(1)$. Previously, the best known result in the two-source case is an extractor by Bourgain \cite{Bourgain05}, which works for min-entropy $0.49n$; and the best known result in the general case is an earlier work of the author \cite{Li13b}, which gives an extractor for a constant number of independent sources with min-entropy $\mathrm{polylog}(n)$. However, the constant in the construction of \cite{Li13b} depends on the hidden constant in the best known seeded extractor, and can be large; moreover, the error in that construction is only $1/\mathrm{poly}(n)$. In this paper, we make two important improvements over the result in \cite{Li13b}. First, we construct an explicit extractor for \emph{three} independent sources on $n$ bits with min-entropy $k \geq \mathrm{polylog}(n)$. In fact, our extractor works for one independent source with poly-logarithmic min-entropy and another independent block source with two blocks, each having poly-logarithmic min-entropy. Thus, our result is nearly optimal, and the next step would be to break the $0.49n$ barrier in two-source extractors. Second, we improve the error of the extractor from $1/\mathrm{poly}(n)$ to $2^{-k^{\Omega(1)}}$, which is almost optimal and crucial for cryptographic applications. Some of the techniques developed here may be of independent interest.
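
    For reference, the standard notions used here, which the abstract assumes rather than restates (below, $U_m$ denotes the uniform distribution on $m$ bits and $\approx_\varepsilon$ denotes statistical distance at most $\varepsilon$):

    \[
      H_\infty(X) = \min_x \log_2 \frac{1}{\Pr[X = x]}, \qquad
      \mathrm{Ext}(X_1,\dots,X_t) \approx_\varepsilon U_m \ \text{ whenever } X_1,\dots,X_t \text{ are independent and } H_\infty(X_i) \ge k \text{ for every } i.
    \]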

    Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions

    In their seminal work, Chattopadhyay and Zuckerman (STOC 2016) constructed a two-source extractor with error $\varepsilon$ for $n$-bit sources having min-entropy $\mathrm{polylog}(n/\varepsilon)$. Unfortunately, the construction's running time is $\mathrm{poly}(n/\varepsilon)$, which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a $\mathrm{poly}(n, \log(1/\varepsilon))$-time computable two-source condenser. For any $k \geq \mathrm{polylog}(n/\varepsilon)$, our condenser transforms two independent $(n,k)$-sources to a distribution over $m = k - O(\log(1/\varepsilon))$ bits that is $\varepsilon$-close to having min-entropy $m - o(\log(1/\varepsilon))$, hence achieving an entropy gap of $o(\log(1/\varepsilon))$. The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, this is a function that receives input bits from $r$ players, with the property that the function's output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than $\ln r / r$. This, in turn, forces the running time of the construction to be polynomial in $1/\varepsilon$. A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, their success probability decreases exponentially in the entropy gap they are attempting to incur in a repeated game.
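
    In symbols, the main result gives a function $\mathrm{Cond}\colon \{0,1\}^n \times \{0,1\}^n \to \{0,1\}^m$, computable in time $\mathrm{poly}(n, \log(1/\varepsilon))$, such that (this merely restates the guarantee above; an $(n,k)$-source is an $n$-bit source with min-entropy at least $k$):

    \[
      m = k - O(\log(1/\varepsilon)), \qquad
      \mathrm{Cond}(X, Y) \ \text{ is } \varepsilon\text{-close to some } Z \text{ with } H_\infty(Z) \ge m - o(\log(1/\varepsilon)),
    \]
    for all independent $(n,k)$-sources $X$ and $Y$ with $k \ge \mathrm{polylog}(n/\varepsilon)$.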

    Improved Extractors for Recognizable and Algebraic Sources


    Extracting All the Randomness and Reducing the Error in Trevisan's Extractors

    We give explicit constructions of extractors which work for a source of any min-entropy on strings of length $n$. These extractors can extract any constant fraction of the min-entropy using $O(\log^2 n)$ additional random bits, and can extract all the min-entropy using $O(\log^3 n)$ additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to $2\log(1/\varepsilon) + O(1)$ bits, while still using $O(\log^3 n)$ truly random bits, where the entropy loss is defined as (source min-entropy) + (number of truly random bits used) - (number of output bits), and $\varepsilon$ is the statistical difference from uniform achieved. This entropy loss is optimal up to a constant additive term. Our extractors are obtained by observing that a weaker notion of "combinatorial design" suffices for the Nisan-Wigderson pseudorandom generator, which underlies the recent extractor of Trevisan. We give near-optimal constructions of such "weak designs" which achieve much better parameters than possible with the notion of designs used by Nisan-Wigderson and Trevisan. We also show how to improve our constructions (and Trevisan's construction) when the required statistical difference $\varepsilon$ from the uniform distribution is relatively small. This improvement is obtained by using multilinear error-correcting codes over finite fields, rather than the arbitrary error-correcting codes used by Trevisan.
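
    In symbols, for a seeded extractor $\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ run on a source of min-entropy $k$, the entropy loss defined above is (the symbol $\Delta$ is introduced here only for readability):

    \[
      \Delta = k + d - m, \qquad \text{and the improved construction achieves } \Delta = 2\log(1/\varepsilon) + O(1) \ \text{ with } d = O(\log^3 n).
    \]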

    Derandomizing Arthur-Merlin Games using Hitting Sets

    We prove that AM (and hence Graph Nonisomorphism) is in NP if for some $\varepsilon > 0$, some language in NE $\cap$ coNE requires nondeterministic circuits of size $2^{\varepsilon n}$. This improves recent results of Arvind and Köbler and of Klivans and van Melkebeek, who proved the same conclusion, but under stronger hardness assumptions, namely, either the existence of a language in NE $\cap$ coNE which cannot be approximated by nondeterministic circuits of size less than $2^{\varepsilon n}$, or the existence of a language in NE $\cap$ coNE which requires oracle circuits of size $2^{\varepsilon n}$ with oracle gates for SAT (satisfiability). The previous results on derandomizing AM were based on pseudorandom generators. In contrast, our approach is based on a strengthening of Andreev, Clementi and Rolim's hitting set approach to derandomization. As a spin-off, we show that this approach is strong enough to give an easy (if the existence of explicit dispersers can be assumed known) proof of the following implication: for some $\varepsilon > 0$, if there is a language in E which requires nondeterministic circuits of size $2^{\varepsilon n}$, then P = BPP. This differs from Impagliazzo and Wigderson's theorem "only" by replacing deterministic circuits with nondeterministic ones.
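
    Written as implications (a paraphrase of the abstract; $\mathrm{NSIZE}(s)$ is used here as an informal shorthand for languages computable by nondeterministic circuits of size $s$):

    \[
      \exists\, \varepsilon > 0,\ \exists\, L \in \mathrm{NE} \cap \mathrm{coNE}:\ L \notin \mathrm{NSIZE}\big(2^{\varepsilon n}\big) \;\Longrightarrow\; \mathrm{AM} \subseteq \mathrm{NP};
    \]
    \[
      \exists\, \varepsilon > 0,\ \exists\, L \in \mathrm{E}:\ L \notin \mathrm{NSIZE}\big(2^{\varepsilon n}\big) \;\Longrightarrow\; \mathrm{P} = \mathrm{BPP}.
    \]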