Two Structural Results for Low Degree Polynomials and Applications
In this paper, two structural results concerning low degree polynomials over
finite fields are given. The first states that over any finite field
$\mathbb{F}$, for any polynomial $f$ on $n$ variables of degree $d$, there exists a subspace of $\mathbb{F}^n$ of dimension $\Omega(n^{1/(d-1)!})$ on which $f$ is constant. This result is shown to be tight.
Stated differently, a degree-$d$ polynomial cannot compute an affine disperser
for dimension smaller than $\Omega(n^{1/(d-1)!})$. Using a recursive
argument, we obtain our second structural result, showing that any degree-$d$
polynomial $f$ induces a partition of $\mathbb{F}^n$ into affine subspaces of dimension
$\Omega(n^{1/(d-1)!})$, such that $f$ is constant on each part.
We extend both structural results to more than one polynomial. We further
prove an analog of the first structural result for sparse polynomials (with no
restriction on the degree) and for functions that are close to low degree
polynomials. We also consider the algorithmic aspects of the two structural
results.
Our structural results have various applications, two of which are:
* Dvir [CC 2012] introduced the notion of extractors for varieties, and gave
explicit constructions of such extractors over large fields. We show that over
any finite field, any affine extractor is also an extractor for varieties with
related parameters. Our reduction also holds for dispersers, and we conclude
that Shaltiel's affine disperser [FOCS 2011] is a disperser for varieties over
$\mathbb{F}_2$.
* Ben-Sasson and Kopparty [SIAM J. Comput. 2012] proved that any degree 3 affine
disperser over a prime field is also an affine extractor with related
parameters. Using our structural results, and based on the work of Kaufman and
Lovett [FOCS 2008] and Haramaty and Shpilka [STOC 2010], we generalize this
result to any constant degree.
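For intuition about the first structural result, the following is a brute-force sketch (purely illustrative, not the paper's algorithm; the helper `find_constant_line` and the example polynomial are our own) that finds a dimension-1 affine subspace, i.e. a line $\{x, x+a\}$ in $\mathbb{F}_2^n$, on which a given low-degree function is constant:

```python
from itertools import product

def find_constant_line(f, n):
    """Brute-force search for an affine line {x, x + a} in F_2^n on which
    f is constant, i.e. f(x) == f(x + a).  Returns (x, a) or None.
    Exponential time -- illustration only, not the paper's method."""
    points = list(product([0, 1], repeat=n))
    for x in points:
        for a in points:
            if any(a):  # the direction a must be nonzero to span a line
                xa = tuple(xi ^ ai for xi, ai in zip(x, a))
                if f(x) == f(xa):
                    return x, a
    return None

# Example: the degree-2 polynomial f = x1*x2 + x3 over F_2.
f = lambda x: (x[0] & x[1]) ^ x[2]
print(find_constant_line(f, 3))
```

For such small examples a constant line is found almost immediately; the content of the structural result is that much larger constant subspaces exist whenever the degree is low.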
Affine extractors over large fields with exponential error
We describe a construction of explicit affine extractors over large finite
fields with exponentially small error and linear output length. Our
construction relies on a deep theorem of Deligne giving tight estimates for
exponential sums over smooth varieties in high dimensions. Comment: To appear in Comput. Complexity.
Extractors for Polynomial Sources over $\mathbb{F}_2$
We explicitly construct the first nontrivial extractors for degree-$d$
polynomial sources over $\mathbb{F}_2$. Our extractor requires min-entropy
. Previously, no
constructions were known, even for min-entropy . A key ingredient in
our construction is an input reduction lemma, which allows us to assume that
any polynomial source with min-entropy can be generated by uniformly
random bits.
We also provide strong formal evidence that polynomial sources are unusually
challenging to extract from, by showing that even our most powerful general
purpose extractors cannot handle polynomial sources with min-entropy below
. In more detail, we show that sumset extractors cannot even
disperse from degree polynomial sources with min-entropy . In fact, this impossibility result even holds for a more
specialized family of sources that we introduce, called polynomial
non-oblivious bit-fixing (NOBF) sources. Polynomial NOBF sources are a natural
new family of algebraic sources that lie at the intersection of polynomial and
variety sources, and thus our impossibility result applies to both of these
classical settings. This is especially surprising, since we do have variety
extractors that slightly beat this barrier, implying that sumset extractors
are not a panacea in the world of seedless extraction.
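To make the model concrete, here is a toy sketch (our own illustration; the helper names and the example polynomials are hypothetical, not from the paper) of a polynomial source over $\mathbb{F}_2$: each output bit is a fixed low-degree polynomial evaluated on a uniform seed, and for tiny parameters the min-entropy can be computed exactly by enumeration:

```python
import random
from collections import Counter
from itertools import product
from math import log2

def polynomial_source(polys, n, trials=1):
    """Sample from a polynomial source over F_2: each output bit is a fixed
    polynomial evaluated on a uniform seed x in F_2^n.
    `polys` is a list of functions F_2^n -> {0, 1} (the degree-d maps)."""
    samples = []
    for _ in range(trials):
        x = tuple(random.randint(0, 1) for _ in range(n))
        samples.append(tuple(p(x) for p in polys))
    return samples

def min_entropy(polys, n):
    """Exact min-entropy H_inf(Y) = -log2 max_y Pr[Y = y], computed by
    enumerating the uniform seed (exponential in n; illustration only)."""
    counts = Counter(tuple(p(x) for p in polys)
                     for x in product([0, 1], repeat=n))
    return -log2(max(counts.values()) / 2 ** n)

# A toy degree-2 source: 3 seed bits, 2 output bits.
polys = [lambda x: x[0] ^ (x[1] & x[2]), lambda x: x[1] ^ (x[0] & x[2])]
print(min_entropy(polys, 3))
```

The quadratic terms skew the output distribution, which is exactly why extracting from such sources is harder than from affine ones.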
A composition theorem for parity kill number
In this work, we study the parity complexity measures
$\mathsf{C}^{\oplus}_{\min}[f]$ and $\mathsf{DT}^{\oplus}[f]$.
$\mathsf{C}^{\oplus}_{\min}[f]$ is the \emph{parity kill number} of $f$: the
fewest number of parities on the input variables one has to fix in order to
"kill" $f$, i.e. to make it constant. $\mathsf{DT}^{\oplus}[f]$ is the depth
of the shortest \emph{parity decision tree} which computes $f$. These
complexity measures have in recent years become increasingly important in the
fields of communication complexity \cite{ZS09, MO09, ZS10, TWXZ13} and
pseudorandomness \cite{BK12, Sha11, CT13}.
Our main result is a composition theorem for $\mathsf{C}^{\oplus}_{\min}$.
The $k$-th power of $f$, denoted $f^{\circ k}$, is the function which results
from composing $f$ with itself $k$ times. We prove that if $f$ is not a parity
function, then the parity kill number of $f^{\circ k}$ is essentially
supermultiplicative in the \emph{normal} kill number of $f$
(also known as the minimum certificate complexity).
As an application of our composition theorem, we show lower bounds on the
parity complexity measures of $\mathsf{Sort}^{\circ k}$ and
$\mathsf{HI}^{\circ k}$. Here $\mathsf{Sort}$ is the sort function due to Ambainis \cite{Amb06},
and $\mathsf{HI}$ is Kushilevitz's hemi-icosahedron function \cite{NW95}. In
doing so, we disprove a conjecture of Montanaro and Osborne \cite{MO09} which
had applications to communication complexity and computational learning theory.
In addition, we give new lower bounds for conjectures of \cite{MO09, ZS10} and
\cite{TWXZ13}.
Two-Source Dispersers for Polylogarithmic Entropy and Improved Ramsey Graphs
In his 1947 paper that inaugurated the probabilistic method, Erd\H{o}s proved
the existence of $2\log n$-Ramsey graphs on $n$ vertices. Matching Erd\H{o}s'
result with a constructive proof is a central problem in combinatorics that
has gained significant attention in the literature. The state of the art
result was obtained in the celebrated paper by Barak, Rao, Shaltiel and
Wigderson [Ann. Math'12], who constructed a
$2^{\log^{1-\alpha} n}$-Ramsey graph, for some small universal
constant $\alpha > 0$.
In this work, we significantly improve the result of Barak et al. and
construct $2^{(\log\log n)^c}$-Ramsey graphs, for some universal constant $c$.
In the language of theoretical computer science, our work resolves the problem
of explicitly constructing two-source dispersers for polylogarithmic entropy.
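Erd\H{o}s' argument is non-constructive: a uniformly random graph is Ramsey with high probability, but exhibits no explicit graph. The following small demo (our own sketch, not related to the constructions above) checks the $K$-Ramsey property, i.e. the absence of both a $K$-clique and a $K$-independent set, on a random graph:

```python
import random
from itertools import combinations

def is_K_ramsey(adj, n, K):
    """Check that a graph (adjacency matrix `adj`) has no clique and no
    independent set of size K.  Brute force over all K-subsets."""
    for S in combinations(range(n), K):
        edges = [adj[u][v] for u, v in combinations(S, 2)]
        if all(edges) or not any(edges):
            return False
    return True

# Erdos's calculation: a uniformly random graph on n vertices is
# (2 log2 n)-Ramsey with high probability.  Tiny demo with n = 16, K = 8.
random.seed(0)
n, K = 16, 8
adj = [[0] * n for _ in range(n)]
for u, v in combinations(range(n), 2):
    adj[u][v] = adj[v][u] = random.randint(0, 1)
print(is_K_ramsey(adj, n, K))
```

By a union bound, the expected number of monochromatic 8-sets here is about $\binom{16}{8} \cdot 2^{1-28} \approx 10^{-4}$, so the random graph passes the check almost surely; the hard open problem is producing such graphs deterministically.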
Constructive Relationships Between Algebraic Thickness and Normality
We study the relationship between two measures of Boolean functions:
\emph{algebraic thickness} and \emph{normality}. For a function $f$, the
algebraic thickness is a variant of the \emph{sparsity}, the number of nonzero
coefficients in the unique GF(2) polynomial representing $f$, and the normality
is the largest dimension of an affine subspace on which $f$ is constant. We
show that for , any function with algebraic thickness
is constant on some affine subspace of dimension
. Furthermore, we give an algorithm
for finding such a subspace. We show that this is at most a factor of
from the best guaranteed, and when restricted to the
technique used, is at most a factor of from the best
guaranteed. We also show that a concrete function, majority, has algebraic
thickness . Comment: Final version published in FCT'201
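The sparsity referred to above is the number of nonzero coefficients in the algebraic normal form (ANF), which can be computed with the standard fast M\"obius transform. A short sketch (our own illustration; the helper name is hypothetical):

```python
def anf_sparsity(truth_table):
    """Number of nonzero coefficients in the GF(2) polynomial (ANF) of a
    Boolean function given as a truth table of length 2^n, computed with
    the fast Mobius transform in O(n * 2^n)."""
    a = list(truth_table)
    n = len(a).bit_length() - 1
    for i in range(n):  # in-place butterfly over each variable
        for x in range(len(a)):
            if x >> i & 1:
                a[x] ^= a[x ^ (1 << i)]
    return sum(a)

# Majority of 3 bits has ANF x1*x2 + x1*x3 + x2*x3, so sparsity 3.
maj3 = [1 if bin(x).count("1") >= 2 else 0 for x in range(8)]
print(anf_sparsity(maj3))  # 3
```

Algebraic thickness is a variant of this count minimized over a class of affine changes of input, so the sparsity computed here is an upper bound on it.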
Three-Source Extractors for Polylogarithmic Min-Entropy
We continue the study of constructing explicit extractors for independent
general weak random sources. The ultimate goal is to give a construction that
matches what is given by the probabilistic method --- an extractor for two
independent $n$-bit weak random sources with min-entropy as small as $\log n + O(1)$. Previously, the best known result in the two-source case is an
extractor by Bourgain \cite{Bourgain05}, which works for min-entropy $(1/2-\alpha_0)n$ for some small universal constant $\alpha_0 > 0$;
and the best known result in the general case is an earlier work of the author
\cite{Li13b}, which gives an extractor for a constant number of independent
sources with min-entropy $\mathrm{polylog}(n)$. However, the constant in the
construction of \cite{Li13b} depends on the hidden constant in the best known
seeded extractor, and can be large; moreover, the error in that construction is
only $1/\mathrm{poly}(n)$.
In this paper, we make two important improvements over the result in
\cite{Li13b}. First, we construct an explicit extractor for \emph{three}
independent sources on $n$ bits with min-entropy $\mathrm{polylog}(n)$.
In fact, our extractor works for one independent source with poly-logarithmic
min-entropy and another independent block source with two blocks each having
poly-logarithmic min-entropy. Thus, our result is nearly optimal, and the next
step would be to break the barrier in two-source extractors. Second, we
improve the error of the extractor from $1/\mathrm{poly}(n)$ to
$2^{-k^{\Omega(1)}}$, which is almost optimal and crucial for cryptographic
applications. Some of the techniques developed here may be of independent
interest.
Deterministic Extractors for Additive Sources
We propose a new model of a weakly random source that admits randomness
extraction. Our model of additive sources includes such natural sources as
uniform distributions on arithmetic progressions (APs), generalized arithmetic
progressions (GAPs), and Bohr sets, each of which generalizes affine sources.
We give an explicit extractor for additive sources with linear min-entropy over
both $\mathbb{Z}_p$ and $\mathbb{Z}_p^n$, for large prime $p$, although our
results over $\mathbb{Z}_p^n$ require that the source further satisfy a
list-decodability condition. As a corollary, we obtain explicit extractors for
APs, GAPs, and Bohr sources with linear min-entropy, although again our results
over $\mathbb{Z}_p^n$ require the list-decodability condition. We further
explore special cases of additive sources. We improve previous constructions of
line sources (affine sources of dimension 1), requiring a field of size linear
in $n$, rather than the larger field size required by Gabizon and Raz. This beats the
non-explicit bound obtained by the probabilistic method.
We then generalize this result to APs and GAPs.
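To make the GAP model concrete, here is a small sampler sketch (our own illustration; the function name and parameters are hypothetical) for a rank-$d$ generalized arithmetic progression $\{a_0 + x_1 a_1 + \dots + x_d a_d \bmod p : 0 \le x_i < N_i\}$ in $\mathbb{Z}_p$:

```python
import random

def sample_gap(p, a0, steps, dims, trials=1):
    """Sample from a generalized arithmetic progression (GAP)
    { a0 + x1*a1 + ... + xd*ad mod p : 0 <= xi < Ni } in Z_p, with the
    coefficients xi chosen uniformly and independently.
    `steps` = (a1, ..., ad), `dims` = (N1, ..., Nd)."""
    out = []
    for _ in range(trials):
        v = a0
        for ai, Ni in zip(steps, dims):
            v += random.randrange(Ni) * ai
        out.append(v % p)
    return out

# A rank-2 GAP in Z_101: {5 + 3x + 17y : x < 4, y < 4}.  It has at most
# 16 distinct values, so its min-entropy is at most log2(16) = 4 bits.
random.seed(1)
print(sample_gap(101, 5, (3, 17), (4, 4), trials=5))
```

An AP is the rank-1 special case, and affine sources correspond to the setting where each $N_i$ equals the field characteristic, which is why GAPs generalize both.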
Algebraic and Combinatorial Methods in Computational Complexity
At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples.
Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses incurred in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification, which yielded short PCPs with only a polylogarithmic length blowup (a goal that had been the focus of significant research effort up to that point).
We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.