Two Structural Results for Low Degree Polynomials and Applications
In this paper, two structural results concerning low degree polynomials over
finite fields are given. The first states that over any finite field F, for any
polynomial f on n variables with degree d <= log(n)/10, there exists a subspace
of F^n with dimension Omega(d * n^{1/(d-1)}) on which f is constant. This result
is shown to be tight. Stated differently, a degree-d polynomial cannot compute
an affine disperser for dimension smaller than Omega(d * n^{1/(d-1)}). Using a
recursive argument, we obtain our second structural result, showing that any
degree-d polynomial f induces a partition of F^n into affine subspaces of
dimension Omega(n^{1/(d-1)!}), such that f is constant on each part.
We extend both structural results to more than one polynomial. We further
prove an analog of the first structural result for sparse polynomials (with no
restriction on the degree) and for functions that are close to low degree
polynomials. We also consider the algorithmic aspects of the two structural
results.
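To make the first structural result concrete, here is a minimal sketch (ours, not the paper's; the polynomial and subspace are illustrative) showing a degree-2 polynomial over F_2 that is constant on a subspace of dimension n/2: the inner-product polynomial vanishes once every even-indexed coordinate is fixed to 0.

    # Illustrative only: f(x) = x_0 x_1 + x_2 x_3 + ... over F_2 is constant (= 0)
    # on the (n/2)-dimensional subspace where all even-indexed coordinates are 0.
    import itertools

    n = 8  # small enough to enumerate the whole subspace

    def f(x):
        return sum(x[i] * x[i + 1] for i in range(0, n, 2)) % 2

    free = list(range(1, n, 2))  # odd-indexed coordinates remain free
    for bits in itertools.product([0, 1], repeat=len(free)):
        x = [0] * n
        for i, b in zip(free, bits):
            x[i] = b
        assert f(x) == 0  # constant on the entire subspace
    print("degree-2 f is constant on a subspace of dimension", n // 2)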
Our structural results have various applications, two of which are:
* Dvir [CC 2012] introduced the notion of extractors for varieties, and gave
explicit constructions of such extractors over large fields. We show that over
any finite field, any affine extractor is also an extractor for varieties with
related parameters. Our reduction also holds for dispersers, and we conclude
that Shaltiel's affine disperser [FOCS 2011] is a disperser for varieties over
F_2.
* Ben-Sasson and Kopparty [SIAM J. Comput. 2012] proved that any degree 3 affine
disperser over a prime field is also an affine extractor with related
parameters. Using our structural results, and based on the work of Kaufman and
Lovett [FOCS 2008] and Haramaty and Shpilka [STOC 2010], we generalize this
result to any constant degree.
On the Sensitivity Conjecture
The sensitivity of a Boolean function f:{0,1}^n -> {0,1} is the maximal number of neighbors a point in the Boolean hypercube has with a different f-value. Roughly speaking, the block sensitivity allows one to flip a set of bits (called a block), rather than just one bit, in order to change the value of f. The sensitivity conjecture, posed by Nisan and Szegedy (CC, 1994), states that the block sensitivity, bs(f), is at most polynomial in the sensitivity, s(f), for any Boolean function f. A positive answer to the conjecture would have many consequences, as the block sensitivity is polynomially related to many other complexity measures, such as the certificate complexity, the decision tree complexity and the degree. The conjecture is far from being understood, as there is an exponential gap between the known upper and lower bounds relating bs(f) and s(f).
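As a hedged illustration of these two measures (our own brute-force sketch, feasible only for tiny n), the following computes s(f) and bs(f) by enumerating all points and all sensitive blocks:

    # Brute-force s(f) and bs(f) for a Boolean function on n bits (tiny n only).
    from itertools import combinations, product

    def sensitivity(f, n):
        # s(f): max number of single-bit flips that change f at some point
        return max(sum(f(x[:i] + (1 - x[i],) + x[i+1:]) != f(x) for i in range(n))
                   for x in product([0, 1], repeat=n))

    def block_sensitivity(f, n):
        def flip(x, block):
            return tuple(1 - b if i in block else b for i, b in enumerate(x))
        def max_disjoint(blocks, used):
            # largest set of pairwise-disjoint sensitive blocks (exponential search)
            return max([1 + max_disjoint(blocks[i+1:], used | b)
                        for i, b in enumerate(blocks) if not (b & used)] + [0])
        best = 0
        for x in product([0, 1], repeat=n):
            blocks = [frozenset(c) for r in range(1, n + 1)
                      for c in combinations(range(n), r) if f(flip(x, c)) != f(x)]
            best = max(best, max_disjoint(blocks, frozenset()))
        return best

    f = lambda x: int(any(x))  # OR on n bits
    print(sensitivity(f, 3), block_sensitivity(f, 3))  # OR has s = bs = 3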
We continue a line of work started by Kenyon and Kutin (Inf. Comput., 2004), studying the l-block sensitivity, bs_l(f), where l bounds the size of sensitive blocks. While for bs_2(f) the picture is well understood, with almost matching upper and lower bounds, for bs_3(f) it is not. We show that any progress in understanding bs_3(f) in terms of s(f) will have great implications for the original question. Namely, we show that either bs(f) is at most sub-exponential in s(f) (which would improve the state-of-the-art upper bounds) or bs_3(f) >= s(f)^{3-epsilon} for some Boolean functions (which would improve the state-of-the-art separations).
We generalize the question of bs(f) versus s(f) to bounded functions f:{0,1}^n -> [0,1] and show a result analogous to that of Kenyon and Kutin: bs_l(f) = O(s(f))^l. Surprisingly, in this case, the bounds are close to being tight. In particular, we construct a bounded function f:{0,1}^n -> [0,1] with bs(f) >= n/log(n) and s(f) = O(log(n)), a clear counterexample to the sensitivity conjecture for bounded functions.
Finally, we give a new super-quadratic separation between sensitivity and decision tree complexity by constructing Boolean functions with DT(f) >= s(f)^{2.115}. Prior to this work, only quadratic separations, DT(f) = s(f)^2, were known.
Cubic Formula Size Lower Bounds Based on Compositions with Majority
We define new functions based on the Andreev function and prove that they require n^{3}/polylog(n) formula size to compute. The functions we consider are generalizations of the Andreev function using compositions with the majority function. Our arguments apply to composing a hard function with any function that agrees with the majority function (or its negation) on the middle slices of the Boolean cube, as well as to iterated compositions of such functions. As a consequence, we obtain an n^{3}/polylog(n) lower bound on the (non-monotone) formula size of an explicit monotone function, built by combining the monotone address function with the majority function.
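The composition scheme described above can be sketched in a few lines (a toy instance of the pattern, not the paper's lower-bound argument; the inner "hard" function here is a stand-in):

    # g = h(MAJ(block_1), ..., MAJ(block_k)): each input of h is replaced by the
    # majority of a fresh block of variables.
    def majority(bits):
        return int(sum(bits) > len(bits) // 2)

    def compose_with_majority(h, arity, block_size):
        def g(x):
            assert len(x) == arity * block_size
            inner = [majority(x[i * block_size:(i + 1) * block_size])
                     for i in range(arity)]
            return h(inner)
        return g

    h = lambda y: sum(y) % 2  # toy stand-in for the Andreev-style hard function
    g = compose_with_majority(h, arity=2, block_size=3)
    print(g([1, 1, 0, 0, 0, 1]))  # MAJ(1,1,0)=1, MAJ(0,0,1)=0, h(1,0)=1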
Tight Bounds on the Fourier Spectrum of AC0
We show that AC^0 circuits on n variables with depth d and size m have at most 2^{-Omega(k/log^{d-1} m)} of their Fourier mass at level k or above. Our proof builds on a previous result by Håstad (SICOMP, 2014), who proved this bound for the special case k = n. Our result improves the seminal result of Linial, Mansour and Nisan (JACM, 1993) and is tight up to the constants hidden in the Omega notation.
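For readers who want to see the quantity being bounded, here is a brute-force sketch (ours; tractable only for tiny n) that computes the Fourier mass of a function at levels k and above, with f in the {-1,1} convention:

    # Fourier mass at levels >= k: sum of \hat{f}(S)^2 over all |S| >= k.
    from itertools import combinations, product

    def fourier_mass_above(f, n, k):
        total = 0.0
        for size in range(k, n + 1):
            for S in combinations(range(n), size):
                # \hat{f}(S) = E_x[ f(x) * (-1)^{sum_{i in S} x_i} ]
                coeff = sum(f(x) * (-1) ** sum(x[i] for i in S)
                            for x in product([0, 1], repeat=n)) / 2 ** n
                total += coeff ** 2
        return total

    f = lambda x: -1 if all(x) else 1  # AND on 3 bits, in the {-1,1} convention
    for k in range(4):
        print(k, fourier_mass_above(f, 3, k))  # mass decays at higher levels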
As an application, we improve Braverman's celebrated result (JACM, 2010). Braverman showed that any r(m,d,epsilon)-wise independent distribution epsilon-fools AC^0 circuits of size m and depth d, for r(m,d,epsilon) = O(log(m/epsilon))^{2d^2+7d+3}. Our improved bounds on the Fourier tails of AC^0 circuits allow us to improve this estimate to r(m,d,epsilon) = O(log(m/epsilon))^{3d+3}. In contrast, an example by Mansour (appearing in Luby and Velickovic's paper, Algorithmica, 1996) shows that there is a log^{d-1}(m) log(1/epsilon)-wise independent distribution that does not epsilon-fool AC^0 circuits of size m and depth d. Hence, our result is tight up to a constant factor in the exponent.
Extractor-Based Time-Space Lower Bounds for Learning
A matrix M : A x X -> {-1,1} corresponds to the following
learning problem: An unknown element x in X is chosen uniformly at random. A
learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2), ...,
where for every i, a_i in A is chosen uniformly at random and b_i = M(a_i, x).
Assume that k, l, r are such that any submatrix of M of at least 2^{-k}|A|
rows and at least 2^{-l}|X| columns has a bias of at most 2^{-r}. We show that
any learning algorithm for the learning problem corresponding to M requires
either a memory of size at least Omega(k*l), or at least 2^{Omega(r)} samples.
The result holds even if the learner has an exponentially small success
probability (of 2^{-Omega(r)}).
In particular, this shows that for a large class of learning problems, any
learning algorithm requires either a memory of size at least
Omega((log|X|)*(log|A|)) or an exponential number of samples, achieving a tight
lower bound on the size of the memory, rather than a bound of
Omega(min{(log|X|)^2, (log|A|)^2}) obtained in previous works [R17, MM17b].
Moreover, our result implies all previous memory-samples lower bounds, as
well as a number of new applications.
Our proof builds on [R17], which gave a general technique for proving
memory-samples lower bounds.
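As a concrete instantiation of this framework (parity learning, the canonical example from this line of work; the code itself is our illustrative sketch), take A = X = {0,1}^n and M(a, x) = (-1)^{<a,x>}; the snippet below just generates the sample stream the learner sees:

    # Parity learning: learn x from samples (a, (-1)^{<a,x>}), a uniform in {0,1}^n.
    import random

    n = 16
    x = [random.randint(0, 1) for _ in range(n)]  # unknown element, uniform in X

    def sample():
        a = [random.randint(0, 1) for _ in range(n)]  # uniform row index in A
        b = (-1) ** (sum(ai * xi for ai, xi in zip(a, x)) % 2)  # b = M(a, x)
        return a, b

    for a, b in (sample() for _ in range(5)):
        print(a, b)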
Low-Sensitivity Functions from Unambiguous Certificates
We provide new query complexity separations against sensitivity for total
Boolean functions: a power 3 separation between deterministic (and even
randomized or quantum) query complexity and sensitivity, and a power 2.22
separation between certificate complexity and sensitivity. We get these
separations by using a new connection between sensitivity and a seemingly
unrelated measure called one-sided unambiguous certificate complexity
(UC_min). We also show that UC_min is lower-bounded by fractional block
sensitivity, which means we cannot use these techniques to get a
super-quadratic separation between bs(f) and s(f). We also provide a
quadratic separation between the tree-sensitivity and decision tree complexity
of Boolean functions, disproving a conjecture of Gopalan, Servedio, Tal, and
Wigderson (CCC 2016).
Along the way, we give a power 1.22 separation between certificate
complexity and one-sided unambiguous certificate complexity, improving the
power 1.128 separation due to Göös (FOCS 2015). As a consequence, we
obtain an improved lower bound on the
co-nondeterministic communication complexity of the Clique vs. Independent Set
problem.
Pseudorandom Generators for Low Sensitivity Functions
A Boolean function is said to have maximal sensitivity s if s is the largest number of Hamming neighbors of a point which differ from it in function value. We initiate the study of pseudorandom generators fooling low-sensitivity functions as an intermediate step towards settling the sensitivity conjecture. We construct a pseudorandom generator with seed-length 2^{O(s^{1/2})} log(n) that fools Boolean functions on n variables with maximal sensitivity at most s. Prior to our work, the (implicitly) best pseudorandom generators for this class of functions required seed-length 2^{O(s)} log(n).
On the Computational Power of Radio Channels
Radio networks can be a challenging platform for which to develop distributed algorithms, because the network nodes must contend for a shared channel. In some cases, though, the shared medium is an advantage rather than a disadvantage: for example, many radio network algorithms cleverly use the shared channel to approximate the degree of a node, or to estimate the contention. In this paper we ask how far the inherent power of a shared radio channel goes, and whether it can efficiently compute "classically hard" functions such as Majority, Approximate Sum, and Parity.
Using techniques from circuit complexity, we show that in many cases, the answer is "no". We show that simple radio channels, such as the beeping model or the channel with collision-detection, can be approximated by a low-degree polynomial, which makes them subject to known lower bounds on functions such as Parity and Majority; we obtain round lower bounds of the form Omega(n^{delta}) on these functions, for delta in (0,1). Next, we use the technique of random restrictions, used to prove AC^0 lower bounds, to prove a tight lower bound of Omega(1/epsilon^2) on computing a (1 +/- epsilon)-approximation to the sum of the nodes' inputs. Our techniques are general, and apply to many types of radio channels studied in the literature.
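As a minimal sketch of why such channels have low-degree behavior (our illustration; the polynomial-approximation argument itself is the paper's technical content), a single round of the beeping model computes just an OR of the nodes' transmit decisions:

    # One beeping-model round: every node hears a beep iff at least one node beeped.
    import random

    def beeping_round(decisions):
        return int(any(decisions))  # the channel outcome is an OR

    n, p = 10, 0.2
    decisions = [int(random.random() < p) for _ in range(n)]  # random transmit choices
    print(decisions, "->", beeping_round(decisions))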