On active and passive testing
Given a property of Boolean functions, what is the minimum number of queries
required to determine with high probability if an input function satisfies this
property or is "far" from satisfying it? This is a fundamental question in
Property Testing, where traditionally the testing algorithm is allowed to pick
its queries among the entire set of inputs. Balcan, Blais, Blum and Yang have
recently suggested restricting the tester to taking its queries from a smaller
random subset of the inputs, of polynomial size. This model is called active
testing, and in the extreme case, when the size of the set we can query from is
exactly the number of queries performed, it is known as passive testing.
We prove that passive or active testing of k-linear functions (that is, sums
of k variables among n over Z_2) requires Theta(k*log n) queries, assuming k is
not too large. This extends the case k=1 (that is, dictator functions),
analyzed by Balcan et al.
We also consider other classes of functions including low degree polynomials,
juntas, and partially symmetric functions. Our methods combine algebraic,
combinatorial, and probabilistic techniques, including the Talagrand
concentration inequality and the Erdős–Rado theorem on Delta-systems.
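To make the tested objects concrete, here is a minimal sketch (not from the paper) of a k-linear function over Z_2 and the normalized Hamming distance used to measure being "far" from a property. The helper names `k_linear` and `dist` are illustrative, not from the source.

```python
import itertools

def k_linear(S):
    """Return the k-linear function x -> sum of x_i over i in S (mod 2)."""
    return lambda x: sum(x[i] for i in S) % 2

def dist(f, g, n):
    """Normalized Hamming distance between two Boolean functions on {0,1}^n."""
    pts = list(itertools.product((0, 1), repeat=n))
    return sum(f(x) != g(x) for x in pts) / len(pts)

n = 4
f = k_linear({0, 2})       # 2-linear: x0 + x2 (mod 2)
g = k_linear({0, 1, 2})    # 3-linear: x0 + x1 + x2 (mod 2)
# Distinct parities disagree on exactly half the inputs, so this prints 0.5.
print(dist(f, g, n))
```

A tester must distinguish such parities from functions at distance at least epsilon from every k-linear function, using only queries drawn from a random polynomial-size sample of {0,1}^n.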
Extractor-Based Time-Space Lower Bounds for Learning
A matrix M: A x X -> {-1, 1} corresponds to the following
learning problem: An unknown element x in X is chosen uniformly at random. A
learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2), ...,
where for every i, a_i in A is chosen uniformly at random and b_i = M(a_i, x).
Assume that k, l, r are such that any submatrix of M of at least
2^{-k} * |A| rows and at least 2^{-l} * |X| columns has a bias
of at most 2^{-r}. We show that any learning algorithm for the learning
problem corresponding to M requires either a memory of size at least
Omega(k * l), or at least 2^{Omega(r)} samples. The
result holds even if the learner has an exponentially small success probability
(of 2^{-Omega(r)}).
In particular, this shows that for a large class of learning problems, any
learning algorithm requires either a memory of size at least
Omega((log |X|) * (log |A|)) or an exponential number of samples, achieving a
tight Omega((log |X|) * (log |A|)) lower bound on the size
of the memory, rather than a bound of Omega(min{(log |X|)^2, (log |A|)^2})
obtained in previous works [R17, MM17b].
Moreover, our result implies all previous memory-samples lower bounds, as
well as a number of new applications.
Our proof builds on [R17], which gave a general technique for proving
memory-samples lower bounds.
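A canonical instance of this framework (an illustration under the usual 0/1 encoding of the ±1 matrix entries, not spelled out above) is parity learning: A = X = {0,1}^n and M(a, x) = (-1)^{<a, x>}. A sketch of the sample stream:

```python
import random

def parity_sample_stream(x, num_samples, rng):
    """Stream of samples (a_i, b_i) with b_i = <a_i, x> mod 2,
    i.e. M(a, x) = (-1)^{<a, x>} in 0/1 encoding."""
    n = len(x)
    for _ in range(num_samples):
        a = [rng.randint(0, 1) for _ in range(n)]
        b = sum(ai * xi for ai, xi in zip(a, x)) % 2
        yield a, b

rng = random.Random(0)
n = 8
x = [rng.randint(0, 1) for _ in range(n)]  # the unknown element
# A learner with unbounded memory recovers x from O(n) samples by Gaussian
# elimination over Z_2; the lower bound says that with o(n^2) bits of memory,
# exponentially many samples are needed.
for a, b in parity_sample_stream(x, 3, rng):
    print(a, b)
```

Here log |X| = log |A| = n, so the Omega((log |X|) * (log |A|)) bound specializes to the tight Omega(n^2) memory bound for parity learning.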
Two Structural Results for Low Degree Polynomials and Applications
In this paper, two structural results concerning low degree polynomials over
finite fields are given. The first states that over any finite field F, for any
polynomial f on n variables with degree d <= log(n)/10, there exists a subspace
of F^n with dimension Omega(d * n^{1/(d-1)}) on which f is constant. This
result is shown to be tight.
Stated differently, a degree d polynomial cannot compute an affine disperser
for dimension smaller than Omega(d * n^{1/(d-1)}). Using a recursive
argument, we obtain our second structural result, showing that any degree d
polynomial f induces a partition of F^n to affine subspaces of dimension
Omega(n^{1/(d-1)!}), such that f is constant on each part.
We extend both structural results to more than one polynomial. We further
prove an analog of the first structural result to sparse polynomials (with no
restriction on the degree) and to functions that are close to low degree
polynomials. We also consider the algorithmic aspect of the two structural
results.
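To illustrate the first structural result on a toy scale (an illustrative sketch, not the paper's construction; the names `f` and `affine_subspace` are ours), here is a degree-2 polynomial over F_2 that is constant on an explicit affine subspace:

```python
import itertools

def f(x):
    """A degree-2 polynomial over F_2 on 3 variables: x0*x1 + x2."""
    return (x[0] * x[1] + x[2]) % 2

def affine_subspace(base, directions):
    """All points base + span(directions) over F_2."""
    pts = set()
    for coeffs in itertools.product((0, 1), repeat=len(directions)):
        p = list(base)
        for c, d in zip(coeffs, directions):
            if c:
                p = [(pi + di) % 2 for pi, di in zip(p, d)]
        pts.add(tuple(p))
    return pts

# f equals 1 on the affine line (1,1,0) + span{(1,1,1)}:
S = affine_subspace((1, 1, 0), [(1, 1, 1)])
print(sorted(S), {f(p) for p in S})  # two points, a single value of f
```

The theorem guarantees such constant affine subspaces of dimension Omega(d * n^{1/(d-1)}) for every degree-d polynomial, which is what rules out low degree affine dispersers.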
Our structural results have various applications, two of which are:
* Dvir [CC 2012] introduced the notion of extractors for varieties, and gave
explicit constructions of such extractors over large fields. We show that over
any finite field, any affine extractor is also an extractor for varieties with
related parameters. Our reduction also holds for dispersers, and we conclude
that Shaltiel's affine disperser [FOCS 2011] is a disperser for varieties over
F_2.
* Ben-Sasson and Kopparty [SIAM J. C 2012] proved that any degree 3 affine
disperser over a prime field is also an affine extractor with related
parameters. Using our structural results, and based on the work of Kaufman and
Lovett [FOCS 2008] and Haramaty and Shpilka [STOC 2010], we generalize this
result to any constant degree.
Random low degree polynomials are hard to approximate
Electronic Colloquium on Computational Complexity, Report No. 80 (2008)
We study the problem of how well a typical multivariate polynomial can be approximated by lower degree polynomials over F2. We prove that, with very high probability, a random degree d polynomial has only an exponentially small correlation with all polynomials of degree d − 1, for all degrees d up to Θ(n). That is, a random degree d polynomial does not admit good approximations of lesser degree. In order to prove this, we prove far tail estimates on the distribution of the bias of a random low degree polynomial. As part of the proof, we also prove tight lower bounds on the dimension of truncated Reed–Muller codes.
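The bias at the heart of the tail estimates can be sampled directly on a small scale. The following sketch (our illustration; `random_poly` and `bias` are names we introduce, and monomials are included independently with probability 1/2 as one natural model of a random degree-d polynomial) computes bias(f) = |E[(-1)^{f(x)}]| by brute force:

```python
import itertools
import random

def random_poly(n, d, rng):
    """Random polynomial over F_2 of degree <= d: include each monomial of
    at most d variables independently with probability 1/2."""
    monomials = [S for k in range(d + 1)
                 for S in itertools.combinations(range(n), k)
                 if rng.random() < 0.5]
    return lambda x: sum(all(x[i] for i in S) for S in monomials) % 2

def bias(f, n):
    """bias(f) = |E[(-1)^f(x)]| over uniform x in {0,1}^n."""
    pts = list(itertools.product((0, 1), repeat=n))
    return abs(sum((-1) ** f(x) for x in pts)) / len(pts)

rng = random.Random(1)
n, d = 8, 2
print(bias(random_poly(n, d, rng), n))  # typically close to 0
```

A function correlates well with a degree-(d-1) polynomial g exactly when f + g has large bias, so the far tail estimates on the bias of a random low degree polynomial translate into the inapproximability statement above.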