
    On active and passive testing

    Given a property of Boolean functions, what is the minimum number of queries required to determine with high probability whether an input function satisfies this property or is "far" from satisfying it? This is a fundamental question in Property Testing, where traditionally the testing algorithm is allowed to pick its queries from the entire set of inputs. Balcan, Blais, Blum and Yang have recently suggested restricting the tester to take its queries from a smaller, random subset of the inputs of polynomial size. This model is called active testing, and in the extreme case when the size of the set we can query from is exactly the number of queries performed, it is known as passive testing. We prove that passive or active testing of $k$-linear functions (that is, sums of $k$ variables among $n$ over $\mathbb{Z}_2$) requires $\Theta(k \log n)$ queries, assuming $k$ is not too large. This extends the case $k=1$ (that is, dictator functions) analyzed by Balcan et al. We also consider other classes of functions, including low degree polynomials, juntas, and partially symmetric functions. Our methods combine algebraic, combinatorial, and probabilistic techniques, including the Talagrand concentration inequality and the Erdős–Rado theorem on $\Delta$-systems.
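    A minimal brute-force sketch (not from the paper; all names and parameter choices below are illustrative) of the passive-testing model for $k$-linear functions: the tester receives only uniformly random labeled points and accepts if and only if the labels are consistent with some parity on exactly $k$ of the $n$ variables.

```python
import itertools
import random

def passive_test_k_linear(samples, n, k):
    """Accept iff the observed samples are consistent with some k-parity.

    samples: list of (x, y) pairs, where x is a tuple of n bits and y is a bit.
    A k-parity is f(x) = sum of x_i over a fixed index set S of size k, mod 2.
    Brute force over all k-subsets: exponential in k, for illustration only.
    """
    for S in itertools.combinations(range(n), k):
        if all(sum(x[i] for i in S) % 2 == y for x, y in samples):
            return True
    return False

# Tiny demo: q uniformly random labeled points, labeled either by a true
# 3-parity or by coin flips (a function that is far from every 3-parity).
n, k, q = 10, 3, 40
S_true = random.sample(range(n), k)
points = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(q)]

parity_samples = [(x, sum(x[i] for i in S_true) % 2) for x in points]
random_samples = [(x, random.randint(0, 1)) for x in points]

print(passive_test_k_linear(parity_samples, n, k))  # True
print(passive_test_k_linear(random_samples, n, k))  # almost surely False
```

    With on the order of $k \log n$ samples, a union bound over the roughly $n^k$ candidate parities makes a false accept unlikely, which is the query regime the paper's $\Theta(k \log n)$ bound concerns.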

    Extractor-Based Time-Space Lower Bounds for Learning

    A matrix $M: A \times X \rightarrow \{-1,1\}$ corresponds to the following learning problem: An unknown element $x \in X$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples, $(a_1, b_1), (a_2, b_2), \ldots$, where for every $i$, $a_i \in A$ is chosen uniformly at random and $b_i = M(a_i, x)$. Assume that $k, \ell, r$ are such that any submatrix of $M$ of at least $2^{-k} \cdot |A|$ rows and at least $2^{-\ell} \cdot |X|$ columns, has a bias of at most $2^{-r}$. We show that any learning algorithm for the learning problem corresponding to $M$ requires either a memory of size at least $\Omega(k \cdot \ell)$, or at least $2^{\Omega(r)}$ samples. The result holds even if the learner has an exponentially small success probability (of $2^{-\Omega(r)}$). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least $\Omega((\log |X|) \cdot (\log |A|))$ or an exponential number of samples, achieving a tight $\Omega((\log |X|) \cdot (\log |A|))$ lower bound on the size of the memory, rather than a bound of $\Omega(\min\{(\log |X|)^2, (\log |A|)^2\})$ obtained in previous works [R17, MM17b]. Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on [R17] that gave a general technique for proving memory-samples lower bounds.
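    The canonical special case of this setup is parity learning, where $A = X = \{0,1\}^n$ and $M(a,x) = (-1)^{\langle a, x \rangle \bmod 2}$, so $\Omega((\log|X|)\cdot(\log|A|)) = \Omega(n^2)$. Below is a small illustrative sketch (mine, not from the paper): a learner that stores all the linear constraints, roughly $n^2$ bits of memory, recovers $x$ from $O(n)$ samples by Gaussian elimination over $\mathbb{F}_2$, which is the kind of memory budget the lower bound says cannot be substantially reduced without paying exponentially many samples.

```python
import random

n = 16
x_secret = [random.randint(0, 1) for _ in range(n)]

def sample():
    """One stream element (a, b) with b = <a, x_secret> mod 2 (the 0/1 form of M(a, x))."""
    a = [random.randint(0, 1) for _ in range(n)]
    b = sum(ai * xi for ai, xi in zip(a, x_secret)) % 2
    return a, b

def gaussian_elimination_learner(num_samples):
    """High-memory learner: store all equations and solve them over F_2 (Gauss-Jordan)."""
    rows = [sample() for _ in range(num_samples)]
    A = [a[:] + [b] for a, b in rows]  # augmented matrix over F_2
    pivot_row = 0
    for col in range(n):
        r = next((i for i in range(pivot_row, len(A)) if A[i][col]), None)
        if r is None:
            continue
        A[pivot_row], A[r] = A[r], A[pivot_row]
        for i in range(len(A)):
            if i != pivot_row and A[i][col]:
                A[i] = [(u + v) % 2 for u, v in zip(A[i], A[pivot_row])]
        pivot_row += 1
    # Read off the solution (assumes full rank, which holds w.h.p. with 2n samples).
    guess = [0] * n
    for row in A:
        lead = next((j for j in range(n) if row[j]), None)
        if lead is not None:
            guess[lead] = row[n]
    return guess

print(gaussian_elimination_learner(2 * n) == x_secret)  # True w.h.p.
```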

    Two Structural Results for Low Degree Polynomials and Applications

    In this paper, two structural results concerning low degree polynomials over finite fields are given. The first states that over any finite field $\mathbb{F}$, for any polynomial $f$ on $n$ variables with degree $d \le \log(n)/10$, there exists a subspace of $\mathbb{F}^n$ with dimension $\Omega(d \cdot n^{1/(d-1)})$ on which $f$ is constant. This result is shown to be tight. Stated differently, a degree $d$ polynomial cannot compute an affine disperser for dimension smaller than $\Omega(d \cdot n^{1/(d-1)})$. Using a recursive argument, we obtain our second structural result, showing that any degree $d$ polynomial $f$ induces a partition of $\mathbb{F}^n$ into affine subspaces of dimension $\Omega(n^{1/(d-1)!})$, such that $f$ is constant on each part. We extend both structural results to more than one polynomial. We further prove an analog of the first structural result for sparse polynomials (with no restriction on the degree) and for functions that are close to low degree polynomials. We also consider the algorithmic aspects of the two structural results. Our structural results have various applications, two of which are:
    * Dvir [CC 2012] introduced the notion of extractors for varieties, and gave explicit constructions of such extractors over large fields. We show that over any finite field, any affine extractor is also an extractor for varieties with related parameters. Our reduction also holds for dispersers, and we conclude that Shaltiel's affine disperser [FOCS 2011] is a disperser for varieties over $\mathbb{F}_2$.
    * Ben-Sasson and Kopparty [SIAM J. Comput. 2012] proved that any degree 3 affine disperser over a prime field is also an affine extractor with related parameters. Using our structural results, and based on the work of Kaufman and Lovett [FOCS 2008] and Haramaty and Shpilka [STOC 2010], we generalize this result to any constant degree.
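    A toy exhaustive-search sketch (illustrative only, not the paper's argument) of what the first structural result asserts in the case $d = 2$: every quadratic polynomial over $\mathbb{F}_2$ on $n = 5$ variables is constant on some affine subspace of dimension 2. The polynomial is drawn at random, and $n = 5$ is chosen only to keep the search fast.

```python
import itertools
import random

n = 5

# A random polynomial of degree at most 2 over F_2 on n variables (a demo input,
# not a construction from the paper).
quad = {(i, j): random.randint(0, 1) for i, j in itertools.combinations(range(n), 2)}
lin = [random.randint(0, 1) for _ in range(n)]

def f(x):
    val = sum(c * x[i] * x[j] for (i, j), c in quad.items())
    val += sum(c * xi for c, xi in zip(lin, x))
    return val % 2

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

points = list(itertools.product([0, 1], repeat=n))  # points[0] is the zero vector

def find_constant_2d_subspace():
    """Exhaustively search for a + span{u, v} of dimension 2 on which f is constant."""
    # Distinct nonzero vectors over F_2 are automatically linearly independent.
    for u, v in itertools.combinations(points[1:], 2):
        for a in points:
            coset = [a, add(a, u), add(a, v), add(a, add(u, v))]
            if len({f(p) for p in coset}) == 1:
                return a, u, v, f(a)
    return None  # should not happen for d = 2 at this n

print(find_constant_2d_subspace())
```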

    Random low degree polynomials are hard to approximate (Electronic Colloquium on Computational Complexity, Report No. 80, 2008)

    We study the problem of how well a typical multivariate polynomial can be approximated by lower degree polynomials over F2. We prove that, with very high probability, a random degree d polynomial has only an exponentially small correlation with all polynomials of degree d − 1, for all degrees d up to Θ(n). That is, a random degree d polynomial does not admit good approximations of lesser degree. In order to prove this, we prove far tail estimates on the distribution of the bias of a random low degree polynomial. As part of the proof, we also prove tight lower bounds on the dimension of truncated Reed–Muller codes.
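    An illustrative sketch (assumptions mine; the paper's proof is analytic, not computational) that estimates the bias $\mathrm{bias}(f) = |\mathbb{E}_x[(-1)^{f(x)}]|$ of random polynomials of degree at most $d$ over $\mathbb{F}_2$ by exhaustive evaluation at small $n$; the tail estimates in the paper say this quantity is exponentially small with very high probability.

```python
import itertools
import random

def random_degree_d_poly(n, d):
    """A random polynomial over F_2: a random coefficient for every monomial of degree <= d."""
    coeffs = {S: random.randint(0, 1)
              for k in range(d + 1)
              for S in itertools.combinations(range(n), k)}
    def f(x):
        return sum(c * all(x[i] for i in S) for S, c in coeffs.items()) % 2
    return f

def bias(f, n):
    """bias(f) = |E_x (-1)^f(x)|, computed exhaustively over all 2^n inputs (small n only)."""
    total = sum((-1) ** f(x) for x in itertools.product([0, 1], repeat=n))
    return abs(total) / 2 ** n

n, d, trials = 10, 3, 10
print(max(bias(random_degree_d_poly(n, d), n) for _ in range(trials)))
# Typically a small value, illustrating that a random low degree polynomial
# is nearly unbiased; the paper quantifies how rarely the bias is large.
```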