
    A Nearly Optimal Lower Bound on the Approximate Degree of AC^0

    The approximate degree of a Boolean function f : {-1,1}^n -> {-1,1} is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. We introduce a generic method for increasing the approximate degree of a given function, while preserving its computability by constant-depth circuits. Specifically, we show how to transform any Boolean function f with approximate degree d into a function F on O(n · polylog(n)) variables with approximate degree at least D = Ω(n^{1/3} · d^{2/3}). In particular, if d = n^{1-Ω(1)}, then D is polynomially larger than d. Moreover, if f is computed by a polynomial-size Boolean circuit of constant depth, then so is F. By recursively applying our transformation, for any constant δ > 0 we exhibit an AC^0 function of approximate degree Ω(n^{1-δ}). This improves over the best previous lower bound of Ω(n^{2/3}) due to Aaronson and Shi (J. ACM 2004), and nearly matches the trivial upper bound of n that holds for any function. Our lower bounds also apply to (quasipolynomial-size) DNFs of polylogarithmic width. We describe several applications of these results. We give:
    * For any constant δ > 0, an Ω(n^{1-δ}) lower bound on the quantum communication complexity of a function in AC^0.
    * A Boolean function f with approximate degree at least C(f)^{2-o(1)}, where C(f) is the certificate complexity of f. This separation is optimal up to the o(1) term in the exponent.
    * Improved secret sharing schemes with reconstruction procedures in AC^0.
    Comment: 40 pages, 1 figure.
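
    For reference, the approximate degree discussed in this abstract is standardly written as below; this is a conventional restatement in LaTeX, and the notation \widetilde{\deg}_{1/3} is an assumption of this note rather than something taken from the listing.

        \[
          \widetilde{\deg}_{1/3}(f) \;=\; \min\Bigl\{ \deg p \;:\; p \in \mathbb{R}[x_1,\dots,x_n],\;
              \max_{x \in \{-1,1\}^n} \lvert p(x) - f(x) \rvert \le \tfrac{1}{3} \Bigr\}
        \]
        % In this notation, the amplification result above says: if \widetilde{\deg}_{1/3}(f) \ge d,
        % then the transformed function F satisfies \widetilde{\deg}_{1/3}(F) \ge D = \Omega(n^{1/3} d^{2/3}).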

    Lower Bounds for the Approximate Degree of Block-Composed Functions

    We describe a new hardness amplification result for point-wise approximation of Boolean functions by low-degree polynomials. Specifically, for any function f on N bits, define F(x_1,...,x_M) = OMB(f(x_1),...,f(x_M)) to be the function on M*N bits obtained by block-composing f with a function known as ODD-MAX-BIT. We show that, if f requires large degree to approximate to error 2/3 in a certain one-sided sense (captured by a complexity measure known as positive one-sided approximate degree), then F requires large degree to approximate even to error 1-2^{-M}. This generalizes a result of Beigel (Computational Complexity, 1994), who proved an identical result for the special case f=OR. Unlike related prior work, our result implies strong approximate degree lower bounds even for many functions F that have low threshold degree. Our proof is constructive: we exhibit a solution to the dual of an appropriate linear program capturing the approximate degree of any function. We describe several applications, including improved separations between the complexity classes P^{NP} and PP in both the query and communication complexity settings. Our separations improve on work of Beigel (1994) and Buhrman, Vereshchagin, and de Wolf (CCC, 2007).
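
    For context, ODD-MAX-BIT can be written as follows under one common convention; this is a LaTeX sketch, and since index and sign conventions vary across papers, the exact form here is an assumption.

        \[
          \mathrm{OMB}_M(z_1, \dots, z_M) \;=\;
            \begin{cases}
              1  & \text{if } \max\{\, i : z_i = 1 \,\} \text{ exists and is odd},\\
              -1 & \text{otherwise,}
            \end{cases}
        \]
        % so the block composition from the abstract is
        % F(x_1, \dots, x_M) = \mathrm{OMB}_M\bigl(f(x_1), \dots, f(x_M)\bigr), a function on M \cdot N bits.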

    Conditional Sparse Linear Regression

    Machine learning and statistics typically focus on building models that capture the vast majority of the data, possibly ignoring a small subset of data as "noise" or "outliers." By contrast, here we consider the problem of jointly identifying a significant (but perhaps small) segment of a population in which there is a highly sparse linear regression fit, together with the coefficients for that fit. We contend that such tasks are of interest both because the models themselves may achieve better predictions in such special cases, and because they may aid our understanding of the data. We give algorithms for such problems under the sup norm, when the unknown segment of the population is described by a k-DNF condition and the regression fit is s-sparse, for constant k and s. For the variants of this problem in which the regression fit is not so sparse or the expected error is used instead, we also give a preliminary algorithm and highlight the question as a challenge for future work.
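
    To make the objective concrete, here is a minimal brute-force sketch in Python. It is not an algorithm from the paper: it searches only single conjunctions (terms of a k-DNF, not full k-DNFs) and s-sparse supports, solving each sup-norm fit as a small linear program; the helper names and the use of numpy/scipy are assumptions of this sketch.

        # Toy brute-force illustration of conditional sparse (sup-norm) regression.
        # NOT the paper's algorithm: it enumerates small conjunctions and s-sparse
        # supports, and solves each minimax fit as a linear program.
        from itertools import combinations
        import numpy as np
        from scipy.optimize import linprog

        def sup_norm_fit(X, y):
            """Minimize max_i |(X w - y)_i| over w via an LP; return (w, error)."""
            m, d = X.shape
            # Variables: [w (d entries), t]; minimize t subject to |Xw - y| <= t.
            c = np.r_[np.zeros(d), 1.0]
            A_ub = np.block([[X, -np.ones((m, 1))], [-X, -np.ones((m, 1))]])
            b_ub = np.r_[y, -y]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] * d + [(0, None)])
            return res.x[:d], res.x[-1]

        def best_conditional_fit(B, X, y, k=2, s=1, min_frac=0.1):
            """Search conjunctions of <= k Boolean attributes (columns of B) and
            s-sparse supports (columns of X); among conditions covering at least a
            min_frac fraction of the points, return the lowest sup-norm error found."""
            n, b = B.shape
            _, d = X.shape
            best = None
            for size in range(1, k + 1):
                for attrs in combinations(range(b), size):
                    mask = B[:, list(attrs)].all(axis=1)  # points satisfying the conjunction
                    if mask.sum() < min_frac * n:
                        continue
                    for support in combinations(range(d), s):
                        w, err = sup_norm_fit(X[np.ix_(mask, support)], y[mask])
                        if best is None or err < best[0]:
                            best = (err, attrs, support, w)
            return best

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            B = rng.integers(0, 2, size=(200, 5)).astype(bool)      # Boolean attributes
            X = rng.normal(size=(200, 4))                           # real-valued features
            y = rng.normal(size=200)
            cond = B[:, 0] & B[:, 2]
            y[cond] = 3.0 * X[cond, 1]                              # planted conditional fit
            print(best_conditional_fit(B, X, y, k=2, s=1))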

    Efficient Learning with Arbitrary Covariate Shift

    We give an efficient algorithm for learning a binary function in a given class C of bounded VC dimension, with training data distributed according to P and test data according to Q, where P and Q may be arbitrary distributions over X. This is the generic form of what is called covariate shift, which is impossible in general as arbitrary P and Q may not even overlap. However, recently guarantees were given in a model called PQ-learning (Goldwasser et al., 2020) where the learner has: (a) access to unlabeled test examples from Q (in addition to labeled samples from P, i.e., semi-supervised learning); and (b) the option to reject any example and abstain from classifying it (i.e., selective classification). The algorithm of Goldwasser et al. (2020) requires an (agnostic) noise-tolerant learner for C. The present work gives a polynomial-time PQ-learning algorithm that uses an oracle to a "reliable" learner for C, where reliable learning (Kalai et al., 2012) is a model of learning with one-sided noise. Furthermore, our reduction is optimal in the sense that we show the equivalence of reliable and PQ learning.
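
    The following is a toy Python sketch of the setup the abstract describes (labeled samples from P, unlabeled samples from Q, and the option to abstain). It uses a generic train-vs-test discriminator as the rejection rule, which is only an illustrative heuristic and not the paper's reduction to reliable learning; scikit-learn and all helper names are assumptions of this sketch.

        # Toy illustration of the PQ-learning setting: labeled data from P,
        # unlabeled data from Q, and the option to abstain (selective classification).
        # Generic heuristic, not the paper's reduction to reliable learning.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def fit_selective_classifier(Xp, yp, Xq, abstain_threshold=0.7):
            """Return predict(X) -> array over {0, 1, 'abstain'} using two models:
            one for the labels (trained on P) and one discriminating P from Q."""
            clf = LogisticRegression(max_iter=1000).fit(Xp, yp)
            # Discriminator: label 0 = drawn from P (train), 1 = drawn from Q (test).
            disc = LogisticRegression(max_iter=1000).fit(
                np.vstack([Xp, Xq]),
                np.r_[np.zeros(len(Xp)), np.ones(len(Xq))],
            )
            def predict(X):
                p_test_like = disc.predict_proba(X)[:, 1]
                labels = clf.predict(X).astype(object)
                labels[p_test_like > abstain_threshold] = "abstain"  # far from P: abstain
                return labels
            return predict

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            Xp = rng.normal(0.0, 1.0, size=(300, 2))                 # training distribution P
            yp = (Xp[:, 0] + Xp[:, 1] > 0).astype(int)
            Xq = np.vstack([rng.normal(0.0, 1.0, size=(150, 2)),
                            rng.normal(4.0, 1.0, size=(150, 2))])    # Q partly disjoint from P
            predict = fit_selective_classifier(Xp, yp, Xq)
            preds = predict(Xq)
            print("abstention rate on Q:", np.mean(preds == "abstain"))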

    CSP-Completeness And Its Applications

    We build on previous ideas used to study both reductions between CSP-refutation problems and improper learning, and reductions between CSP-refutation problems themselves, in order to extend hardness results that rest on the assumption that refuting random CSP instances is hard for certain choices of predicates (such as k-SAT). First, we argue the hardness of the fundamental problem of learning conjunctions in a one-sided PAC-style learning model that has appeared in several forms over the years. In this model, the goal is to produce a hypothesis that foremost guarantees a small false-positive rate while minimizing the false-negative rate among such hypotheses. Further, we formalize a notion of CSP-refutation reductions and CSP-refutation completeness, and use these, along with candidate CSP-refutation-complete predicates, to provide further evidence for the hardness of several problems.
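
    As a small illustration of the one-sided criterion mentioned above (prefer hypotheses with small false-positive rate, then minimize false negatives), here is a toy Python sketch; the brute-force search and all names are assumptions of this note, not an algorithm from the thesis.

        # Toy illustration of the one-sided criterion: among conjunction hypotheses
        # with small false-positive rate, prefer the one with fewest false negatives.
        from itertools import combinations
        import numpy as np

        def rates(h, y):
            """False-positive and false-negative rates of predictions h vs labels y."""
            fp = np.mean(h & ~y)    # predicted positive, actually negative
            fn = np.mean(~h & y)    # predicted negative, actually positive
            return fp, fn

        def best_one_sided_conjunction(X, y, max_size=2, fp_budget=0.01):
            """Search conjunctions of up to max_size literals (columns of Boolean X);
            keep those with false-positive rate <= fp_budget, minimize false negatives."""
            n, d = X.shape
            best = None
            for size in range(1, max_size + 1):
                for attrs in combinations(range(d), size):
                    h = X[:, list(attrs)].all(axis=1)
                    fp, fn = rates(h, y)
                    if fp <= fp_budget and (best is None or fn < best[0]):
                        best = (fn, fp, attrs)
            return best

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            X = rng.integers(0, 2, size=(500, 6)).astype(bool)
            y = X[:, 0] & X[:, 3]                    # planted target conjunction
            y ^= rng.random(500) < 0.05              # a little label noise
            print(best_one_sided_conjunction(X, y))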