
    Average-Case Lower Bounds for Noisy Boolean Decision Trees

    We present a new method for deriving lower bounds on the expected number of queries made by noisy decision trees computing Boolean functions. The new method has the feature that expectations are taken with respect to a uniformly distributed random input, as well as with respect to the random noise, thus yielding stronger lower bounds. It also applies to many more functions than do previous results. The method yields a simple proof of the result (previously established by Reischuk and Schmeltz) that almost all Boolean functions of $n$ arguments require $\Omega(n \log n)$ queries, and strengthens this bound from the worst case over inputs to the average over inputs. The method also yields bounds for specific Boolean functions in terms of their spectra (their Fourier transforms). The simplest instance of this spectral bound yields the result (previously established by Feige, Peleg, Raghavan, and Upfal) that the parity function of $n$ arguments requires $\Omega(n \log n)$ queries, and again strengthens this bound from the worst case over inputs to the average over inputs. In its full generality, the spectral bound applies to the highly resilient functions introduced by Chor, Friedman, Goldreich, Håstad, Rudich, and Smolensky, and it yields nonlinear lower bounds whenever the resiliency is asymptotic to the number of arguments.
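
    The "spectra" here are Walsh-Fourier transforms: for $f : \{-1,1\}^n \to \{-1,1\}$ and a set $S$ of coordinates, the coefficient is $\hat{f}(S) = \mathbb{E}_x[f(x)\,\chi_S(x)]$ with $\chi_S(x) = \prod_{i \in S} x_i$ and $x$ uniform. A function is $t$-resilient exactly when $\hat{f}(S) = 0$ for every $S$ with $|S| \le t$; parity of $n$ bits is $(n-1)$-resilient, the extreme case in which the spectral bound gives its strongest conclusions. As a minimal illustration of these definitions for small $n$ (a brute-force sketch, not the paper's method; the function names are ours):

    from itertools import product
    from math import prod

    def fourier_coefficient(f, n, S):
        """Walsh-Fourier coefficient f_hat(S) = E_x[f(x) * chi_S(x)]
        for f: {-1,1}^n -> {-1,1}, x uniform, chi_S(x) = prod_{i in S} x_i."""
        return sum(f(x) * prod(x[i] for i in S)
                   for x in product((-1, 1), repeat=n)) / 2 ** n

    n = 4
    parity = lambda x: prod(x)  # parity of n bits in the +/-1 convention

    # Parity's entire Fourier weight sits on the full set {0, ..., n-1},
    # so every lower-level coefficient vanishes: parity is (n-1)-resilient.
    print(fourier_coefficient(parity, n, range(n)))  # 1.0
    print(fourier_coefficient(parity, n, [0, 1]))    # 0.0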

    Noisy Computing of the $\mathsf{OR}$ and $\mathsf{MAX}$ Functions

    We consider the problem of computing a function of $n$ variables using noisy queries, where each query is incorrect with some fixed and known probability $p \in (0,1/2)$. Specifically, we consider the computation of the $\mathsf{OR}$ function of $n$ bits (where queries correspond to noisy readings of the bits) and the $\mathsf{MAX}$ function of $n$ real numbers (where queries correspond to noisy pairwise comparisons). We show that an expected number of queries of $(1 \pm o(1)) \frac{n \log \frac{1}{\delta}}{D_{\mathsf{KL}}(p \,\|\, 1-p)}$ is both sufficient and necessary to compute both functions with a vanishing error probability $\delta = o(1)$, where $D_{\mathsf{KL}}(p \,\|\, 1-p)$ denotes the Kullback-Leibler divergence between $\mathsf{Bern}(p)$ and $\mathsf{Bern}(1-p)$ distributions. Compared to previous work, our results tighten the dependence on $p$ in both the upper and lower bounds for the two functions.
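
    For concreteness, the leading term of this bound is easy to evaluate numerically; the sketch below (plain Python; the helper names are illustrative, not from the paper) computes $D_{\mathsf{KL}}(p \,\|\, 1-p)$ and the resulting query count. Any logarithm base works provided the divergence and $\log \frac{1}{\delta}$ use the same base, since the base cancels in the ratio.

    import math

    def kl_bern(p, q):
        """Kullback-Leibler divergence D_KL(Bern(p) || Bern(q)), in nats."""
        return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

    def expected_queries(n, p, delta):
        """Leading-order term n * log(1/delta) / D_KL(p || 1-p) of the bound."""
        return n * math.log(1 / delta) / kl_bern(p, 1 - p)

    # Example: n = 1000 bits, crossover probability p = 0.1, target error 1e-3
    # gives roughly 3.9 * 10^3 expected queries.
    print(expected_queries(1000, 0.1, 1e-3))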
