
    A polynomial lower bound for testing monotonicity

    We show that every algorithm for testing n-variate Boolean functions for monotonicity has query complexity Omega(n^{1/4}). All previous lower bounds for this problem were designed for nonadaptive algorithms and, as a result, the best previous lower bound for general (possibly adaptive) monotonicity testers was only Omega(log n). Combined with the query complexity of the non-adaptive monotonicity tester of Khot, Minzer, and Safra (FOCS 2015), our lower bound shows that adaptivity can result in at most a quadratic reduction in the query complexity for testing monotonicity. By contrast, we show that there is an exponential gap between the query complexity of adaptive and non-adaptive algorithms for testing regular linear threshold functions (LTFs) for monotonicity. Chen, De, Servedio, and Tan (STOC 2015) recently showed that non-adaptive algorithms require almost Omega(n^{1/2}) queries for this task. We introduce a new adaptive monotonicity testing algorithm which has query complexity O(log n) when the input is a regular LTF.
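
    For context, the kind of nonadaptive tester these bounds are compared against can be sketched in a few lines. The sketch below is a minimal illustration of the classic "edge tester" (sample random hypercube edges and look for a violated edge), assuming oracle access to f as a Python callable; the function and parameter names are ours, and this is not the Khot-Minzer-Safra tester itself.

    import random

    def edge_tester(f, n, num_edges=200):
        """Sample random hypercube edges (x, x with one bit flipped) and
        reject if any edge violates monotonicity, i.e. setting a
        coordinate from 0 to 1 decreases f."""
        for _ in range(num_edges):
            x = [random.randint(0, 1) for _ in range(n)]
            i = random.randrange(n)
            lo, hi = list(x), list(x)
            lo[i], hi[i] = 0, 1          # lo <= hi coordinatewise
            if f(lo) > f(hi):
                return False             # monotonicity violation found
        return True                      # no violation seen: accept

    # The AND function is monotone, so the edge tester always accepts it.
    print(edge_tester(lambda x: int(all(x)), n=8))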

    An ~O(n) Queries Adaptive Tester for Unateness

    We present an adaptive tester for the unateness property of Boolean functions. Given a function f:{0,1}^n -> {0,1}, the tester makes O(n log(n)/epsilon) adaptive queries to the function. The tester always accepts a unate function and rejects with probability at least 0.9 if the function is epsilon-far from being unate.
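
    To make the property concrete: f is unate if it is monotone after reorienting (possibly flipping) some of its coordinates, or equivalently if no coordinate has both an edge along which f increases and an edge along which f decreases. A brute-force check of this definition for small n, assuming f is given as a Python callable, is sketched below; it enumerates the whole cube and is illustrative only, not the adaptive tester described above.

    from itertools import product

    def is_unate(f, n):
        """Exhaustively check, for each coordinate i, whether f has both an
        'increasing' i-edge and a 'decreasing' i-edge; f is unate iff no
        coordinate has both."""
        for i in range(n):
            saw_increase = saw_decrease = False
            for x in product((0, 1), repeat=n):
                if x[i] == 1:
                    continue                        # visit each i-edge once
                y = x[:i] + (1,) + x[i + 1:]        # x with coordinate i set
                if f(x) < f(y):
                    saw_increase = True
                elif f(x) > f(y):
                    saw_decrease = True
            if saw_increase and saw_decrease:
                return False
        return True

    # x1 AND (NOT x2) is unate (monotone after flipping x2) but not monotone.
    print(is_unate(lambda x: x[0] & (1 - x[1]), n=2))   # True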

    Approximating the Noise Sensitivity of a Monotone Boolean Function

    The noise sensitivity of a Boolean function f: {0,1}^n -> {0,1} is one of its fundamental properties. For noise parameter delta, the noise sensitivity is denoted NS_{delta}[f]. This quantity is defined as follows: first, pick x = (x_1,...,x_n) uniformly at random from {0,1}^n, then pick z by flipping each x_i independently with probability delta. NS_{delta}[f] is defined to equal Pr[f(x) != f(z)]. Much of the existing literature on noise sensitivity explores the following two directions: (1) showing that functions with low noise sensitivity are structured in certain ways, and (2) showing mathematically that certain classes of functions have low noise sensitivity. Combined, these two research directions show that certain classes of functions have low noise sensitivity and therefore have useful structure. The fundamental importance of noise sensitivity, together with this wealth of structural results, motivates the algorithmic question of approximating NS_{delta}[f] given oracle access to the function f. We show that the standard sampling approach is essentially optimal for general Boolean functions. Therefore, we focus on estimating the noise sensitivity of monotone functions, which form an important subclass of Boolean functions, since many functions of interest are either monotone or can be simply transformed into a monotone function (for example, the class of unate functions consists of all the functions that can be made monotone by reorienting some of their coordinates [O'Donnell, 2014]). Specifically, we study the algorithmic problem of approximating NS_{delta}[f] for monotone f, given the promise that NS_{delta}[f] >= 1/n^{C} for a constant C, and for delta in the range 1/n <= delta <= 1/2. For such f and delta, we give a randomized algorithm performing O(min(1, sqrt{n} delta log^{1.5} n)/NS_{delta}[f] * poly(1/epsilon)) queries and approximating NS_{delta}[f] to within a multiplicative factor of (1 +/- epsilon). Given the same constraints on f and delta, we also prove a lower bound of Omega(min(1, sqrt{n} delta)/(NS_{delta}[f] * n^{xi})) on the query complexity of any algorithm that approximates NS_{delta}[f] to within any constant factor, where xi can be any positive constant. Thus, our algorithm's query complexity is close to optimal in terms of its dependence on n. We introduce a novel descending-ascending view of noise sensitivity and use it as a central tool for the analysis of our algorithm. To prove lower bounds on query complexity, we develop a technique that reduces computational questions about query complexity to combinatorial questions about the existence of "thin" functions with certain properties. The existence of such "thin" functions is proved using the probabilistic method. These techniques also yield new lower bounds on the query complexity of approximating other fundamental properties of Boolean functions: the total influence and the bias.
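
    The "standard sampling approach" mentioned above is simple to write down. The sketch below assumes oracle access to f as a Python callable and uses names of our own choosing; it is the baseline estimator, not the paper's algorithm for monotone functions.

    import random

    def estimate_noise_sensitivity(f, n, delta, num_samples=10000):
        """Empirically estimate NS_delta[f]: draw x uniformly from {0,1}^n,
        obtain z by flipping each bit of x independently with probability
        delta, and return the fraction of samples with f(x) != f(z)."""
        disagreements = 0
        for _ in range(num_samples):
            x = [random.randint(0, 1) for _ in range(n)]
            z = [b ^ 1 if random.random() < delta else b for b in x]
            if f(x) != f(z):
                disagreements += 1
        return disagreements / num_samples

    # Example: majority on 9 bits with noise parameter delta = 0.1.
    majority = lambda x: int(sum(x) > len(x) / 2)
    print(estimate_noise_sensitivity(majority, n=9, delta=0.1))

    By a standard Chernoff argument, this estimator needs on the order of 1/(epsilon^2 NS_{delta}[f]) samples for a (1 +/- epsilon) approximation, which is the baseline the paper's monotone-function algorithm improves on.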

    Adaptive Lower Bound for Testing Monotonicity on the Line

    In the property testing model, the task is to distinguish objects possessing some property from objects that are far from it. One such property is monotonicity, where the objects are functions from one poset to another. This is an active area of research. In this paper we study the query complexity of epsilon-testing monotonicity of a function f : [n] -> [r]. All our lower bounds are for adaptive two-sided testers. - We prove a nearly tight lower bound for this problem in terms of r. The bound is Omega((log r)/(log log r)) when epsilon = 1/2. No previous satisfactory lower bound in terms of r was known. - We completely characterise the query complexity of this problem in terms of n for smaller values of epsilon. The complexity is Theta(epsilon^{-1} log(epsilon n)). Apart from giving the lower bound, this improves on the best known upper bound. Finally, we give an alternative proof of the Omega(epsilon^{-1} d log n - epsilon^{-1} log epsilon^{-1}) lower bound for testing monotonicity on the hypergrid [n]^d due to Chakrabarty and Seshadhri (RANDOM '13).
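
    For background, a well-known tester for monotonicity on the line (sortedness) is the binary-search spot-checker, which uses O(log n) queries per repetition. The sketch below assumes f is given as a Python callable and breaks ties by comparing (value, index) pairs so that values need not be distinct; it is offered as context for the upper bounds discussed above, not as any specific algorithm from the paper.

    import random

    def spot_check_sorted(f, n, repetitions=50):
        """Pick a random index i and binary-search for its key (f(i), i) as
        if f were sorted; any search that fails to land on i witnesses a
        violation of monotonicity, so the function is rejected."""
        for _ in range(repetitions):
            i = random.randrange(n)
            key = (f(i), i)
            lo, hi = 0, n - 1
            found = False
            while lo <= hi:
                mid = (lo + hi) // 2
                probe = (f(mid), mid)
                if probe == key:
                    found = True
                    break
                elif probe < key:
                    lo = mid + 1
                else:
                    hi = mid - 1
            if not found:
                return False     # inconsistent search path: reject
        return True              # all searches consistent: accept

    # A sorted array is always accepted.
    values = sorted(random.randint(0, 100) for _ in range(64))
    print(spot_check_sorted(lambda i: values[i], n=64))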

    Optimal Unateness Testers for Real-Valued Functions: Adaptivity Helps

    We study the problem of testing unateness of functions f:{0,1}^d -> R. We give an O((d/epsilon) log(d/epsilon))-query nonadaptive tester and an O(d/epsilon)-query adaptive tester and show that both testers are optimal for a fixed distance parameter epsilon. Previously known unateness testers worked only for Boolean functions, and their query complexity had worse dependence on the dimension in both the adaptive and the nonadaptive case. Moreover, no lower bounds for testing unateness were known. We generalize our results to obtain optimal unateness testers for functions f:[n]^d -> R. Our results establish that adaptivity helps with testing unateness of real-valued functions on domains of the form {0,1}^d and, more generally, [n]^d. This stands in contrast to the situation for monotonicity testing, where there is no adaptivity gap for functions f:[n]^d -> R.

    Almost Optimal Distribution-Free Junta Testing

    We consider the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^n. Chen, Liu, Servedio, Sheng, and Xie [Zhengyang Liu et al., 2018] showed that distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that makes O~(k^2)/epsilon queries. In this paper, we give a simple two-sided error adaptive algorithm that makes O~(k/epsilon) queries.
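
    As a reminder of the property being tested: f is a k-junta if its value depends on at most k of its n variables. A brute-force check of this definition for small n, assuming f is given as a Python callable, is sketched below; it enumerates the whole cube, so it is illustrative only and is unrelated to the query-efficient testers above.

    from itertools import product

    def relevant_variables(f, n):
        """Return the set of coordinates that f actually depends on;
        f is a k-junta iff this set has size at most k."""
        relevant = set()
        for i in range(n):
            for x in product((0, 1), repeat=n):
                y = x[:i] + (1 - x[i],) + x[i + 1:]   # flip coordinate i
                if f(x) != f(y):
                    relevant.add(i)
                    break
            # coordinate i is irrelevant if no flip ever changes f
        return relevant

    # x0 XOR x3 depends on exactly two of its five variables: a 2-junta.
    print(relevant_variables(lambda x: x[0] ^ x[3], n=5))   # {0, 3}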

    Settling the Query Complexity of Non-Adaptive Junta Testing

    We prove that any non-adaptive algorithm that tests whether an unknown Boolean function f is a k-junta or epsilon-far from every k-junta must make ~Omega(k^{3/2}/epsilon) many queries for a wide range of parameters k and epsilon. Our result dramatically improves previous lower bounds from [BGSMdW13, STW15] and is essentially optimal given Blais's non-adaptive junta tester from [Blais08], which makes ~O(k^{3/2})/epsilon queries. Combined with the adaptive tester of [Blais09], which makes O(k log k + k/epsilon) queries, our result shows that adaptivity enables polynomial savings in query complexity for junta testing.