Isoperimetric Inequalities for Real-Valued Functions with Applications to Monotonicity Testing
We generalize the celebrated isoperimetric inequality of Khot, Minzer, and Safra (SICOMP 2018) for Boolean functions to the case of real-valued functions f : {0,1}^d → ℝ. Our main tool in the proof of the generalized inequality is a new Boolean decomposition that represents every real-valued function f over an arbitrary partially ordered domain as a collection of Boolean functions over the same domain, roughly capturing the distance of f to monotonicity and the structure of its violations of monotonicity.
We apply our generalized isoperimetric inequality to improve algorithms for testing monotonicity and approximating the distance to monotonicity for real-valued functions. Our tester for monotonicity has query complexity Õ(min(r√d, d)), where r is the size of the image of the input function. (The best previously known tester makes O(d) queries, as shown by Chakrabarty and Seshadhri (STOC 2013).) Our tester is nonadaptive and has 1-sided error. We prove a matching lower bound for nonadaptive, 1-sided error testers for monotonicity. We also show that the distance to monotonicity of real-valued functions that are α-far from monotone can be approximated nonadaptively within a factor of O(√(d log d)) with query complexity polynomial in 1/α and the dimension d. This query complexity is known to be nearly optimal for nonadaptive algorithms even for the special case of Boolean functions. (The best previously known distance approximation algorithm for real-valued functions, by Fattal and Ron (TALG 2010), achieves O(d log r)-approximation.)
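For context on the query model used above, here is a minimal sketch of the classic nonadaptive, 1-sided error edge tester on the hypercube, the baseline that testers like the one above improve on. This is not the paper's algorithm; the function names and query budget are illustrative.

```python
import random

def edge_tester(f, d, num_edges):
    """One-sided edge tester: reject (return False) only on a witnessed
    violation, i.e. a hypercube edge whose lower endpoint gets a larger
    value than its upper endpoint."""
    for _ in range(num_edges):
        x = [random.randint(0, 1) for _ in range(d)]
        i = random.randrange(d)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1            # endpoints of the i-th edge through x
        if f(tuple(lo)) > f(tuple(hi)):
            return False               # violated edge: f is not monotone
    return True                        # no violation found: accept

# Monotone functions (e.g. Hamming weight) are always accepted:
print(edge_tester(lambda x: sum(x), d=8, num_edges=200))   # True
```

Because the tester rejects only on a witnessed violated edge, its error is 1-sided: monotone functions are never rejected.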
Adaptive Boolean Monotonicity Testing in Total Influence Time
Testing monotonicity of a Boolean function f:{0,1}^n -> {0,1} is an important problem in the field of property testing. It has led to connections with many interesting combinatorial questions on the directed hypercube: routing, random walks, and new isoperimetric theorems. Denoting the proximity parameter by epsilon, the best tester is the non-adaptive O~(epsilon^{-2}sqrt{n}) tester of Khot-Minzer-Safra (FOCS 2015). A series of recent results by Belovs-Blais (STOC 2016) and Chen-Waingarten-Xie (STOC 2017) have led to Omega~(n^{1/3}) lower bounds for adaptive testers. Reducing this gap is a significant question that touches on the role of adaptivity in monotonicity testing of Boolean functions.
We approach this question from the perspective of parametrized property testing, a concept recently introduced by Pallavoor-Raskhodnikova-Varma (ACM TOCT 2017), where one seeks to understand the performance of testers with respect to parameters other than just the input size. Our result is an adaptive monotonicity tester with one-sided error whose query complexity is O(epsilon^{-2} I(f) log^5 n), where I(f) is the total influence of the function. Therefore, adaptivity provably helps monotonicity testing for low-influence functions.
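The parameter I(f) above is the total influence: the sum over coordinates i of the probability that flipping bit i changes f(x). A small exact computation of this quantity (illustrative only; not code from the paper):

```python
from itertools import product

def total_influence(f, n):
    """Exact total influence: sum over i of Pr_x[f(x) != f(x with bit i flipped)],
    computed by brute force over all 2^n points."""
    points = list(product([0, 1], repeat=n))
    total = 0.0
    for i in range(n):
        flips = sum(
            f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for x in points
        )
        total += flips / len(points)   # influence of coordinate i
    return total

# A dictator function x -> x_0 has total influence exactly 1,
# while parity on n bits has total influence n:
print(total_influence(lambda x: x[0], 4))          # 1.0
print(total_influence(lambda x: sum(x) % 2, 4))    # 4.0
```

Dictators illustrate the "low influence" regime where the adaptive tester above beats the sqrt{n}-query bound, while parity sits at the opposite extreme.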
Monotonicity Testing for Boolean Functions over Graph Products
We establish a directed analogue of Chung and Tetali's isoperimetric inequality for graph products. We use this inequality to obtain new bounds on the query complexity for testing monotonicity of Boolean-valued functions over products of general posets
Nearly Optimal Bounds for Sample-Based Testing and Learning of k-Monotone Functions
We study monotonicity testing of functions f : {0,1}^d → [n] using sample-based algorithms, which are only allowed to observe the value of f on points drawn independently from the uniform distribution. A classic result by Bshouty-Tamon (J. ACM 1996) proved that monotone functions can be learned with exp(Õ(min{√d/ε, d})) samples, and it is not hard to show that this bound extends to testing. Prior to our work, the only lower bound for this problem applied in the small-ε parameter regime, due to Goldreich-Goldwasser-Lehman-Ron-Samorodnitsky (Combinatorica 2000). Thus, the sample complexity of monotonicity testing was wide open for larger ε. We resolve this question, obtaining a tight lower bound of exp(Ω(min{√d/ε, d})) for all ε at most a sufficiently small constant. In fact, we prove a much more general result, establishing tight bounds on the sample complexity of k-monotonicity testing and learning for functions f : {0,1}^d → [n]. For testing with one-sided error we show that the sample complexity is exp(Θ(d)). Beyond the hypercube, we prove nearly tight bounds (up to polylog factors of d, k, and 1/ε in the exponent) on the sample complexity of testing and learning measurable k-monotone functions under product distributions. Our upper bound improves upon the previous bound of Harms-Yoshida (ICALP 2022) for Boolean functions
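To contrast with the query model of the earlier abstracts, here is a hedged sketch of the natural sample-based monotonicity tester discussed above: it cannot choose its queries, sees only uniform labeled samples, and rejects only when two samples form a comparable violating pair. Names and sample budget are illustrative, not from the paper.

```python
import random

def sample_based_tester(f, d, num_samples):
    """Draw uniform labeled samples (x, f(x)); reject (return False) iff
    two samples x <= y coordinatewise have f(x) > f(y)."""
    samples = []
    for _ in range(num_samples):
        x = tuple(random.randint(0, 1) for _ in range(d))
        samples.append((x, f(x)))          # the tester never picks x itself
    for x, fx in samples:
        for y, fy in samples:
            if fx > fy and all(a <= b for a, b in zip(x, y)):
                return False               # comparable pair violating monotonicity
    return True

# Monotone functions are never rejected (one-sided behavior):
print(sample_based_tester(lambda x: sum(x), d=6, num_samples=100))   # True
```

The exponential sample bounds in the abstract reflect how rarely independent uniform samples land on comparable pairs in high dimension, which is what makes this model so much harder than query-based testing.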
- …