Nearly Optimal Bounds for Sample-Based Testing and Learning of $k$-Monotone Functions
We study monotonicity testing of functions $f \colon \{0,1\}^d \to \{0,1\}$ using sample-based algorithms, which are only allowed to observe the value of $f$ on points drawn independently from the uniform distribution. A classic result by Bshouty-Tamon (J. ACM 1996) proved that monotone functions can be learned with $\exp(O(\min(\sqrt{d}/\varepsilon,\, d)))$ samples, and it is not hard to show that this bound extends to testing. Prior to our work, the only lower bound for this problem was $\Omega(\sqrt{2^d/\varepsilon})$ in the small-parameter regime, when $\varepsilon = \Theta(d^{-3/2})$, due to Goldreich-Goldwasser-Lehman-Ron-Samorodnitsky (Combinatorica 2000). Thus, the sample complexity of monotonicity testing was wide open for $\varepsilon \gg d^{-3/2}$. We resolve this question, obtaining a tight lower bound of $\exp(\Omega(\min(\sqrt{d}/\varepsilon,\, d)))$ for all $\varepsilon$ at most a sufficiently small constant. In fact, we prove a much more general result, showing that the sample complexity of $k$-monotonicity testing and learning for functions $f \colon \{0,1\}^d \to [r]$ is $\exp(\Theta(\min(rk\sqrt{d}/\varepsilon,\, d)))$. For testing with one-sided error we show that the sample complexity is $\exp(\Theta(d))$.
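To make the access model concrete, here is a minimal Python sketch of a sample-based tester with one-sided error; it is our illustration of the model, not the paper's algorithm, and the function name and parameters are invented for this example. It draws i.i.d. uniform points and rejects only when the sample itself contains a witness of non-monotonicity, so monotone functions are always accepted.

```python
import random

def sample_based_one_sided_tester(f, d, num_samples):
    """Illustrative sample-based tester for monotonicity of f: {0,1}^d -> {0,1}.

    Rejects only if the sample contains a witness: a coordinate-wise
    comparable pair x <= y with f(x) = 1 and f(y) = 0. Monotone functions
    are therefore never rejected (one-sided error).
    """
    # Draw i.i.d. uniform samples and record f's value on each.
    points = [tuple(random.randint(0, 1) for _ in range(d))
              for _ in range(num_samples)]
    labeled = [(x, f(x)) for x in points]

    # Search the sample for a violating comparable pair.
    for x, fx in labeled:
        for y, fy in labeled:
            dominated = all(xi <= yi for xi, yi in zip(x, y))
            if dominated and fx > fy:
                return "reject"  # witnessed a violation of monotonicity
    return "accept"

# Example: a monotone function (majority) is always accepted.
if __name__ == "__main__":
    d = 8
    majority = lambda x: int(sum(x) > d / 2)
    print(sample_based_one_sided_tester(majority, d, num_samples=200))
```

The $\exp(\Theta(d))$ bound above says that testers of exactly this one-sided type cannot avoid sample sizes exponential in $d$.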
Beyond the hypercube, we prove nearly tight bounds (up to polylog factors of $d$, $k$, and $1/\varepsilon$ in the exponent) of $\exp(\tilde{\Theta}(\min(rk\sqrt{d}/\varepsilon,\, d)))$ on the sample complexity of testing and learning measurable $k$-monotone functions in $\mathbb{R}^d$ under product distributions. Our upper bound improves upon the previous bound of $\exp(\tilde{O}(\min(k\sqrt{d}/\varepsilon^2,\, d)))$ by Harms-Yoshida (ICALP 2022) for Boolean functions ($r = 2$).
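For reference, a common definition of $k$-monotonicity from this literature (our paraphrase; the paper's formal definition may differ in details): a function $f \colon \{0,1\}^d \to [r]$ is $k$-monotone if there is no chain $x_1 \prec x_2 \prec \cdots \prec x_{k+1}$ on which the values of $f$ alternate between strict decreases and strict increases beginning with a decrease, i.e., $f(x_1) > f(x_2) < f(x_3) > \cdots$. Setting $k = 1$ and $r = 2$ recovers the usual notion of a monotone Boolean function: no pair $x \prec y$ with $f(x) = 1$ and $f(y) = 0$.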