Approximating the Noise Sensitivity of a Monotone Boolean Function
The noise sensitivity of a Boolean function f: {0,1}^n -> {0,1} is one of its fundamental properties. For noise parameter delta, the noise sensitivity is denoted NS_{delta}[f]. This quantity is defined as follows: first, pick x = (x_1,...,x_n) uniformly at random from {0,1}^n, then pick z by flipping each x_i independently with probability delta. NS_{delta}[f] is defined to equal Pr[f(x) != f(z)]. Much of the existing literature on noise sensitivity explores two directions: (1) showing that functions with low noise sensitivity are structured in certain ways, and (2) showing that certain classes of functions have low noise sensitivity. Combined, these two research directions show that certain classes of functions have low noise sensitivity and therefore have useful structure.
The fundamental importance of noise sensitivity, together with this wealth of structural results, motivates the algorithmic question of approximating NS_{delta}[f] given oracle access to the function f. We show that the standard sampling approach is essentially optimal for general Boolean functions. We therefore focus on estimating the noise sensitivity of monotone functions, an important subclass of Boolean functions, since many functions of interest are either monotone or can easily be transformed into monotone functions (for example, the class of unate functions consists of all functions that can be made monotone by reorienting some of their coordinates [O'Donnell, 2014]).
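The standard sampling approach mentioned above follows directly from the definition; below is a minimal Python sketch (the function names and the 3-bit majority example are illustrative, not the paper's algorithm):

```python
import random

def estimate_noise_sensitivity(f, n, delta, samples=100_000):
    """Estimate NS_delta[f] = Pr[f(x) != f(z)] by direct sampling:
    draw x uniformly from {0,1}^n, then obtain z by flipping each
    bit of x independently with probability delta."""
    disagreements = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        z = [b ^ (random.random() < delta) for b in x]  # flip each bit w.p. delta
        if f(x) != f(z):
            disagreements += 1
    return disagreements / samples

# Example: 3-bit majority, a monotone Boolean function.
def maj3(bits):
    return int(sum(bits) >= 2)
```

For maj3 with delta = 0.1 the true value is (1 - Stab_{0.8}[Maj3])/2 = 0.136, and the estimate concentrates around it. The point of the lower bound for general functions is that when NS_{delta}[f] is small, this estimator needs on the order of 1/NS_{delta}[f] queries before it sees even one disagreement.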
Specifically, we study the algorithmic problem of approximating NS_{delta}[f] for monotone f, given the promise that NS_{delta}[f] >= 1/n^C for a constant C, and for delta in the range 1/n <= delta <= 1/2. For such f and delta, we give a randomized algorithm performing O((min(1, sqrt{n} delta log^{1.5} n) / NS_{delta}[f]) * poly(1/epsilon)) queries and approximating NS_{delta}[f] to within a multiplicative factor of (1 +/- epsilon). Given the same constraints on f and delta, we also prove a lower bound of Omega(min(1, sqrt{n} delta) / (NS_{delta}[f] * n^{xi})) on the query complexity of any algorithm that approximates NS_{delta}[f] to within any constant factor, where xi can be any positive constant. Thus, our algorithm's query complexity is close to optimal in terms of its dependence on n.
We introduce a novel descending-ascending view of noise sensitivity, and use it as a central tool for the analysis of our algorithm. To prove lower bounds on query complexity, we develop a technique that reduces computational questions about query complexity to combinatorial questions about the existence of "thin" functions with certain properties. The existence of such "thin" functions is proved using the probabilistic method. These techniques also yield new lower bounds on the query complexity of approximating other fundamental properties of Boolean functions: the total influence and the bias.
On properties of generalizations of noise sensitivity
In 1999, Benjamini et al. published a paper in which they introduced two
definitions, noise sensitivity and noise stability, as measures of how sensitive
Boolean functions are to noise in their parameters. The parameters were assumed
to be Boolean strings, and the noise consisted of each input bit changing
its value with a small but positive probability. In the three papers appended
to this thesis, we study generalizations of these definitions to irreducible and
reversible Markov chains.
The Correct Exponent for the Gotsman-Linial Conjecture
We prove a new bound on the average sensitivity of polynomial threshold
functions. In particular, we show that a polynomial threshold function of degree
d in at most n variables has average sensitivity at most
sqrt{n} * (log n)^{O(d log d)} * 2^{O(d^2 log d)}. For fixed d, the exponent
of n in this bound is known to be optimal. This bound makes
significant progress towards the Gotsman-Linial Conjecture, which would put the
correct bound at O(d sqrt{n}).
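Average sensitivity (total influence), the quantity bounded in this and the following abstract, can be computed exactly by brute-force enumeration for small n; a minimal Python sketch (the function names are illustrative):

```python
from itertools import product

def average_sensitivity(f, n):
    """Exact average sensitivity: AS[f] = sum_i Pr_x[f(x) != f(x^i)],
    where x^i is x with coordinate i flipped, by enumerating all
    2^n inputs (feasible only for small n)."""
    total = 0
    for x in product([0, 1], repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(tuple(y)) != fx:
                total += 1
    return total / 2 ** n

# Example: 3-bit majority is a degree-1 polynomial threshold function;
# a bit is pivotal exactly when the other two bits disagree, so
# average_sensitivity(maj3, 3) == 1.5.
def maj3(bits):
    return int(sum(bits) >= 2)
```

Such exhaustive computation scales as n * 2^n, which is exactly why structural upper bounds of the kind proved in these papers are needed for large n.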
The Average Sensitivity of an Intersection of Half Spaces
We prove new bounds on the average sensitivity of the indicator function of
an intersection of k halfspaces. In particular, we prove the optimal bound of
O(sqrt{n log k}). This generalizes a result of Nazarov, who proved the
analogous result in the Gaussian case, and improves upon a result of Harsha,
Klivans and Meka. Furthermore, our result has implications for the runtime
required to learn intersections of halfspaces.
Denseness of volatile and nonvolatile sequences of functions
In a recent paper by Jonasson and Steif, definitions describing the
volatility of sequences of Boolean functions were introduced. We continue their study of how these definitions
relate to noise stability and noise sensitivity. Our main results are that the
set of volatile sequences of Boolean functions is, in a natural way, "dense" in the
set of all sequences of Boolean functions, and that the set of non-volatile
Boolean sequences is not "dense" in the set of noise-stable sequences of
Boolean functions.