Approximating the Noise Sensitivity of a Monotone Boolean Function
The noise sensitivity of a Boolean function f: {0,1}^n -> {0,1} is one of its fundamental properties. For noise parameter delta, the noise sensitivity is denoted NS_{delta}[f]. This quantity is defined as follows: first, pick x = (x_1,...,x_n) uniformly at random from {0,1}^n; then pick z by flipping each x_i independently with probability delta. NS_{delta}[f] is defined to equal Pr[f(x) != f(z)]. Much of the existing literature on noise sensitivity explores two directions: (1) showing that functions with low noise sensitivity are structured in certain ways, and (2) showing that certain classes of functions have low noise sensitivity. Combined, these two research directions show that certain classes of functions have low noise sensitivity and therefore possess useful structure.
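The sampling definition above translates directly into a Monte Carlo estimator. A minimal Python sketch, using the majority function as a standard monotone example (all names and parameters here are illustrative, not from the paper):

```python
import random

def noise_sensitivity(f, n, delta, trials=20000, rng=None):
    """Monte Carlo estimate of NS_delta[f]: sample x uniformly from
    {0,1}^n, flip each bit independently with probability delta to get z,
    and count how often f(x) != f(z)."""
    rng = rng or random.Random(0)
    flips = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        z = [b ^ (rng.random() < delta) for b in x]
        if f(x) != f(z):
            flips += 1
    return flips / trials

# Majority on n = 101 bits: a standard monotone Boolean function.
maj = lambda bits: int(sum(bits) > len(bits) // 2)
est = noise_sensitivity(maj, n=101, delta=0.01)
```

For small delta the estimate is roughly delta times the total influence of majority (about sqrt(2n/pi)), so with n = 101 and delta = 0.01 a value on the order of a few percent is expected.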
The fundamental importance of noise sensitivity, together with this wealth of structural results, motivates the algorithmic question of approximating NS_{delta}[f] given oracle access to the function f. We show that the standard sampling approach is essentially optimal for general Boolean functions. We therefore focus on estimating the noise sensitivity of monotone functions, which form an important subclass of Boolean functions, since many functions of interest are either monotone or can be simply transformed into a monotone function (for example, the class of unate functions consists of all functions that can be made monotone by reorienting some of their coordinates [O'Donnell, 2014]).
Specifically, we study the algorithmic problem of approximating NS_{delta}[f] for monotone f, given the promise that NS_{delta}[f] >= 1/n^{C} for a constant C, and for delta in the range 1/n <= delta <= 1/2. For such f and delta, we give a randomized algorithm performing O((min(1, sqrt{n} delta log^{1.5} n)/NS_{delta}[f]) poly(1/epsilon)) queries and approximating NS_{delta}[f] to within a multiplicative factor of (1 +/- epsilon). Given the same constraints on f and delta, we also prove a lower bound of Omega(min(1, sqrt{n} delta)/(NS_{delta}[f] * n^{xi})) on the query complexity of any algorithm that approximates NS_{delta}[f] to within any constant factor, where xi can be any positive constant. Thus, our algorithm's query complexity is close to optimal in terms of its dependence on n.
We introduce a novel descending-ascending view of noise sensitivity and use it as a central tool in the analysis of our algorithm. To prove lower bounds on query complexity, we develop a technique that reduces computational questions about query complexity to combinatorial questions about the existence of "thin" functions with certain properties; the existence of such "thin" functions is proved using the probabilistic method. These techniques also yield new lower bounds on the query complexity of approximating other fundamental properties of Boolean functions: the total influence and the bias.
Noise Sensitivity of Boolean Functions and Applications to Percolation
It is shown that a large class of events in a product probability space are highly sensitive to noise, in the sense that with high probability, the configuration with an arbitrarily small fraction of random errors gives almost no prediction of whether the event occurs. On the other hand, weighted majority functions are shown to be noise-stable. Several necessary and sufficient conditions for noise sensitivity and stability are given.
Consider, for example, bond percolation on an (n+1) x n grid. A configuration is a function that assigns to every edge the value 0 or 1. Let omega be a random configuration, selected according to the uniform measure. A crossing is a path that joins the left and right sides of the rectangle, and consists entirely of edges e with omega(e) = 1. By duality, the probability for having a crossing is 1/2. Fix an epsilon in (0, 1). For each edge e, let omega'(e) = omega(e) with probability 1 - epsilon, and omega'(e) = 1 - omega(e) with probability epsilon, independently of the other edges. Let p(tau) be the probability for having a crossing in omega', conditioned on omega = tau. Then for all n sufficiently large, P{tau : |p(tau) - 1/2| > epsilon} < epsilon.
Comment: To appear in Inst. Hautes Etudes Sci. Publ. Math.
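The crossing event above can be sampled directly. A minimal sketch (grid layout, BFS routine, and all names are illustrative) that estimates the left-right crossing probability at p = 1/2, which the duality argument pins at exactly 1/2:

```python
import random
from collections import deque

def has_crossing(n, p, rng):
    """Open each bond of a grid with (n+1) columns and n rows of
    vertices independently with probability p, then BFS through open
    bonds from the left column and report whether column n is reached."""
    open_h = {}  # bond between (i, j) and (i+1, j)
    open_v = {}  # bond between (i, j) and (i, j+1)
    for i in range(n):
        for j in range(n):
            open_h[(i, j)] = rng.random() < p
    for i in range(n + 1):
        for j in range(n - 1):
            open_v[(i, j)] = rng.random() < p
    seen = {(0, j) for j in range(n)}
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if i == n:
            return True  # reached the right side
        nbrs = []
        if i < n and open_h[(i, j)]:
            nbrs.append((i + 1, j))
        if i > 0 and open_h[(i - 1, j)]:
            nbrs.append((i - 1, j))
        if j < n - 1 and open_v[(i, j)]:
            nbrs.append((i, j + 1))
        if j > 0 and open_v[(i, j - 1)]:
            nbrs.append((i, j - 1))
        for v in nbrs:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

rng = random.Random(1)
trials = 2000
est = sum(has_crossing(6, 0.5, rng) for _ in range(trials)) / trials
```

With 2000 trials the Monte Carlo estimate should land close to 1/2, consistent with the exact duality value.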
Strong noise sensitivity and random graphs
The noise sensitivity of a Boolean function describes its likelihood to flip
under small perturbations of its input. Introduced in the seminal work of
Benjamini, Kalai and Schramm [Inst. Hautes Études Sci. Publ. Math. 90
(1999) 5-43], it was there shown to be governed by the first level of Fourier
coefficients in the central case of monotone functions at a constant critical
probability p. Here we study noise sensitivity and a natural stronger
version of it, addressing the effect of noise given a specific witness in the
original input. Our main context is the Erdős–Rényi random graph, where
already the property of containing a given graph is sufficiently rich to
separate these notions. In particular, our analysis implies (strong) noise
sensitivity in settings where the BKS criterion involving the first Fourier
level does not apply, for example, when p tends to 0 polynomially fast in the
number of variables.
Comment: Published at http://dx.doi.org/10.1214/14-AOP959 in the Annals of
Probability (http://www.imstat.org/aop/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
Differential Privacy of Aggregated DC Optimal Power Flow Data
We consider the problem of privately releasing aggregated network statistics
obtained from solving a DC optimal power flow (OPF) problem. It is shown that
the noise distribution parameters of the mechanism are linked to
the topology of the power system and the monotonicity of the network. We derive
a measure of "almost" monotonicity and show how it can be used in conjunction
with a linear program in order to release aggregated OPF data using the
differential privacy framework.
Comment: Accepted by the 2019 American Control Conference (ACC).
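The abstract does not spell out the release mechanism, so as background, here is a minimal sketch of the standard Laplace mechanism for releasing a single real-valued aggregate under epsilon-differential privacy. The sensitivity value is an assumed input here, not something derived from the OPF problem:

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Standard Laplace mechanism: add Laplace(sensitivity / epsilon)
    noise to a real-valued aggregate before release. The noise scale
    grows with the query's sensitivity and shrinks as the privacy
    budget epsilon grows."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse transform from Uniform(-0.5, 0.5).
    u = rng.random() - 0.5
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return value + noise

# Hypothetical usage: release an aggregate line flow of 100.0 MW with
# assumed sensitivity 1.0 and privacy budget epsilon = 0.5.
rng = random.Random(0)
noisy_total = laplace_mechanism(100.0, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Calibrating the scale to the query's sensitivity is what makes the release epsilon-differentially private; the paper's contribution is tying that calibration to network topology and (almost-)monotonicity.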
Three Puzzles on Mathematics, Computation, and Games
In this lecture I will talk about three mathematical puzzles involving
mathematics and computation that have preoccupied me over the years. The first
puzzle is to understand the amazing success of the simplex algorithm for linear
programming. The second puzzle is about errors made when votes are counted
during elections. The third puzzle is: are quantum computers possible?
Comment: ICM 2018 plenary lecture, Rio de Janeiro, 36 pages, 7 figures.