17 research outputs found

    On Extracting Common Random Bits From Correlated Sources

    Suppose Alice and Bob receive strings of unbiased independent but noisy bits from some random source. They wish to use their respective strings to extract a common sequence of random bits with high probability but without communicating. How many such bits can they extract? The trivial strategy of outputting the first k bits yields an agreement probability of (1-ε)^k < 2^{-1.44kε}, where ε is the amount of noise. We show that no strategy can achieve agreement probability better than 2^{-kε/(1-ε)}. On the other hand, we show that when k ≥ 10 + 2(1-ε)/ε, there exists a strategy which achieves an agreement probability of 0.003(kε)^{-1/2} · 2^{-kε/(1-ε)}.
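
    As a quick plausibility check, here is a minimal Python sketch (not from the paper) that evaluates the three expressions in the abstract: the trivial strategy's agreement probability, the upper bound, and the achievable guarantee, which applies once k ≥ 10 + 2(1-ε)/ε.

```python
def trivial_agreement(k: int, eps: float) -> float:
    """All k output bits must agree; each pair agrees independently w.p. 1 - eps."""
    return (1 - eps) ** k

def upper_bound(k: int, eps: float) -> float:
    """No strategy achieves agreement probability better than 2^{-k eps / (1 - eps)}."""
    return 2 ** (-k * eps / (1 - eps))

def achievable(k: int, eps: float) -> float:
    """Guarantee 0.003 (k eps)^{-1/2} 2^{-k eps/(1-eps)}, valid for k >= 10 + 2(1-eps)/eps."""
    return 0.003 * (k * eps) ** -0.5 * 2 ** (-k * eps / (1 - eps))

k, eps = 100, 0.1  # here 10 + 2(1-eps)/eps = 28 <= k, so the guarantee applies
print(trivial_agreement(k, eps))  # ~2.7e-5
print(upper_bound(k, eps))        # ~4.5e-4
print(achievable(k, eps))         # ~4.3e-7
```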

    Boolean functions: noise stability, non-interactive correlation distillation, and mutual information

    Let $T_\epsilon$ be the noise operator acting on Boolean functions $f:\{0,1\}^n\to\{0,1\}$, where $\epsilon\in[0,1/2]$ is the noise parameter. Given $\alpha>1$ and fixed mean $\mathbb{E}f$, which Boolean function $f$ has the largest $\alpha$-th moment $\mathbb{E}(T_\epsilon f)^\alpha$? This question has close connections with the noise stability of Boolean functions, the problem of non-interactive correlation distillation, and the Courtade-Kumar conjecture on the most informative Boolean function. In this paper, we characterize the maximizers in some extremal settings, such as low noise ($\epsilon=\epsilon(n)$ close to 0), high noise ($\epsilon=\epsilon(n)$ close to $1/2$), as well as when $\alpha=\alpha(n)$ is large. Analogous results are also established in more general contexts, such as Boolean functions defined on the discrete torus $(\mathbb{Z}/p\mathbb{Z})^n$ and the problem of noise stability in a tree model.
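
    The moment $\mathbb{E}(T_\epsilon f)^\alpha$ can be computed exactly by brute force for small $n$. The sketch below assumes the standard definition $(T_\epsilon f)(x) = \mathbb{E}[f(x \oplus z)]$ with the bits of $z$ i.i.d. Bernoulli($\epsilon$); the function names are illustrative only.

```python
from itertools import product

def noise_operator(f, n, eps):
    """(T_eps f)(x) = E[f(x XOR z)], each bit of z set independently w.p. eps."""
    Tf = {}
    for x in product((0, 1), repeat=n):
        val = 0.0
        for z in product((0, 1), repeat=n):
            p = 1.0
            for zi in z:
                p *= eps if zi else 1.0 - eps
            val += p * f(tuple(xi ^ zi for xi, zi in zip(x, z)))
        Tf[x] = val
    return Tf

def alpha_moment(f, n, eps, alpha):
    """E (T_eps f)^alpha over a uniformly random input x."""
    Tf = noise_operator(f, n, eps)
    return sum(v ** alpha for v in Tf.values()) / 2 ** n

# Dictator vs. majority on n = 3 bits: both have mean 1/2.
dictator = lambda x: x[0]
majority = lambda x: int(sum(x) >= 2)
print(alpha_moment(dictator, 3, 0.1, 2.0))
print(alpha_moment(majority, 3, 0.1, 2.0))
```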

    On properties of generalizations of noise sensitivity

    In 1999, Benjamini et al. published a paper in which they introduced two definitions, noise sensitivity and noise stability, as measures of how sensitive Boolean functions are to noise in their parameters. The parameters were assumed to be Boolean strings, and the noise consisted of each input bit changing its value with a small but positive probability. In the three papers appended to this thesis, we study generalizations of these definitions to irreducible and reversible Markov chains.
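
    For concreteness, here is a minimal Monte Carlo sketch of the Boolean-string noise model described above; noise stability is the probability that the function value survives the per-bit flips. (The thesis's Markov-chain generalizations are not shown.)

```python
import random

def noise_stability(f, n, eps, trials=100_000, seed=0):
    """Estimate P[f(x) = f(y)]: x uniform on {0,1}^n, y flips each bit of x w.p. eps."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = [b ^ (rng.random() < eps) for b in x]
        agree += f(x) == f(y)
    return agree / trials

majority = lambda bits: int(2 * sum(bits) > len(bits))
print(noise_stability(majority, n=101, eps=0.01))  # near 1: majority is stable to small noise
```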

    Tight Bounds for Communication-Assisted Agreement Distillation

    Suppose Alice holds a uniformly random string X in {0,1}^N and Bob holds a noisy version Y of X where each bit of X is flipped independently with probability epsilon in [0,1/2]. Alice and Bob would like to extract a common random string of min-entropy at least k. In this work, we establish the communication versus success probability trade-off for this problem by giving a protocol and a matching lower bound (under the restriction that the string to be agreed upon is determined by Alice's input X). Specifically, we prove that in order for Alice and Bob to agree on a common string with probability 2^{-gamma k} (gamma k >= 1), the optimal communication (up to o(k) terms, and achievable for large N) is precisely (C * (1-gamma) - 2 * sqrt{C * (1-C) * gamma}) * k, where C := 4 * epsilon * (1-epsilon). In particular, the optimal communication to achieve Omega(1) agreement probability approaches 4 * epsilon * (1-epsilon) * k. We also consider the case when Y is the output of the binary erasure channel on X, where each bit of Y equals the corresponding bit of X with probability 1-epsilon and is otherwise erased (that is, replaced by a "?"). In this case, the communication required becomes (epsilon * (1-gamma) - 2 * sqrt{epsilon * (1-epsilon) * gamma}) * k. In particular, the optimal communication to achieve Omega(1) agreement probability approaches epsilon * k, and with no communication the optimal agreement probability approaches 2^{-(1-sqrt{1-epsilon})/(1+sqrt{1-epsilon}) * k}. Our protocols are based on covering codes and extend the approach of (Bogdanov and Mossel, 2011) for the zero-communication case. Our lower bounds rely on hypercontractive inequalities. For the model of bit-flips, our argument extends the approach of (Bogdanov and Mossel, 2011) by allowing communication; for the erasure model, to the best of our knowledge the needed hypercontractivity statement was not studied before, and it was established (given our application) by (Nair and Wang, 2015). We also obtain information complexity lower bounds for these tasks, and together with our protocol, they shed light on the recently popular "most informative Boolean function" conjecture of Courtade and Kumar.
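
    The communication trade-offs above are closed-form, so a few lines of Python (a sketch of the formulas only, not of the covering-code protocol) show how the required communication shrinks as the target success probability 2^{-gamma k} is relaxed.

```python
from math import sqrt

def comm_bitflip(k, eps, gamma):
    """Optimal communication (up to o(k)) for agreement w.p. 2^{-gamma k}, bit-flip noise."""
    C = 4 * eps * (1 - eps)
    return (C * (1 - gamma) - 2 * sqrt(C * (1 - C) * gamma)) * k

def comm_erasure(k, eps, gamma):
    """Same trade-off when Y is X through a binary erasure channel with erasure prob. eps."""
    return (eps * (1 - gamma) - 2 * sqrt(eps * (1 - eps) * gamma)) * k

k, eps = 1000, 0.1
print(comm_bitflip(k, eps, 0.0))   # C * k = 360.0 bits for Omega(1) success probability
print(comm_erasure(k, eps, 0.0))   # eps * k = 100.0 bits
print(comm_erasure(k, eps, 0.01))  # 39.0 bits once success probability drops to 2^{-0.01 k}
```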