We prove the first non-trivial (super-linear) lower bound in the noisy broadcast model, defined by El Gamal. In this model there are n + 1 processors P_0, P_1, ..., P_n, each of which is initially given a private input bit x_i. The goal is for P_0 to learn the value of f(x_1, ..., x_n), for some specified function f, using a series of noisy broadcasts. At each step a designated processor broadcasts one bit to all of the other processors, and the bit received by each processor is flipped with a fixed probability (independently for each recipient). In 1988, Gallager gave a noise-resistant protocol that allows P_0 to learn the entire input with constant probability in O(n log log n) broadcasts. We prove that Gallager's protocol is optimal, up to a constant factor. Our lower bound follows by reduction from a lower bound for generalized noisy decision trees, a new model which may be of independent interest. For this new model we show a lower bound of \Omega(n log n) on the depth of a tree that learns the entire input. We als
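For concreteness, a single step of the noisy broadcast model can be sketched in Python. This is an illustrative simulation only; the function name and the noise parameter eps are our notation, not the paper's:

```python
import random

def noisy_broadcast(bit, n, eps, rng=random):
    """One step of the noisy broadcast model: a designated processor
    broadcasts `bit`, and each of the other n processors receives a copy
    that is flipped independently with probability eps."""
    return [bit ^ (rng.random() < eps) for _ in range(n)]

# Example: broadcast the bit 1 to 5 recipients over a channel
# with crossover probability 0.1; each entry of the returned list
# is the (possibly corrupted) bit seen by one recipient.
received = noisy_broadcast(1, 5, 0.1)
```

A full protocol is then a sequence of such steps, where the bit each processor chooses to broadcast may depend on its input bit and on the (noisy) bits it has received so far.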