
    Upper Tail Estimates with Combinatorial Proofs

    We study generalisations of a simple, combinatorial proof of a Chernoff bound similar to the one by Impagliazzo and Kabanets (RANDOM, 2010). In particular, we prove a randomized version of the hitting property of expander random walks and apply it to obtain a concentration bound for expander random walks which is essentially optimal for small deviations and a large number of steps. At the same time, we present a simpler proof that still yields a "right" bound, settling a question asked by Impagliazzo and Kabanets. Next, we obtain a simple upper tail bound for polynomials with input variables in $[0,1]$ which are not necessarily independent, but obey a certain condition inspired by Impagliazzo and Kabanets. The resulting bound is used by Holenstein and Sinha (FOCS, 2012) in the proof of a lower bound for the number of calls in a black-box construction of a pseudorandom generator from a one-way function. We then show that the same technique yields the upper tail bound for the number of copies of a fixed graph in an Erdős-Rényi random graph, matching the one given by Janson, Oleszkiewicz and Ruciński (Israel J. Math, 2002).
    Comment: Full version of the paper from STACS 201
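
    A minimal illustrative sketch (not from the paper) of the phenomenon the abstract's concentration bound describes: a random walk on a random $d$-regular graph, used here as a stand-in for an expander, visits a marked vertex set of density mu roughly a mu fraction of the time. The parameters n, d, mu, steps, and trials are arbitrary illustrative choices.

```python
# Minimal sketch (not from the paper): empirically observe the concentration
# phenomenon that Chernoff bounds for expander random walks describe.
# Assumptions: a random d-regular graph stands in for an expander, and we
# track how often a walk visits a marked vertex set of density mu.
import random
import networkx as nx

random.seed(0)
n, d, mu, steps, trials = 2000, 8, 0.2, 5000, 20
G = nx.random_regular_graph(d, n, seed=0)
marked = set(random.sample(range(n), int(mu * n)))

deviations = []
for _ in range(trials):
    v = random.randrange(n)
    hits = 0
    for _ in range(steps):
        v = random.choice(list(G.neighbors(v)))  # one step of the walk
        hits += v in marked
    deviations.append(abs(hits / steps - mu))

# For a good expander the empirical frequency concentrates around mu almost
# as tightly as for i.i.d. samples, up to a spectral-gap factor.
print("largest deviation from mu over all trials:", max(deviations))
```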

    A Matrix Expander Chernoff Bound

    We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on an expander, confirming a conjecture due to Wigderson and Xiao. Our proof is based on a new multi-matrix extension of the Golden-Thompson inequality, which improves on the inequality of Sutter, Berta, and Tomamichel in some respects and may be of independent interest, as well as an adaptation of an argument for the scalar case due to Healy. Secondarily, we also provide a generic reduction showing that any concentration inequality for vector-valued martingales implies a concentration inequality for the corresponding expander walk, with a weakening of parameters proportional to the squared mixing time.
    Comment: Fixed a minor bug in the proof of Theorem 3.
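
    A hedged numerical sketch of the setting in the abstract: average matrix-valued observations along an expander walk and measure the operator norm of the result. The $d$-regular graph, the matrix dimension, and the zero-mean symmetric matrices assigned to vertices are assumptions made for illustration, not the paper's construction or proof technique.

```python
# Hedged sketch: operator-norm deviation of averaged matrix samples along an
# expander walk.  The d-regular graph and the bounded, zero-mean symmetric
# matrices assigned to vertices are illustrative assumptions only.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, d, k, steps = 1000, 8, 4, 2000      # vertices, degree, matrix dim, walk length
G = nx.random_regular_graph(d, n, seed=0)

# Assign each vertex a bounded symmetric matrix, centred so that the mean
# under the stationary (uniform) distribution is zero.
mats = rng.uniform(-1, 1, size=(n, k, k))
mats = (mats + mats.transpose(0, 2, 1)) / 2
mats -= mats.mean(axis=0)

v = int(rng.integers(n))
acc = np.zeros((k, k))
for _ in range(steps):
    v = rng.choice(list(G.neighbors(v)))  # one step of the expander walk
    acc += mats[v]

# A matrix Chernoff bound controls the probability that this norm is large.
print("operator norm of the empirical mean:", np.linalg.norm(acc / steps, 2))
```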

    Storage and Search in Dynamic Peer-to-Peer Networks

    We study robust and efficient distributed algorithms for searching, storing, and maintaining data in dynamic Peer-to-Peer (P2P) networks. P2P networks are highly dynamic networks that experience heavy node churn (i.e., nodes join and leave the network continuously over time). Our goal is to guarantee, despite a high node churn rate, that a large number of nodes in the network can store, retrieve, and maintain a large number of data items. Our main contributions are fast randomized distributed algorithms that guarantee the above with high probability (whp) even under high adversarial churn:
    1. A randomized distributed search algorithm that (whp) guarantees that searches from as many as $n - o(n)$ nodes ($n$ is the stable network size) succeed in $O(\log n)$ rounds despite $O(n/\log^{1+\delta} n)$ churn per round, for any small constant $\delta > 0$. We assume that the churn is controlled by an oblivious adversary (which has complete knowledge and control of what nodes join and leave and at what time, but is oblivious to the random choices made by the algorithm).
    2. A storage and maintenance algorithm that guarantees (whp) that data items can be efficiently stored (with only $\Theta(\log n)$ copies of each data item) and maintained in a dynamic P2P network with a churn rate of up to $O(n/\log^{1+\delta} n)$ per round.
    Our search algorithm together with our storage and maintenance algorithm guarantees that as many as $n - o(n)$ nodes can efficiently store, maintain, and search even under $O(n/\log^{1+\delta} n)$ churn per round. Our algorithms require only polylogarithmic in $n$ bits to be processed and sent (per round) by each node. To the best of our knowledge, our algorithms are the first-known, fully-distributed storage and search algorithms that provably work under highly dynamic settings (i.e., high churn rates per step).
    Comment: to appear at SPAA 201
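
    A toy sketch, under strong simplifying assumptions, of the maintenance idea the abstract alludes to: keep $\Theta(\log n)$ replicas of each item alive while roughly $n/\log^{1+\delta} n$ nodes are replaced each round. The churn here is random rather than adversarial, the search protocol is omitted, and all parameter values are illustrative; this is not the paper's algorithm.

```python
# Toy sketch (not the paper's algorithm): maintain roughly Theta(log n)
# replicas of each data item while ~n/log^{1+delta}(n) nodes are replaced
# per round.  The "adversary" here is simple random churn.
import math
import random

random.seed(0)
n, delta, rounds, items = 1024, 0.5, 50, 100
target = 3 * int(math.log(n))                    # Theta(log n) replicas per item
churn = int(n / math.log(n) ** (1 + delta))      # nodes replaced per round

alive = set(range(n))
next_id = n
placement = {i: set(random.sample(sorted(alive), target)) for i in range(items)}

for _ in range(rounds):
    leaving = set(random.sample(sorted(alive), churn))
    joining = set(range(next_id, next_id + churn))
    next_id += churn
    alive = (alive - leaving) | joining
    for i in placement:
        placement[i] -= leaving                  # replicas lost to churn
        missing = target - len(placement[i])
        if missing > 0:                          # re-replicate up to the target
            placement[i] |= set(random.sample(sorted(alive - placement[i]), missing))

print("every item still has at least one replica:",
      all(placement[i] for i in placement))
```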

    Chernoff Bound for High-Dimensional Expanders


    Gossip vs. Markov Chains, and Randomness-Efficient Rumor Spreading

    We study gossip algorithms for the rumor spreading problem, which asks one node to deliver a rumor to all nodes in an unknown network. We present the first protocol for any expander graph $G$ with $n$ nodes such that the protocol informs every node in $O(\log n)$ rounds with high probability and uses $\tilde{O}(\log n)$ random bits in total. The runtime of our protocol is tight, and the randomness requirement of $\tilde{O}(\log n)$ random bits almost matches the lower bound of $\Omega(\log n)$ random bits for dense graphs. We further show that, for many graph families, a polylogarithmic number of random bits in total suffices to spread the rumor in $O(\mathrm{poly}\log n)$ rounds. These results together give us an almost complete understanding of the randomness requirement of this fundamental gossip process. Our analysis relies on unexpectedly tight connections among gossip processes, Markov chains, and branching programs. First, we establish a connection between rumor spreading processes and Markov chains, which is used to approximate the rumor spreading time by the mixing time of Markov chains. Second, we show a reduction from rumor spreading processes to branching programs, and this reduction provides a general framework to derandomize gossip processes. In addition to designing rumor spreading protocols, these novel techniques may have applications in studying parallel and multiple random walks, and the randomness complexity of distributed algorithms.
    Comment: 41 pages, 1 figure. arXiv admin note: substantial text overlap with arXiv:1304.135
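
    A minimal sketch of classic push-pull rumor spreading on a random $d$-regular graph (a stand-in for the expanders in the abstract), illustrating only the $O(\log n)$ round count. Every node spends fresh random bits each round, so this says nothing about the paper's randomness-efficient protocol; the parameters n and d are illustrative.

```python
# Minimal sketch: round complexity of classic push-pull rumor spreading on a
# random d-regular graph.  Unlike the paper's protocol, every node uses fresh
# random bits in every round.
import random
import networkx as nx

random.seed(0)
n, d = 4096, 8
G = nx.random_regular_graph(d, n, seed=0)

informed = {0}                                   # the rumor starts at one node
rounds = 0
while len(informed) < n:
    rounds += 1
    nxt = set(informed)
    for v in G.nodes:
        u = random.choice(list(G.neighbors(v)))  # contact a random neighbour
        if v in informed:
            nxt.add(u)                           # push the rumor to u
        elif u in informed:
            nxt.add(v)                           # pull the rumor from u
    informed = nxt

print(f"all {n} nodes informed after {rounds} rounds; log2(n) = {n.bit_length() - 1}")
```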

    On the trace of random walks on random graphs

    We study graph-theoretic properties of the trace of a random walk on a random graph. We show that for any $\varepsilon>0$ there exists $C>1$ such that the trace of the simple random walk of length $(1+\varepsilon)n\ln n$ on the random graph $G\sim G(n,p)$ for $p>C\ln n/n$ is, with high probability, Hamiltonian and $\Theta(\ln n)$-connected. In the special case $p=1$ (i.e., when $G=K_n$), we show a hitting time result according to which, with high probability, exactly one step after the last vertex has been visited the trace becomes Hamiltonian, and one step after the last vertex has been visited for the $k$'th time the trace becomes $2k$-connected.
    Comment: 32 pages, revised version
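
    A small sketch that builds the trace of a random walk of length $(1+\varepsilon)n\ln n$ on $G\sim G(n,p)$ and checks two easy necessary conditions (connectivity and minimum degree) for the properties claimed in the abstract. Hamiltonicity itself is not verified, since that is computationally hard, and the constants n, eps, and C are illustrative choices.

```python
# Minimal sketch: build the trace of a simple random walk of length
# (1+eps)*n*ln(n) on G ~ G(n,p) with p = C*ln(n)/n and check connectivity
# and minimum degree of the trace.  Hamiltonicity is not tested here.
import math
import random
import networkx as nx

random.seed(0)
n, eps, C = 500, 0.5, 5.0
p = C * math.log(n) / n
G = nx.gnp_random_graph(n, p, seed=0)

length = int((1 + eps) * n * math.log(n))
v = random.randrange(n)
trace = nx.Graph()                               # the trace keeps traversed edges
for _ in range(length):
    u = random.choice(list(G.neighbors(v)))
    trace.add_edge(v, u)
    v = u

print("vertices visited:", trace.number_of_nodes(), "of", n)
print("trace connected:", nx.is_connected(trace))
print("min degree in trace: %d  (ln n = %.1f)"
      % (min(deg for _, deg in trace.degree()), math.log(n)))
```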