
    Simple and Optimal Randomized Fault-Tolerant Rumor Spreading

    We revisit the classic problem of spreading a piece of information in a group of $n$ fully connected processors. By suitably adding a small dose of randomness to the protocol of Gasieniec and Pelc (1996), we derive for the first time protocols that (i) use a linear number of messages, (ii) are correct even when an arbitrary number of adversarially chosen processors do not participate in the process, and (iii) with high probability have the asymptotically optimal runtime of $O(\log n)$ when at least an arbitrarily small constant fraction of the processors are working. In addition, our protocols do not require that the system is synchronized nor that all processors are simultaneously woken up at time zero, they are fully based on push operations, and they do not need an a priori estimate of the number of failed nodes. Our protocols thus overcome the typical disadvantages of the two known approaches: algorithms based on random gossip (which typically need a large number of messages due to their unorganized nature) and algorithms based on fair workload splitting (which are either not time-efficient or require intricate preprocessing steps plus synchronization).
    Comment: This is the author-generated version of a paper which is to appear in Distributed Computing, Springer, DOI: 10.1007/s00446-014-0238-z. It is available online from http://link.springer.com/article/10.1007/s00446-014-0238-z. This version contains some new results (Section 6).
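
    For readers unfamiliar with the push primitive this abstract builds on, a minimal simulation sketch follows. It only illustrates plain push-based spreading in a fully connected system with non-participating processors; it is not the paper's message-efficient protocol, and all names and parameters are illustrative assumptions.

```python
# Minimal simulation sketch of plain push-based rumor spreading among n fully
# connected processors, some of which never participate. Illustrative only;
# NOT the paper's message-efficient protocol.
import random

def push_rumor_spreading(n, crashed):
    """Run push rounds until every working processor knows the rumor.

    n       -- number of processors, addressed 0..n-1 (fully connected)
    crashed -- ids of processors that never participate
    Returns the number of rounds used.
    """
    working = set(range(n)) - set(crashed)
    informed = {min(working)}              # one working processor starts with the rumor
    rounds = 0
    while informed != working:
        rounds += 1
        for u in list(informed):           # every informed processor pushes once per round
            target = random.randrange(n)   # uniform contact; may hit a crashed processor
            if target in working:
                informed.add(target)
    return rounds

if __name__ == "__main__":
    # e.g. 1000 processors, 300 of which (adversarially chosen) never participate
    print(push_rumor_spreading(n=1000, crashed=range(300)))
```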

    Optimal Gossip with Direct Addressing

    Gossip algorithms spread information by having nodes repeatedly forward information to a few random contacts. By their very nature, gossip algorithms tend to be distributed and fault-tolerant. If done right, they can also be fast and message-efficient. A common model for gossip communication is the random phone call model, in which in each synchronous round each node can PUSH or PULL information to or from a random other node. For example, Karp et al. [FOCS 2000] gave algorithms in this model that spread a message to all nodes in $\Theta(\log n)$ rounds while sending only $O(\log\log n)$ messages per node on average. Recently, Avin and Elsässer [DISC 2013] studied the random phone call model with the natural and commonly used assumption of direct addressing. Direct addressing allows nodes to directly contact nodes whose ID (e.g., IP address) was learned before. They show that in this setting one can "break the $\log n$ barrier" and achieve a gossip algorithm running in $O(\sqrt{\log n})$ rounds, albeit while using $O(\sqrt{\log n})$ messages per node. We study the same model and give a simple gossip algorithm which spreads a message in only $O(\log\log n)$ rounds. We also prove a matching $\Omega(\log\log n)$ lower bound which shows that this running time is best possible. In particular, we show that any gossip algorithm takes with high probability at least $0.99\log\log n$ rounds to terminate. Lastly, our algorithm can be tweaked to send only $O(1)$ messages per node on average with only $O(\log n)$ bits per message. Our algorithm therefore simultaneously achieves the optimal round-, message-, and bit-complexity for this setting. Like all prior gossip algorithms, our algorithm is also robust against failures. In particular, if in the beginning an oblivious adversary fails any $F$ nodes, our algorithm still, with high probability, informs all but $o(F)$ surviving nodes.
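
    A minimal sketch of one PUSH-PULL round in the random phone call model may help fix the communication pattern discussed above. It does not use direct addressing and is not the paper's $O(\log\log n)$-round algorithm; the function names and the termination loop are assumptions for illustration.

```python
# Minimal sketch of the random phone call model's PUSH-PULL pattern: in every
# synchronous round each node calls one uniformly random node, pushing its
# rumor to the callee and pulling the callee's rumor. Illustrative only.
import random

def push_pull_round(informed, n):
    """One round; `informed` is the set of node ids currently holding the rumor."""
    newly = set()
    for caller in range(n):
        callee = random.randrange(n)
        if caller in informed:          # PUSH: the caller forwards the rumor
            newly.add(callee)
        if callee in informed:          # PULL: the caller learns the rumor
            newly.add(caller)
    return informed | newly

if __name__ == "__main__":
    n, informed, rounds = 1 << 14, {0}, 0
    while len(informed) < n:
        informed = push_pull_round(informed, n)
        rounds += 1
    print(rounds)                       # typically logarithmically many rounds
```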

    Bit Complexity of Gossip Protocols

    We study the gossip problem (i.e., rumor spreading) in the random phone call model. Consider $n$ nodes communicating in parallel in rounds. In each round, a (possibly empty) set of rumors is generated at each node, and the same rumor may be generated simultaneously at several nodes. The goal is to spread these rumors to all nodes. To this end, in each round, each node calls another node chosen uniformly at random among all nodes, and a node can then communicate only with the node it called and with the nodes that may have called it. In this model, Karp and his co-authors~\cite{Karp2000} showed that no gossip algorithm can be simultaneously optimal in time (i.e., run in $O(\log n)$ rounds) and in communication volume (i.e., transmit at most $O(n)$ messages). In particular, they showed that any gossip algorithm that does not use node IDs and spreads every rumor in $O(\log n)$ rounds must exchange $\Omega(n\log\log n)$ messages per rumor. Karp and his co-authors also showed that this trade-off can be achieved. In this paper, we study the communication volume measured in the number of bits exchanged rather than in the number of messages. We first show that any gossip algorithm that does not use node IDs and spreads every rumor in $O(\log n)$ rounds must exchange $\Omega(n(b+\log\log n))$ bits to spread a rumor of $b$ bits. We then propose a gossip algorithm that does not use node IDs and spreads every rumor in $O(\log n)$ rounds while exchanging $O(n(b+\log\log n\cdot\log b))$ bits for a rumor of $b$ bits. These results show that, contrary to what one might conclude when measuring the communication volume in number of messages, it is possible to be simultaneously optimal in time (i.e., run in $O(\log n)$ rounds) and in communication volume (i.e., transmit at most $O(nb)$ bits), except for extremely small rumors of size $b\ll\log\log n\cdot\log\log\log n$ bits.
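
    To relate the two bit-complexity bounds above, the following is a brief worked comparison; it merely restates the abstract's bounds and the regime in which they meet, and claims nothing beyond them.

```latex
% For a rumor of b bits spread in O(\log n) rounds (ID-oblivious algorithms):
%   lower bound:  \Omega\bigl( n\,(b + \log\log n) \bigr)          bits,
%   upper bound:  O\bigl( n\,(b + \log\log n \cdot \log b) \bigr)  bits.
% The upper bound is O(nb) exactly when the second term is dominated by b:
\[
   \log\log n \cdot \log b = O(b)
   \quad\text{which holds once}\quad
   b \gtrsim \log\log n \cdot \log\log\log n .
\]
% Hence, outside the regime of extremely small rumors
% (b \ll \log\log n \cdot \log\log\log n), both bounds meet at \Theta(nb) bits:
% time-optimality and bit-optimality can be achieved simultaneously.
```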

    Gossip in a Smartphone Peer-to-Peer Network

    In this paper, we study the fundamental problem of gossip in the mobile telephone model: a recently introduced variation of the classical telephone model modified to better describe the local peer-to-peer communication services implemented in many popular smartphone operating systems. In more detail, the mobile telephone model differs from the classical telephone model in three ways: (1) each device can participate in at most one connection per round; (2) the network topology can undergo a parameterized rate of change; and (3) devices can advertise a parameterized number of bits about their state to their neighbors in each round before connection attempts are initiated. We begin by describing and analyzing new randomized gossip algorithms in this model under the harsh assumption of a network topology that can change completely in every round. We prove a significant time complexity gap between the case where nodes can advertise 0 bits to their neighbors in each round and the case where nodes can advertise 1 bit. For the latter assumption, we present two solutions: the first depends on a shared randomness source, while the second eliminates this assumption using a pseudorandomness generator we prove to exist with a novel generalization of a classical result from the study of two-party communication complexity. We then turn our attention to the easier case where the topology graph is stable, and describe and analyze a new gossip algorithm that provides a substantial performance improvement for many parameters. We conclude by studying a relaxed version of gossip in which it is only necessary for nodes to each learn a specified fraction of the messages in the system.
    Comment: Extended abstract to appear in the Proceedings of the ACM Conference on the Principles of Distributed Computing (PODC 2017).
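
    The sketch below only fixes the round structure of the mobile telephone model described above (advertise a small tag, then at most one connection per device per round). The matching rule and callback names are assumptions; none of the paper's algorithms are reproduced.

```python
# Minimal sketch of one round in the mobile telephone model: each device first
# advertises a small tag (here 1 bit) to its neighbors, then connection
# attempts are resolved so every device joins at most one connection.
# The matching rule and callback names are illustrative assumptions.
def mobile_telephone_round(graph, advertise, connect_to):
    """graph[u]           -- set of u's neighbors in this round
    advertise(u)          -- the 1-bit tag u shows to its neighbors
    connect_to(u, ads)    -- neighbor u tries to call (or None), given tags ads
    Returns the set of realized connections (each device in at most one)."""
    tags = {u: advertise(u) for u in graph}
    busy, connections = set(), set()
    for u in graph:
        ads = {v: tags[v] for v in graph[u]}
        v = connect_to(u, ads)
        if v is not None and u not in busy and v not in busy:
            busy.update({u, v})         # at most one connection per device per round
            connections.add((u, v))
    return connections

if __name__ == "__main__":
    g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    has_rumor = {0: True, 1: False, 2: False}
    conns = mobile_telephone_round(
        g,
        advertise=lambda u: int(has_rumor[u]),        # 1-bit tag: "I hold the rumor"
        connect_to=lambda u, ads: next((v for v, t in ads.items() if t), None),
    )
    print(conns)                        # only one of nodes 1 and 2 gets to connect to 0
```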

    Global Computation in a Poorly Connected World: Fast Rumor Spreading with No Dependence on Conductance

    In this paper, we study the question of how efficiently a collection of interconnected nodes can perform a global computation in the widely studied GOSSIP model of communication. In this model, nodes do not know the global topology of the network, and they may only initiate contact with a single neighbor in each round. This model contrasts with the much less restrictive LOCAL model, where a node may simultaneously communicate with all of its neighbors in a single round. A basic question in this setting is how many rounds of communication are required for the information dissemination problem, in which each node has some piece of information and is required to collect all others. In this paper, we give an algorithm that solves the information dissemination problem in at most $O(D+\mathrm{polylog}(n))$ rounds in a network of diameter $D$, with no dependence on the conductance. This is at most an additive polylogarithmic factor from the trivial lower bound of $D$, which applies even in the LOCAL model. In fact, we prove that something stronger is true: any algorithm that requires $T$ rounds in the LOCAL model can be simulated in $O(T+\mathrm{polylog}(n))$ rounds in the GOSSIP model. We thus prove that these two models of distributed computation are essentially equivalent.
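
    To make the contrast between the two models concrete, here is a minimal sketch of one round in each, under the simplifying assumption that nodes exchange sets of tokens; the contact rule and payloads are placeholders, not the paper's simulation.

```python
# Minimal sketch contrasting one round of the LOCAL model (talk to all
# neighbors at once) with one round of the GOSSIP model (initiate contact with
# a single neighbor). Nodes here exchange sets of tokens; illustrative only.
import random

def local_round(graph, state):
    """LOCAL: every node merges the states of all its neighbors in one round."""
    return {u: state[u] | set().union(*(state[v] for v in graph[u])) for u in graph}

def gossip_round(graph, state):
    """GOSSIP: every node initiates contact with one neighbor; the two endpoints
    of an initiated contact exchange their states."""
    new_state = {u: set(state[u]) for u in graph}
    for u in graph:
        v = random.choice(sorted(graph[u]))   # the single contact u initiates
        new_state[u] |= state[v]
        new_state[v] |= state[u]
    return new_state

if __name__ == "__main__":
    g = {0: {1}, 1: {0, 2}, 2: {1}}           # a path on three nodes
    s = {u: {u} for u in g}                   # each node starts with its own token
    print(local_round(g, s))
    print(gossip_round(g, s))
```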

    Minimizing Message Size in Stochastic Communication Patterns: Fast Self-Stabilizing Protocols with 3 bits

    This paper considers the basic $\mathcal{PULL}$ model of communication, in which in each round, each agent extracts information from a few randomly chosen agents. We seek to identify the smallest amount of information revealed in each interaction (message size) that nevertheless allows for efficient and robust computation of fundamental information dissemination tasks. We focus on the Majority Bit Dissemination problem, which considers a population of $n$ agents with a designated subset of source agents. Each source agent holds an input bit and each agent holds an output bit. The goal is to let all agents converge their output bits to the most frequent input bit of the sources (the majority bit). Note that the particular case of a single source agent corresponds to the classical problem of Broadcast. We concentrate on the severe fault-tolerant context of self-stabilization, in which a correct configuration must be reached eventually, despite all agents starting the execution with arbitrary initial states. We first design a general compiler which can essentially transform any self-stabilizing algorithm with a certain property that uses $\ell$-bit messages into one that uses only $\log\ell$-bit messages, while paying only a small penalty in the running time. By applying this compiler recursively, we then obtain a self-stabilizing Clock Synchronization protocol, in which agents synchronize their clocks modulo some given integer $T$, within $\tilde O(\log n\log T)$ rounds w.h.p., using messages that contain only 3 bits. We then employ the new Clock Synchronization tool to obtain a self-stabilizing Majority Bit Dissemination protocol which converges in $\tilde O(\log n)$ time, w.h.p., on every initial configuration, provided that the ratio of sources supporting the minority opinion is bounded away from half. Moreover, this protocol also uses only 3 bits per interaction.
    Comment: 28 pages, 4 figures
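
    A minimal sketch of the $\mathcal{PULL}$ communication pattern with 3-bit messages, as used above, is given below. The message encoding and update rule of the actual self-stabilizing protocols are not shown; the callback names are assumptions.

```python
# Minimal sketch of the PULL pattern with 3-bit messages: in each round every
# agent reads the short message exposed by one uniformly random agent and
# updates its own state. Callback names are illustrative assumptions; the
# actual self-stabilizing protocols' encodings and transitions are not shown.
import random

def pull_round(n, outgoing_message, update):
    """outgoing_message(i) -- int in [0, 8): the 3-bit message agent i exposes
    update(i, msg)         -- apply agent i's state transition after reading msg"""
    messages = [outgoing_message(i) for i in range(n)]
    assert all(0 <= m < 8 for m in messages), "each message must fit in 3 bits"
    for i in range(n):
        j = random.randrange(n)         # agent i pulls from one random agent j
        update(i, messages[j])
```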