    Consensus with Max Registers

    We consider the problem of implementing randomized wait-free consensus from max registers under the assumption of an oblivious adversary. We show that max registers solve m-valued consensus for arbitrary m in expected O(log^* n) steps per process, beating the Omega(log m/log log m) lower bound for ordinary registers when m is large and the best previously known O(log log n) upper bound when m is small. A simple max-register implementation based on double-collect snapshots translates this result into an O(n log n) expected step implementation of m-valued consensus from n single-writer registers, improving on the best previously known bound of O(n log^2 n) for single-writer registers.
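
    The central primitive here is a max register, supporting WriteMax(v) and ReadMax (which returns the largest value ever written). As a rough, hypothetical sketch of the single-writer, double-collect flavor mentioned in the abstract, the Python below illustrates the interface; the class and method names are invented, and Python lists stand in for atomic registers, so neither atomicity nor wait-freedom is actually enforced.

```python
# Illustrative sketch only: one single-writer register per process; the max
# is recovered with a double collect. Not a faithful wait-free implementation.

class MaxRegister:
    def __init__(self, n):
        self.regs = [0] * n          # register i is written only by process i

    def write_max(self, pid, value):
        # a process only ever raises its own register, so every register
        # (and hence the overall max) is monotonically non-decreasing
        if value > self.regs[pid]:
            self.regs[pid] = value

    def read_max(self):
        # double collect: repeat until two consecutive collects agree,
        # then report the maximum value seen in that clean collect
        while True:
            first = list(self.regs)
            second = list(self.regs)
            if first == second:
                return max(second)

# toy usage
m = MaxRegister(4)
m.write_max(0, 7)
m.write_max(2, 3)
print(m.read_max())  # 7
```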

    Fast Deterministic Consensus in a Noisy Environment

    It is well known that the consensus problem cannot be solved deterministically in an asynchronous environment, but that randomized solutions are possible. We propose a new model, called noisy scheduling, in which an adversarial schedule is perturbed randomly, and show that in this model randomness in the environment can substitute for randomness in the algorithm. In particular, we show that a simplified, deterministic version of Chandra's wait-free shared-memory consensus algorithm (PODC, 1996, pp. 166-175) solves consensus in time at most logarithmic in the number of active processes. The proof of termination is based on showing that a race between independent delayed renewal processes produces a winner quickly. In addition, we show that the protocol finishes in constant time using quantum and priority-based scheduling on a uniprocessor, suggesting that it is robust against the choice of model over a wide range. Comment: Typographical errors fixed
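
    The termination argument sketched above rests on a race: under a randomly perturbed schedule, one value's supporters pull ahead of the other's quickly. The toy simulation below is only meant to convey that intuition; the race function, the two-value restriction, and the lead threshold are all invented for illustration and are not the paper's algorithm.

```python
import random

def race(max_steps=100_000, lead_needed=3):
    """Two competing values advance counters under a randomly perturbed
    schedule; the first to pull lead_needed steps ahead wins."""
    counters = {0: 0, 1: 0}
    for step in range(max_steps):
        v = random.choice((0, 1))    # noisy scheduling: random choice of
        counters[v] += 1             # which value's supporters run next
        if counters[v] >= counters[1 - v] + lead_needed:
            return v, step + 1       # winning value and length of the race
    return None, max_steps

print(race())  # a winner emerges after only a handful of steps on average
```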

    Randomized protocols for asynchronous consensus

    The famous Fischer, Lynch, and Paterson impossibility proof shows that it is impossible to solve the consensus problem in a natural model of an asynchronous distributed system if even a single process can fail. Since its publication, two decades of work on fault-tolerant asynchronous consensus algorithms have evaded this impossibility result by using extended models that provide (a) randomization, (b) additional timing assumptions, (c) failure detectors, or (d) stronger synchronization mechanisms than are available in the basic model. Concentrating on the first of these approaches, we illustrate the history and structure of randomized asynchronous consensus protocols by giving detailed descriptions of several such protocols. Comment: 29 pages; survey paper written for PODC 20th anniversary issue of Distributed Computing
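
    As one concrete instance of approach (a), the sketch below shows a single round of a Ben-Or-style randomized binary consensus protocol, one of the classic algorithms a survey like this describes. It assumes n processes with at most t < n/2 crash failures, follows one common presentation of the thresholds, and abstracts message handling into plain lists of received values; the function names are invented.

```python
import random
from collections import Counter

def report_phase(received, n, t):
    """received: preferences heard from n - t processes in the report phase."""
    value, freq = Counter(received).most_common(1)[0]
    # propose a value only if a strict majority of all n processes sent it
    return value if freq > (n + t) / 2 else None

def propose_phase(proposals, t):
    """proposals: phase-two values heard from n - t processes (may be None).
    Returns (new_preference, decided)."""
    counts = Counter(v for v in proposals if v is not None)
    if counts:
        value, freq = counts.most_common(1)[0]
        if freq >= t + 1:
            return value, True           # enough support: decide
        return value, False              # some support: adopt and continue
    return random.choice([0, 1]), False  # no support: flip a coin
```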

    Rational Fair Consensus in the GOSSIP Model

    The rational fair consensus problem can be informally defined as follows. Consider a network of $n$ (selfish) rational agents, each of them initially supporting a color chosen from a finite set $\Sigma$. The goal is to design a protocol that leads the network to a stable monochromatic configuration (i.e. a consensus) such that the probability that the winning color is $c$ is equal to the fraction of the agents that initially support $c$, for any $c \in \Sigma$. Furthermore, this fairness property must be guaranteed (with high probability) even in the presence of any fixed coalition of rational agents that may deviate from the protocol in order to increase the winning probability of their supported colors. A protocol having this property, in the presence of coalitions of size at most $t$, is said to be a whp-$t$-strong equilibrium. We investigate, for the first time, the rational fair consensus problem in the GOSSIP communication model where, at every round, every agent can actively contact at most one neighbor via a push/pull operation. We provide a randomized GOSSIP protocol that, starting from any initial color configuration of the complete graph, achieves rational fair consensus within $O(\log n)$ rounds using messages of $O(\log^2 n)$ size, w.h.p. More in detail, we prove that our protocol is a whp-$t$-strong equilibrium for any $t = o(n/\log n)$ and, moreover, it tolerates worst-case permanent faults provided that the number of non-faulty agents is $\Omega(n)$. As far as we know, our protocol is the first solution which avoids any all-to-all communication, thus resulting in $o(n^2)$ message complexity. Comment: Accepted at IPDPS'1
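
    To make the fairness property concrete: a color initially held by 30% of the agents should win with probability 0.3. The toy pull-only "voter model" simulation below exhibits exactly that fairness on the complete graph (the count of each color is a martingale), but it is emphatically not the paper's protocol: it offers no protection against rational coalitions and needs far more than O(log n) rounds.

```python
import random
from collections import Counter

def voter_consensus(colors):
    """Each round, every agent adopts the color of a uniformly random agent
    (sampled from the previous round), until one color remains."""
    colors = list(colors)
    n = len(colors)
    while len(set(colors)) > 1:
        snapshot = list(colors)
        colors = [snapshot[random.randrange(n)] for _ in range(n)]
    return colors[0]

# empirical fairness check: "red" starts with 30% support
wins = Counter(voter_consensus(["red"] * 30 + ["blue"] * 70) for _ in range(200))
print(wins)  # "red" should win roughly 30% of the 200 trials
```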

    Compositional competitiveness for distributed algorithms

    We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al., which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction. Comment: 33 pages, 2 figures; full version of STOC 96 paper titled "Modular competitiveness for distributed algorithms."
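
    To fix terminology: a collect returns, for each of the n single-writer registers, a value that was current at some point during the operation. The sketch below only shows the shape of the composition the abstract describes: an outer routine written against an abstract collect subroutine, so that plugging in a competitive collect yields a competitive combined algorithm. The names and the list-of-registers abstraction are invented for illustration.

```python
# A naive collect simply reads each of the n single-writer registers once.
def naive_collect(registers):
    return list(registers)

# An algorithm written *relative to* a collect subroutine: each process
# publishes its own value, then gathers everyone's values via the collect.
# Composing a k-competitive outer algorithm with an l-competitive collect
# gives a kl-competitive algorithm, per the compositional property above.
def publish_and_gather(registers, pid, value, collect=naive_collect):
    registers[pid] = value
    return collect(registers)

regs = [None] * 4
print(publish_and_gather(regs, pid=2, value="hello"))
```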

    Geographic Gossip: Efficient Averaging for Sensor Networks

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields. Comment: To appear, IEEE Transactions on Signal Processing
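
    For contrast with the geographic scheme, here is a minimal sketch of the standard pairwise gossip averaging it improves on: at each step a random edge is chosen and its two endpoints replace their values with their average. The geographic routing and resampling steps of the paper's algorithm are not modeled, and the function and variable names are invented.

```python
import random

def pairwise_gossip(values, edges, steps):
    """Standard gossip averaging: repeatedly average across a random edge."""
    values = list(values)
    for _ in range(steps):
        i, j = random.choice(edges)
        avg = (values[i] + values[j]) / 2.0
        values[i] = values[j] = avg
    return values

# ring of 8 nodes; all values slowly approach the true average 3.5
n = 8
ring = [(i, (i + 1) % n) for i in range(n)]
print(pairwise_gossip([float(v) for v in range(n)], ring, 500))
```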