
    Fast Deterministic Consensus in a Noisy Environment

    It is well known that the consensus problem cannot be solved deterministically in an asynchronous environment, but that randomized solutions are possible. We propose a new model, called noisy scheduling, in which an adversarial schedule is perturbed randomly, and show that in this model randomness in the environment can substitute for randomness in the algorithm. In particular, we show that a simplified, deterministic version of Chandra's wait-free shared-memory consensus algorithm (PODC, 1996, pp. 166-175) solves consensus in time at most logarithmic in the number of active processes. The proof of termination is based on showing that a race between independent delayed renewal processes produces a winner quickly. In addition, we show that the protocol finishes in constant time using quantum and priority-based scheduling on a uniprocessor, suggesting that it is robust against the choice of model over a wide range. Comment: Typographical errors fixed.
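
    The termination argument turns on independent processes with randomly perturbed step times quickly separating so that one is clearly ahead. The toy simulation below only illustrates that race idea under assumed exponential delay noise; it is not Chandra's algorithm or the paper's noisy-scheduling model, and the margin threshold is an arbitrary choice.

```python
import random

def race_winner_round(num_procs=8, margin=1.0, max_rounds=100_000, seed=0):
    """Toy race between independent processes whose progress is perturbed by
    random delays. Returns the first round at which one process leads every
    other by more than `margin` (a stand-in for 'the race has a winner')."""
    rng = random.Random(seed)
    progress = [0.0] * num_procs
    for t in range(1, max_rounds + 1):
        for i in range(num_procs):
            # Unit step plus random noise, mimicking an adversarial schedule
            # perturbed by random delays.
            progress[i] += 1.0 + rng.expovariate(1.0)
        ordered = sorted(progress, reverse=True)
        if ordered[0] - ordered[1] > margin:
            return t
    return max_rounds

if __name__ == "__main__":
    print(race_winner_round())
```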

    Randomized protocols for asynchronous consensus

    The famous Fischer, Lynch, and Paterson impossibility proof shows that it is impossible to solve the consensus problem in a natural model of an asynchronous distributed system if even a single process can fail. Since its publication, two decades of work on fault-tolerant asynchronous consensus algorithms have evaded this impossibility result by using extended models that provide (a) randomization, (b) additional timing assumptions, (c) failure detectors, or (d) stronger synchronization mechanisms than are available in the basic model. Concentrating on the first of these approaches, we illustrate the history and structure of randomized asynchronous consensus protocols by giving detailed descriptions of several such protocols. Comment: 29 pages; survey paper written for PODC 20th anniversary issue of Distributed Computing.
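
    As a rough illustration of the round structure that many of the surveyed protocols share (not any specific protocol from the survey), the sketch below simulates the classic pattern: check for agreement, otherwise have processes adopt coin flips and retry. Real protocols must also cope with asynchrony and crashes, which this sequential toy ignores.

```python
import random

def coin_flip_consensus(preferences, seed=0):
    """Sequential toy of the generic randomized-consensus round structure:
    if all current preferences agree, decide; otherwise flip coins and retry.
    With independent local coins this takes exponentially many expected rounds
    in the number of processes; shared coins (a key idea in the survey) cut
    this to a constant expected number of rounds."""
    rng = random.Random(seed)
    rounds = 0
    while len(set(preferences)) > 1:
        rounds += 1
        preferences = [rng.choice((0, 1)) for _ in preferences]
    return preferences[0], rounds

if __name__ == "__main__":
    print(coin_flip_consensus([0, 1, 1, 0, 1]))
```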

    Compositional competitiveness for distributed algorithms

    We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al., which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction. Comment: 33 pages, 2 figures; full version of STOC 96 paper titled "Modular competitiveness for distributed algorithms."
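
    For context, a collect in this setting is simply one process gathering the current values of all n single-writer registers; the paper's contribution is making concurrent collects share work competitively. The sketch below shows only the trivial, non-competitive version of the operation, with hypothetical class and method names, to fix the interface being discussed.

```python
class SingleWriterRegisters:
    """An array of n single-writer multi-reader registers, one per process."""

    def __init__(self, n: int):
        self.values = [None] * n

    def write(self, i: int, value) -> None:
        # Process i may only write its own register.
        self.values[i] = value

    def collect(self) -> list:
        # Trivial collect: read every register once, O(n) reads per collect.
        # Competitive collect algorithms instead let concurrent collects reuse
        # each other's work, which is what the throughput measure captures.
        return list(self.values)

if __name__ == "__main__":
    regs = SingleWriterRegisters(4)
    for i in range(4):
        regs.write(i, f"value-{i}")
    print(regs.collect())
```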

    Consensus with Max Registers

    We consider the problem of implementing randomized wait-free consensus from max registers under the assumption of an oblivious adversary. We show that max registers solve m-valued consensus for arbitrary m in expected O(log^* n) steps per process, beating the Omega(log m/log log m) lower bound for ordinary registers when m is large and the best previously known O(log log n) upper bound when m is small. A simple max-register implementation based on double-collect snapshots translates this result into an O(n log n) expected step implementation of m-valued consensus from n single-writer registers, improving on the best previously known bound of O(n log^2 n) for single-writer registers.
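
    A max register supports WriteMax(v) and ReadMax(), where ReadMax returns the largest value ever written. The sketch below is a sequential toy of the simple construction the abstract mentions: each process records its largest value in its own single-writer register, and ReadMax takes the maximum over a double collect. Names are hypothetical, and the sequential simulation hides the concurrency issues the real double-collect snapshot exists to solve.

```python
class MaxRegisterFromRegisters:
    """Sequential sketch of a max register built from n single-writer registers."""

    def __init__(self, n: int):
        self.regs = [0] * n        # register i is written only by process i

    def write_max(self, i: int, value: int) -> None:
        if value > self.regs[i]:
            self.regs[i] = value

    def read_max(self) -> int:
        # Double collect: in a concurrent execution we would repeat until two
        # successive collects are identical, giving a consistent snapshot.
        # Sequentially the two collects trivially agree.
        first = list(self.regs)
        second = list(self.regs)
        assert first == second
        return max(second)

if __name__ == "__main__":
    m = MaxRegisterFromRegisters(3)
    m.write_max(0, 7)
    m.write_max(2, 42)
    print(m.read_max())   # 42
```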

    Space-Optimal Majority in Population Protocols

    Population protocols are a model of distributed computing, in which n agents with limited local state interact randomly, and cooperate to collectively compute global predicates. An extensive series of papers, across different communities, has examined the computability and complexity characteristics of this model. Majority, or consensus, is a central task, in which agents need to collectively reach a decision as to which one of two states A or B had a higher initial count. Two complexity metrics are important: the time that a protocol requires to stabilize to an output decision, and the state space size that each agent requires. It is known that majority requires Omega(log log n) states per agent to allow for poly-logarithmic time stabilization, and that O(log^2 n) states are sufficient. Thus, there is an exponential gap between the upper and lower bounds. We address this gap. We provide a new lower bound of Omega(log n) states for any protocol which stabilizes in O(n^{1-c}) time, for any constant c > 0. This result is conditional on basic monotonicity and output assumptions, satisfied by all known protocols. Technically, it represents a significant departure from previous lower bounds. Instead of relying on dense configurations, we introduce a new surgery technique to construct executions which contradict the correctness of algorithms that stabilize too fast. Consequently, our lower bound applies to general initial configurations. We give an algorithm for majority which uses O(log n) states and stabilizes in O(log^2 n) time. Central to the algorithm is a new leaderless phase clock, which allows nodes to synchronize in phases of Theta(n log n) consecutive interactions using O(log n) states per node. We also employ our phase clock to build a leader election algorithm with O(log n) states, which stabilizes in O(log^2 n) time.
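
    To make the interaction model concrete, the simulation below runs the well-known 3-state approximate-majority protocol (states A, B, and blank) under uniformly random pairwise interactions. It is not the O(log n)-state algorithm of this paper; it only illustrates how randomly scheduled interactions drive a population toward the initial majority value.

```python
import random

def three_state_majority(n_a: int, n_b: int, seed: int = 0):
    """Simulate the classic 3-state approximate-majority population protocol.
    Rules (initiator u, responder v): if u and v hold opposite opinions, v
    becomes blank; if v is blank, it adopts u's opinion. Returns the final
    consensus value and the number of pairwise interactions used."""
    rng = random.Random(seed)
    pop = ['A'] * n_a + ['B'] * n_b
    interactions = 0
    while len(set(pop)) > 1:
        u, v = rng.sample(range(len(pop)), 2)   # random ordered pair of agents
        a, b = pop[u], pop[v]
        if {a, b} == {'A', 'B'}:
            pop[v] = '_'                        # conflicting responder goes blank
        elif b == '_' and a != '_':
            pop[v] = a                          # blank responder adopts the opinion
        interactions += 1
    return pop[0], interactions

if __name__ == "__main__":
    winner, steps = three_state_majority(600, 400, seed=1)
    print(f"consensus on {winner} after {steps} interactions")
```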

    Efficient size estimation and impossibility of termination in uniform dense population protocols

    We study uniform population protocols: networks of anonymous agents whose pairwise interactions are chosen at random, where each agent uses an identical transition algorithm that does not depend on the population size n. Many existing polylog(n) time protocols for leader election and majority computation are nonuniform: to operate correctly, they require all agents to be initialized with an approximate estimate of n (specifically, the exact value ⌊log n⌋). Our first main result is a uniform protocol for calculating log(n) ± O(1) with high probability in O(log^2 n) time and O(log^4 n) states (O(log log n) bits of memory). The protocol is converging but not terminating: it does not signal when the estimate is close to the true value of log n. If it could be made terminating, this would allow composition with protocols, such as those for leader election or majority, that require a size estimate initially, making them uniform (though with a small probability of failure). We do show how our main protocol can be indirectly composed with others in a simple and elegant way, based on the leaderless phase clock, demonstrating that those protocols can in fact be made uniform. However, our second main result implies that the protocol cannot be made terminating; this is a consequence of a much stronger result: a uniform protocol for any task requiring more than constant time cannot be terminating, even with probability bounded away from 0, if infinitely many initial configurations are dense, i.e., any state present initially occupies Omega(n) agents. (In particular, no leader is allowed.) Crucially, the result holds no matter the memory or time permitted. Finally, we show that with an initial leader, our size-estimation protocol can be made terminating with high probability, with the same asymptotic time and space bounds. Comment: Using leaderless phase clock.
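
    A folklore way to get a crude uniform size estimate, shown here purely for illustration (it is not the paper's protocol and does not achieve its log(n) ± O(1) guarantee), is for each agent to count fair-coin heads before the first tail and for the population to spread the maximum count by epidemic; the maximum of n such geometric values is about log2(n).

```python
import random

def crude_log_n_estimate(n: int, seed: int = 0) -> int:
    """Each of n agents counts heads before the first tail of a fair coin.
    The maximum count over the population concentrates around log2(n); in a
    population protocol the maximum would spread via interactions
    (x, y) -> (max(x, y), max(x, y)) in O(log n) parallel time."""
    rng = random.Random(seed)
    best = 0
    for _ in range(n):
        heads = 0
        while rng.random() < 0.5:
            heads += 1
        best = max(best, heads)
    return best

if __name__ == "__main__":
    for n in (1_000, 100_000):
        print(n, "estimate of log2(n):", crude_log_n_estimate(n))
```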

    Fault-tolerant routing in peer-to-peer systems

    We consider the problem of designing an overlay network and routing mechanism that permits finding resources efficiently in a peer-to-peer system. We argue that many existing approaches to this problem can be modeled as the construction of a random graph embedded in a metric space whose points represent resource identifiers, where the probability of a connection between two nodes depends only on the distance between them in the metric space. We study the performance of a peer-to-peer system where nodes are embedded at grid points in a simple metric space: a one-dimensional real line. We prove upper and lower bounds on the message complexity of locating particular resources in such a system, under a variety of assumptions about failures of either nodes or the connections between them. Our lower bounds in particular show that the use of inverse power-law distributions in routing, as suggested by Kleinberg (1999), is close to optimal. We also give efficient heuristics to dynamically maintain such a system as new nodes arrive and old nodes depart. Finally, we give experimental results that suggest promising directions for future work. Comment: Full version of PODC 2002 paper. New version corrects missing conditioning in Lemma 9 and some related details in the proof of Theorem 10, with no changes to the main results.
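
    The model described (nodes on a line or ring, long-range links chosen with probability inversely proportional to distance, greedy routing toward the target) can be illustrated with a short simulation. The sketch below is an assumption-laden toy in the style of Kleinberg's construction, not the paper's exact overlay or its fault model; it just shows the greedy routing mechanic and typical hop counts.

```python
import random

def ring_distance(u: int, v: int, n: int) -> int:
    return min(abs(u - v), n - abs(u - v))

def build_overlay(n: int, rng: random.Random):
    """Each node links to its two ring neighbors plus one long-range contact
    chosen with probability proportional to 1/distance (inverse power law)."""
    links = []
    for u in range(n):
        weights = [1.0 / ring_distance(u, v, n) if v != u else 0.0 for v in range(n)]
        contact = rng.choices(range(n), weights=weights, k=1)[0]
        links.append({(u - 1) % n, (u + 1) % n, contact})
    return links

def greedy_route(links, n: int, src: int, dst: int) -> int:
    """Forward to whichever neighbor is closest to the destination."""
    hops, cur = 0, src
    while cur != dst:
        cur = min(links[cur], key=lambda v: ring_distance(v, dst, n))
        hops += 1
    return hops

if __name__ == "__main__":
    rng = random.Random(1)
    n = 1024
    links = build_overlay(n, rng)
    routes = [greedy_route(links, n, rng.randrange(n), rng.randrange(n)) for _ in range(100)]
    print("average hops:", sum(routes) / len(routes))
```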