11 research outputs found

    Failure detectors as type boosters

    The power of an object type T can be measured as the maximum number n of processes that can solve consensus using only objects of T and registers. This number, denoted cons(T), is called the consensus power of T. This paper addresses the question of the weakest failure detector to solve consensus among a number k > n of processes that communicate using shared objects of a type T with consensus power n. In other words, we seek a failure detector that is sufficient and necessary to "boost" the consensus power of a type T from n to k. It was shown in Neiger (Proceedings of the 14th annual ACM symposium on principles of distributed computing (PODC), pp. 100-109, 1995) that a certain failure detector, denoted Ω_n, is sufficient to boost the power of a type T from n to k, and it was conjectured that Ω_n was also necessary. In this paper, we prove this conjecture for one-shot deterministic types. We first show that, for any one-shot deterministic type T with cons(T) ≤ n, Ω_n is necessary to boost the power of T from n to n+1. Then we go a step further and show that Ω_n is also the weakest to boost the power of (n+1)-ported one-shot deterministic types from n to any k > n. Our result generalizes, in a precise sense, the result of the weakest failure detector to solve consensus in asynchronous message-passing systems (Chandra et al. in J ACM 43(4):685-722, 1996). As a corollary, we show that Ω_t is the weakest failure detector to boost the resilience level of a distributed shared memory system, i.e., to solve consensus among n > t processes using (t − 1)-resilient objects of consensus power t.
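    A concrete instance of the hierarchy the abstract relies on: test-and-set is a standard example of a type with consensus power 2, and the classic construction below solves consensus for exactly two processes using one test-and-set object plus registers. This is an illustrative sketch of that known construction, not code from the paper; all names are ours.

```python
import threading

class TestAndSet:
    """One-shot test-and-set object: exactly one caller wins."""
    def __init__(self):
        self._lock = threading.Lock()
        self._taken = False

    def test_and_set(self):
        with self._lock:
            won = not self._taken
            self._taken = True
            return won

# Shared state: one test-and-set object and one register per process.
tas = TestAndSet()
proposals = [None, None]

def consensus(pid, value):
    """Two-process consensus from test-and-set plus registers: announce
    your proposal, then race; the loser adopts the winner's value."""
    proposals[pid] = value          # announce before racing
    if tas.test_and_set():          # winner decides its own value
        return value
    return proposals[1 - pid]       # loser adopts the winner's proposal
```

    The same construction does not extend to a third process, which is exactly what cons(test-and-set) = 2 expresses.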

    The CHT Play

    This note gives a high-level and informal account of the necessity part of the proof that Ω is the weakest failure detector to implement consensus with a majority of correct processes. The proof originally appeared in a widely cited but rarely understood paper by Chandra, Hadzilacos and Toueg. We describe it here as a play in five acts, preceded by a prologue and followed by an epilogue.

    À la recherche du détecteur de fautes minimal pour le k-accord

    This master's research internship report in computer science presents work carried out in the field of asynchronous distributed systems. The studies aim to determine the weakest failure detector that allows the k-set agreement problem to be solved in an asynchronous distributed system where processes communicate by message passing and where an arbitrarily large number of failures may occur.

    Looking for the Weakest Failure Detector for k-Set Agreement in Message-passing Systems: Is Π_k the End of the Road?

    In the k-set agreement problem, each process (in a set of n processes) proposes a value and has to decide a proposed value in such a way that at most k different values are decided. While this problem can easily be solved in asynchronous systems prone to t process crashes when k > t, it cannot be solved when k ≤ t. For several years, the failure detector-based approach has been investigated to circumvent this impossibility. While the weakest failure detector class to solve the k-set agreement problem in read/write shared-memory systems has recently been discovered (PODC 2009), the situation is different in message-passing systems, where the weakest failure detector classes are known only for the extreme cases k = 1 (consensus) and k = n − 1 (set agreement). This paper introduces a candidate for the general case. It presents a new failure detector class, denoted Π_k, and shows that Π_1 = Σ × Ω (the weakest class for k = 1) and Π_{n−1} = L (the weakest class for k = n − 1). Then, the paper investigates the structure of Π_k and shows it is the combination of two failure detector classes denoted Σ_k and Ω_k (which generalize the previous "quorums" and "eventual leaders" failure detector classes). Finally, the paper proves that Σ_k is a necessary requirement (as far as information on failures is concerned) to solve the k-set agreement problem in message-passing systems. The paper also presents a Π_{n−1}-based algorithm that solves the (n − 1)-set agreement problem. This algorithm provides a new algorithmic insight into the way the (n − 1)-set agreement problem can be solved in asynchronous message-passing systems (insight from the point of view of the non-partitioning constraint defined by Σ_{n−1}).
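    The problem statement in the first sentence pins down two safety properties (validity: every decided value was proposed; k-agreement: at most k distinct values are decided), which can be written as a small checker over a finished run. This is our own illustration of the definition, not code from the paper:

```python
def check_k_set_agreement(proposed, decided, k):
    """Safety check for a completed k-set agreement run.

    proposed: dict mapping pid -> proposed value
    decided:  dict mapping pid -> decided value (deciding processes only)
    """
    # Validity: every decided value was proposed by some process.
    assert all(v in proposed.values() for v in decided.values())
    # k-Agreement: at most k different values are decided.
    assert len(set(decided.values())) <= k

# Example with n = 4 and k = 2: two distinct decisions are allowed.
check_k_set_agreement({0: 'a', 1: 'b', 2: 'c', 3: 'a'},
                      {0: 'a', 1: 'b', 2: 'a', 3: 'b'}, k=2)
```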

    Failure Detection Lower Bounds on Registers and Consensus (Preliminary Version)

    This paper addresses the problem of determining the weakest failure detector to implement consensus in a message passing system when t out of n processes can crash (including when n/2 ≤ t < n − 1), by addressing the problem of determining the weakest failure detector to implement a register. We complement and, in a precise sense, generalise previous results on the implementability of consensus and registers in a message passing model (augmented with the failure detector abstraction).
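    For background on the register side of this question: when a majority of processes is correct (t < n/2), a register can be emulated over message passing with quorums alone, in the style of the well-known ABD construction; the range n/2 ≤ t studied above is precisely where quorums alone no longer suffice. Below is a single-writer, in-memory sketch of the quorum logic under our own simplifying assumptions (replicas as dicts, one fixed majority standing in for whichever majority happens to reply):

```python
class MajorityRegister:
    """ABD-style sketch: any two majorities intersect, so a read sees the
    latest completed write with no timing assumptions (needs t < n/2)."""
    def __init__(self, n):
        self.majority = n // 2 + 1
        self.replicas = [{'ts': 0, 'val': None} for _ in range(n)]
        self.ts = 0  # single-writer timestamp

    def _a_majority(self):
        # Stand-in for "the first majority of processes that reply".
        return self.replicas[:self.majority]

    def write(self, value):
        self.ts += 1
        for r in self._a_majority():            # store at a majority
            if self.ts > r['ts']:
                r['ts'], r['val'] = self.ts, value

    def read(self):
        replies = [(r['ts'], r['val']) for r in self._a_majority()]
        ts, val = max(replies, key=lambda p: p[0])
        for r in self._a_majority():            # write-back phase, so a
            if ts > r['ts']:                    # later read cannot observe
                r['ts'], r['val'] = ts, val     # an older value
        return val

reg = MajorityRegister(n=5)
reg.write('x')
assert reg.read() == 'x'
```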

    Chasing the Weakest Failure Detector for k-Set Agreement in Message-passing Systems

    This paper continues our quest for the weakest failure detector which allows the k-set agreement problem to be solved in asynchronous message-passing systems prone to any number of process failures. It has two main contributions which (we hope) will be instrumental in completing this quest. The first contribution is a new failure detector (denoted ΠΣ_{x,y}). This failure detector has several noteworthy properties. (a) It is stronger than Σ_x, which has been shown to be necessary. (b) It is equivalent to the pair (Σ, Ω) when x = y = 1 (from which it follows that ΠΣ_{1,1} is optimal to solve consensus). (c) It is equivalent to the pair (Σ_{n−1}, Ω_{n−1}) when x = y = n − 1 (from which it follows that ΠΣ_{n−1,n−1} is optimal for (n − 1)-set agreement). (d) It is strictly weaker than the pair (Σ_x, Ω_y) (which has been investigated in previous works) for the pairs (x, y) such that 1 < y < x < n. (e) It is operational: the paper presents a ΠΣ_{x,y}-based algorithm that solves k-set agreement for k ≥ xy. The second contribution of the paper is a proof that, for 1 < k < n − 1, the eventual leaders failure detector Ω_k (which eventually provides each process with the same set of k process identities, this set including at least one correct process) is not necessary to solve the k-set agreement problem. More precisely, the paper shows that the weakest failure detector for k-set agreement and Ω_k cannot be compared.
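    The parenthetical definition of Ω_k above translates directly into an oracle interface. The sketch below is our own rendering of that definition; the class names and the toy stabilization scheme are hypothetical:

```python
class OmegaK:
    """Eventual-leaders detector Omega_k, per the definition above: after
    some unknown time, every query at every correct process returns the
    same set of at most k process identities, containing at least one
    correct process. Before that time, outputs are unconstrained."""
    def __init__(self, k):
        self.k = k

    def leaders(self):
        raise NotImplementedError  # fed by the runtime in a real system

class SimulatedOmegaK(OmegaK):
    """Toy instantiation for simulations: stabilizes after a fixed number
    of queries (real runs never know the stabilization time in advance)."""
    def __init__(self, k, stable_set, stabilize_after=10):
        super().__init__(k)
        assert len(stable_set) <= k
        self.stable_set = stable_set
        self.countdown = stabilize_after

    def leaders(self):
        if self.countdown > 0:
            self.countdown -= 1
            return {0}  # arbitrary pre-stabilization output
        return self.stable_set
```

    Ω_1 is the classic Ω: eventually a single, common, correct leader.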

    The Failure Detector Abstraction

    A failure detector is a fundamental abstraction in distributed computing. This paper surveys this abstraction along two dimensions. First, we study failure detectors as building blocks that simplify the design of reliable distributed algorithms. In particular, we illustrate how failure detectors can factor out timing assumptions to detect failures in distributed agreement algorithms. Second, we study failure detectors as computability benchmarks. That is, we survey the weakest failure detector question and illustrate how failure detectors can be used to classify problems. We also highlight some limitations of the failure detector abstraction along each of these dimensions.
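    One standard way failure detectors "factor out timing assumptions" is the heartbeat-with-adaptive-timeout construction: under partial synchrony, doubling a peer's timeout after every false suspicion means wrong suspicions eventually stop, without the algorithm ever knowing the actual message delays. A single-process sketch under our own assumptions, not code from the paper:

```python
import time

class HeartbeatDetector:
    """Suspect a peer on timeout; grow its timeout whenever a suspicion
    turns out to be wrong (a heartbeat arrives from a suspected peer)."""
    def __init__(self, peers, initial_timeout=1.0):
        self.timeout = {p: initial_timeout for p in peers}
        self.last_seen = {p: time.monotonic() for p in peers}
        self.suspected = set()

    def on_heartbeat(self, p):
        self.last_seen[p] = time.monotonic()
        if p in self.suspected:          # false suspicion: be more patient
            self.suspected.discard(p)
            self.timeout[p] *= 2

    def check(self):
        now = time.monotonic()
        for p, last in self.last_seen.items():
            if now - last > self.timeout[p]:
                self.suspected.add(p)
        return self.suspected
```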

    Fairneß, Randomisierung und Konspiration in verteilten Algorithmen

    Concepts such as fairness (i.e., fair conflict resolution), randomization (i.e., coin flips), and partial synchrony are frequently used to solve fundamental synchronization and coordination problems in distributed systems, such as the mutual exclusion problem (mutex problem for short) and the consensus problem. For some problems it has been proven that, without such concepts, no solution to the particular problem exists. Impossibility results of this kind improve our understanding of the way distributed algorithms work, as well as of the trade-off between a tractable model and a powerful model of distributed computation. In this thesis, we prove two new impossibility results and investigate their reasons. We are particularly concerned with models for randomized distributed algorithms, since little is yet known about the limitations of randomization with respect to the solvability of problems in distributed systems. By a solution through randomization we mean that the problem under consideration is solved with probability 1.

    In the first part of the thesis, we investigate the relationship between fairness and randomization. On the one hand, it is known that some problems (e.g., the consensus problem) admit a solution through randomization but not through fairness. On the other hand, we show that there are problems (viz. the mutex problem) that admit a solution through fairness but not through randomization. These results imply that fairness cannot be implemented by coin flips.

    In the second part of the thesis, we consider a model which combines fairness and randomization. Such a model is quite powerful, allowing solutions to the mutex problem, the consensus problem, and the generalized mutex problem. In the generalized mutex problem (a.k.a. the dining philosophers problem), a neighborhood relation on the agents is given and mutual exclusion must be achieved for each pair of neighbors. We finally consider the crash-tolerant generalized mutex problem, a variant in which agents may crash and every hungry agent must eventually become critical provided that neither itself nor one of its neighbors crashes. We prove that even the combination of fairness and randomization does not admit a solution to the crash-tolerant generalized mutex problem. We argue that the reason for this impossibility is the inherent occurrence of an undesirable phenomenon known in the literature as conspiracy. Conspiracy had not yet been properly characterized. We characterize conspiracy on the basis of non-sequential runs, and we show that, for a large class of systems, conspiracy can be prevented with the help of the additional assumption of partial synchrony; i.e., every conspiracy-prone system can be refined to a randomized system which is, with probability 1, conspiracy-free under the assumptions of partial synchrony and fairness. Partial synchrony means that each event consumes a bounded amount of time where, however, the bound is not known.

    The presentation of the impossibility results and the characterization of conspiracy rely on non-sequential runs: in contrast to a sequential run, a non-sequential run represents the causal order rather than the temporal order of events. In particular, we develop a non-sequential semantics for randomized distributed algorithms, since no such semantics exists in the literature. In this semantics, causal independence is reflected by stochastic independence.