
    Fast and compact self-stabilizing verification, computation, and fault detection of an MST

    This paper demonstrates the usefulness of distributed local verification of proofs as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs and uses it to improve the time complexity significantly while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of the Minimum Spanning Tree (MST) problem. That is, we present algorithms that are both time and space efficient, for constructing an MST as well as for verifying it. This involves several parts that may be considered contributions in themselves. First, we generalize the notion of local proofs, trading time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof labeling scheme that is memory optimal (i.e., $O(\log n)$ bits per node) and whose time complexity is $O(\log^2 n)$ in synchronous networks, or $O(\Delta \log^3 n)$ in asynchronous ones, where $\Delta$ is the maximum node degree. This answers an open problem posed by Awerbuch and Varghese (FOCS 1991). We also show that $\Omega(\log n)$ time is necessary, even in synchronous networks. Another property is that if $f$ faults occurred, then, within the required detection time above, they are detected by some node in the $O(f \log n)$ locality of each of the faults. Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof labeling scheme, and produces an efficient self-stabilizing algorithm. When used for MST, the transformer produces a memory-optimal self-stabilizing algorithm whose time complexity, namely $O(n)$, is significantly better even than that of previous algorithms. (The time complexity of previous MST algorithms that used $\Omega(\log^2 n)$ memory bits per node was $O(n^2)$, and the time for optimal-space algorithms was $O(n|E|)$.) Inherited from our proof labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction has ended, they are detected by some nodes within $O(\log^2 n)$ time in synchronous networks, or within $O(\Delta \log^3 n)$ time in asynchronous ones, and (2) if $f$ faults occurred, then, within the required detection time above, they are detected within the $O(f \log n)$ locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in memory.
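    As a concrete illustration of local verification with labels, the following is a minimal Python sketch for the simpler task of checking that parent pointers form a tree rooted at a single node; the paper's MST scheme is considerably more involved, and all names below are illustrative rather than taken from the paper.

```python
# Hedged sketch: a proof labeling scheme for rooted-tree verification.
# Each node stores (root_id, dist, parent); the labels encode a rooted tree
# iff every node accepts after checking only its own and its neighbors' labels.

def verify_locally(node, label, neighbor_labels):
    """Local test run independently at each node."""
    root_id, dist, parent = label
    if parent is None:                        # node claims to be the root
        return root_id == node and dist == 0
    if parent not in neighbor_labels:         # parent must be an actual neighbor
        return False
    p_root, p_dist, _ = neighbor_labels[parent]
    # Same root everywhere, and distance strictly decreases toward the root,
    # so a cycle of parent pointers cannot fool every node at once.
    return p_root == root_id and p_dist == dist - 1

# Example: the path 0 - 1 - 2, rooted at node 0.
labels = {0: (0, 0, None), 1: (0, 1, 0), 2: (0, 2, 1)}
adj = {0: [1], 1: [0, 2], 2: [1]}
print(all(
    verify_locally(v, labels[v], {u: labels[u] for u in adj[v]})
    for v in labels
))  # True: every node accepts
```

    A corrupted label, say one creating a parent cycle, would violate the distance check at some node; quantifying how fast and how close to the fault such a violation is noticed is exactly the kind of guarantee the paper establishes.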

    Fair and Reliable Self-Stabilizing Communication

    We assume a link-register communication model under read/write atomicity, where every process can read from, but cannot write into, its neighbours' registers. The paper presents two self-stabilizing protocols for basic fair and reliable link communication primitives. The first primitive guarantees that any process writes a new value in its register(s) only after all its neighbours have read the previous value, whatever the initial scheduling of processes' actions. The second primitive implements a weak rendezvous communication mechanism using an alternating bit protocol: whenever a process consecutively writes n values (possibly the same ones) in a register, each neighbour is guaranteed to read each value from the register at least once. Both protocols are self-stabilizing and run in asynchronous arbitrary networks. The goal of the paper is to handle each primitive in a separate procedure, which can be used as a black box in more involved self-stabilizing protocols.
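    As a rough illustration of the alternating-bit idea behind the second primitive, here is a sketch (one writer, one reader, a sequential simulation, and invented names; not the paper's protocol): the writer tags each value with a bit and overwrites its register only after the reader has echoed that bit back.

```python
# Hedged sketch of an alternating-bit handshake over shared registers:
# the writer's register holds (value, tag bit); the reader echoes the tag bit
# into its own register to acknowledge, so no value is overwritten unread.

class Writer:
    def __init__(self):
        self.reg = (None, 0)              # (value, tag bit), readable by the reader

    def try_write(self, value, ack_bit):
        _, bit = self.reg
        if ack_bit != bit:                # acknowledgment still pending
            return False
        self.reg = (value, 1 - bit)       # safe to overwrite: flip the tag
        return True

class Reader:
    def __init__(self):
        self.ack = 0                      # echo register, readable by the writer
        self.seen = []

    def poll(self, writer_reg):
        value, bit = writer_reg
        if bit != self.ack:               # a value not yet acknowledged
            self.seen.append(value)
            self.ack = bit                # acknowledge by echoing the tag bit

w, r = Writer(), Reader()
for v in ["a", "b", "c"]:
    while not w.try_write(v, r.ack):      # writer blocks until the last value was read
        r.poll(w.reg)
    r.poll(w.reg)
print(r.seen)                             # ['a', 'b', 'c']: each value read at least once
```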

    Making Self-Stabilizing Algorithms for Any Locally Greedy Problem

    Self-stabilizing algorithms are a way to deal with network dynamicity: the network will update itself after a change (addition or removal of nodes or edges), as long as changes are not too frequent. We propose an automatic transformation of synchronous distributed algorithms that solve locally greedy and mendable problems into self-stabilizing algorithms in anonymous networks. Mendable problems are a generalization of greedy problems where any partial solution may be transformed (instead of completed) into a global solution: every time we extend the partial solution, we are allowed to change the previous partial solution up to a given distance. Locally here means that to extend a solution for a node, we need to look only at a constant distance from it. To do this, we propose the first explicit self-stabilizing algorithm computing a (k,k-1)-ruling set (i.e., a "maximal independent set at distance k"). By combining this technique multiple times, we compute a distance-K coloring of the graph. With this coloring we can finally simulate LOCAL model algorithms that run in a constant number of rounds, using the colors as unique identifiers. Our algorithms work under the Gouda daemon, which is similar to the probabilistic daemon: if an event can happen, it will eventually occur.
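    To make the central object concrete, here is a hedged, centralized Python sketch of a (k,k-1)-ruling set: members are pairwise at distance at least k, and every node is within distance k-1 of some member. The paper's algorithm is distributed and self-stabilizing; this greedy version only illustrates the definition.

```python
# Hedged sketch: greedy construction of a (k, k-1)-ruling set on a graph
# given as an adjacency dict. Sequential and centralized, for illustration only.
from collections import deque

def bfs_ball(adj, src, radius):
    """All nodes within the given hop radius of src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == radius:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist.keys()

def ruling_set(adj, k):
    ruled, members = set(), []
    for v in sorted(adj):                          # any fixed scan order works
        if v not in ruled:                         # v is >= k away from every member
            members.append(v)
            ruled.update(bfs_ball(adj, v, k - 1))  # v dominates its (k-1)-ball
    return members

# A path on 7 nodes: with k = 3, members are pairwise >= 3 apart.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 7] for i in range(7)}
print(ruling_set(path, 3))                         # [0, 3, 6]
```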

    Robust network computation

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 91-98). In this thesis, we present various models of distributed computation and algorithms for these models. The underlying theme is to come up with fast algorithms that can tolerate faults in the underlying network. We begin with the classical message-passing model of computation, surveying many known results. We give a new, universally optimal, edge-biconnectivity algorithm for the classical model. We also give a near-optimal sub-linear algorithm for identifying bridges, when all nodes are activated simultaneously. After discussing some ways in which the classical model is unrealistic, we survey known techniques for adapting the classical model to the real world. We describe a new balancing model of computation. The intent is that algorithms in this model should be automatically fault-tolerant. Existing algorithms that can be expressed in this model are discussed, including ones for clustering, maximum flow, and synchronization. We discuss the use of agents in our model, and give new agent-based algorithms for census and biconnectivity. Inspired by the balancing model, we look at two problems in more depth. First, we give matching upper and lower bounds on the time complexity of the census algorithm, and we show how the census algorithm can be used to name nodes uniquely in a faulty network. Second, we consider using discrete harmonic functions as a computational tool. These functions are a natural exemplar of the balancing model. We prove new results concerning the stability and convergence of discrete harmonic functions, and describe a method, which we call Eulerization, for speeding up convergence. By David Pritchard, M.Eng.
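    The discrete harmonic functions mentioned above admit a compact illustration: pin values on some boundary nodes, then repeatedly replace every interior value by the average of its neighbors. The sketch below (plain Python, illustrative only) shows the balancing flavor of the computation; the thesis itself analyzes the stability and convergence of such processes.

```python
# Hedged sketch: computing a discrete harmonic function by repeated local
# averaging. Interior nodes relax toward the mean of their neighbors while
# boundary nodes stay pinned; the fixed point is harmonic.

def harmonic(adj, boundary, sweeps=1000):
    h = {v: boundary.get(v, 0.0) for v in adj}
    for _ in range(sweeps):
        for v in adj:
            if v not in boundary:             # boundary values never change
                h[v] = sum(h[u] for u in adj[v]) / len(adj[v])
    return h

# Path 0-1-2-3 with h(0) = 0 and h(3) = 1: harmonic values interpolate linearly.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(harmonic(path, {0: 0.0, 3: 1.0}))       # ~ {0: 0.0, 1: 0.333, 2: 0.667, 3: 1.0}
```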

    Metastability-containing circuits, parallel distance problems, and terrain guarding

    We study three problems. The first is the phenomenon of metastability in digital circuits. This is a state of bistable storage elements, such as registers, that is neither logical 0 nor 1 and breaks the abstraction of Boolean logic. We propose a time- and value-discrete model for metastability in digital circuits and show that it reflects relevant physical properties. Further, we propose the fundamentally new approach of using logical masking to perform meaningful computations despite the presence of metastable upsets, and we analyze which functions can be computed in our model. Additionally, we show that circuits with masking registers grow computationally more powerful with each available clock cycle. The second topic is parallel algorithms, based on an algebraic abstraction of the Moore-Bellman-Ford algorithm, for solving various distance problems. Our focus is on distance approximations that obey the triangle inequality while at the same time achieving polylogarithmic depth and low work. Finally, we study the continuous Terrain Guarding Problem. We show that it has a rational discretization with a quadratic number of guard candidates, establish its membership in NP and the existence of a PTAS, and present an efficient implementation of a solver.
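    The algebraic abstraction of Moore-Bellman-Ford can be made concrete with a small sketch: write the relaxation over a pair of operations (here min and +), so that swapping in a different suitable semiring solves a different path problem. This is a generic illustration, not the dissertation's parallel algorithm, and the names are invented.

```python
# Hedged sketch: Bellman-Ford relaxation parameterized by semiring operations.
# With plus = min and times = +, this computes single-source shortest paths.
import math
from functools import reduce

def mbf(n, edges, source, plus=min, times=lambda a, b: a + b,
        zero=math.inf, one=0):
    d = [zero] * n
    d[source] = one
    for _ in range(n - 1):                    # n-1 relaxation rounds suffice
        d = [reduce(plus,
                    [times(d[u], w) for (u, head, w) in edges if head == v],
                    d[v])
             for v in range(n)]
    return d

edges = [(0, 1, 2), (1, 2, 2), (0, 2, 5)]     # directed, weighted
print(mbf(3, edges, 0))                       # [0, 2, 4]: min-plus distances
```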

    Asynchronous neighborhood task synchronization

    Faults are likely to occur in distributed systems. The motivation for designing self-stabilizing systems is to be able to recover automatically from a faulty state. As per Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered to be the most unified approach to designing fault-tolerant systems: any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases, and solving them requires some degree of synchronization between phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinite instances of tasks, where a task consists of a set of steps. There are several requirements for this problem. Simultaneous execution of steps by neighbors is allowed only if the steps are different. Every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in non-faulty environments, it is more challenging to solve in the presence of various types of faults. In this research, we present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any (including unfair) daemon. We also discuss various applications of the NTS problem.
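    A hedged sketch of the neighborhood-synchronization constraint (not the thesis algorithm): let each process keep a phase counter and advance to instance t+1 only when no neighbor is still behind, so neighboring processes never drift apart by more than one task instance.

```python
# Hedged sketch: a local phase rule that keeps every neighborhood working on
# adjacent task instances, under an arbitrary schedule of activations.

def can_advance(my_phase, neighbor_phases):
    """Guard: advance only if no neighbor is behind, so |phase(u) - phase(v)| <= 1
    holds across every edge."""
    return all(p >= my_phase for p in neighbor_phases)

phases = {0: 0, 1: 0, 2: 0}
adj = {0: [1], 1: [0, 2], 2: [1]}

# An arbitrary schedule of activation attempts chosen by the daemon:
for node in [0, 0, 1, 2, 1, 0, 2, 1]:
    if can_advance(phases[node], [phases[u] for u in adj[node]]):
        phases[node] += 1                     # execute one task instance, then advance
print(phases)                                 # {0: 2, 1: 3, 2: 2}: neighbors differ by <= 1
```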

    Notes on Theory of Distributed Systems

    Notes for the Yale course CPSC 465/565 Theory of Distributed Systems

    Distributed stabilizing data structures

    Distributed algorithms aim to achieve better performance than sequential algorithms in terms of (asymptotic) time complexity, while keeping or lowering the memory requirement (space complexity) per node. (In sequential algorithms, the memory requirement is that of the algorithm itself.) Self-stabilizing distributed algorithms aim to achieve performance comparable to that of non-stabilizing distributed algorithms when transient faults or arbitrary initialization cause the system to enter a state from which a non-stabilizing algorithm cannot continue to perform its task properly. Transient faults can affect an existing data structure and alter its data content. As a result, the data structure may lose its properties, and the operations defined over it will have unpredictable and undesirable results, making the data structure unusable. We present several self- or snap-stabilizing algorithms for particular data structures. We propose an optimal self-stabilizing distributed algorithm for simultaneously activating non-adjacent processes on an oriented chain (Algorithm SSDS). We use Algorithm SSDS to accomplish two tasks: local mutual exclusion and line sorting. We propose two uniform, self-stabilizing, deterministic protocols on oriented chains: a time and space optimal solution to the local mutual exclusion problem (Algorithm LMEC), and a space and (asymptotically) time optimal solution to the distributed sorting problem (Algorithm SORTc). We extend Algorithm SSDS, with some minor modifications, to an asynchronous oriented ring with a distinguished node, obtaining general self-stabilization for simultaneously activated non-adjacent processes in such a ring (Algorithm SSDSR). We use Algorithm SSDSR to accomplish two tasks: local resource allocation and ring sorting. We propose two uniform, self-stabilizing, deterministic protocols on oriented rings: a time and space optimal solution to the local resource allocation problem (Algorithm LRAR), and a space and (asymptotically) time optimal solution to the distributed sorting problem (Algorithm SORTr). We extend Algorithm SSDS to an asynchronous rooted tree, obtaining general self-stabilization for simultaneously activated non-adjacent processes in a rooted tree (Algorithm SSDST). We then give two applications of Algorithm SSDST: a time and space optimal solution to the local mutual exclusion problem (Algorithm LMET) and a space and (asymptotically) time optimal solution to the min-heap problem (Algorithm HEAP). In proving the time complexity of sorting, we introduce the notion of pseudo-time, similar to the logical time introduced by Lamport. Finally, we present the first snap-stabilizing distributed binary search tree (BST) algorithm. The proposed algorithm uses a heap algorithm (Algorithm HEAP) as a preprocessing step; this is also the first snap-stabilizing distributed solution to the heap problem.
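    As a flavor of how a single local rule can stabilize a structure on a chain, here is a hedged sketch of chain sorting (a generic illustration, not Algorithm SORTc): each process swaps with its right neighbor whenever the two values are out of order, and from any initial configuration the chain converges to the sorted state.

```python
# Hedged sketch: self-stabilizing-style sorting on an oriented chain.
# One guarded rule per process; a randomly scheduled daemon activates
# processes until no guard is enabled, i.e., the chain is sorted.
import random

def sort_on_chain(values, seed=0):
    vals = list(values)                       # arbitrary (possibly corrupted) initial state
    rng = random.Random(seed)
    swaps = 0
    while any(vals[i] > vals[i + 1] for i in range(len(vals) - 1)):
        i = rng.randrange(len(vals) - 1)      # daemon picks a process
        if vals[i] > vals[i + 1]:             # guard of the local rule
            vals[i], vals[i + 1] = vals[i + 1], vals[i]
            swaps += 1                        # each swap removes exactly one inversion
    return vals, swaps

print(sort_on_chain([5, 1, 4, 2, 3]))         # ([1, 2, 3, 4, 5], 6)
```

    Because every swap removes exactly one inversion, the number of moves is the inversion count of the initial state regardless of the daemon's schedule, which is the kind of schedule-independent argument (via pseudo-time) that the thesis formalizes.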