
    Fast and Compact Distributed Verification and Self-Stabilization of a DFS Tree

    We present algorithms for distributed verification and silent-stabilization of a DFS (Depth-First Search) spanning tree of a connected network. Computing and maintaining such a DFS tree is an important task, e.g., for constructing efficient routing schemes. Our algorithm improves upon previous work in various ways. Comparable previous work has space and time complexities of $O(n\log \Delta)$ bits per node and $O(nD)$ respectively, where $\Delta$ is the highest degree of a node, $n$ is the number of nodes, and $D$ is the diameter of the network. In contrast, our algorithm has a space complexity of $O(\log n)$ bits per node, which is optimal for silent-stabilizing spanning trees, and runs in $O(n)$ time. In addition, our solution is modular since it utilizes the distributed verification algorithm as an independent subtask of the overall solution. It is possible to use the verification algorithm as a stand-alone task or as a subtask in another algorithm. To demonstrate the simplicity of constructing efficient DFS algorithms using the modular approach, we also present a (non-silent) self-stabilizing DFS token circulation algorithm for general networks based on our silent-stabilizing DFS tree. The complexities of this token circulation algorithm are comparable to the known ones.
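    As a rough illustration of why a DFS spanning tree admits compact, locally checkable certificates, the sketch below checks a claimed DFS tree against hypothetical per-node interval labels (DFS entry/exit times, O(log n) bits each). This is a centralized Python sketch of the local-checkability idea only; it is not the paper's distributed verification algorithm, and the label scheme is an assumption.

```python
# Hypothetical sketch: checking a claimed DFS spanning tree using per-node
# interval labels (in_v, out_v), i.e. DFS entry/exit times, which fit in
# O(log n) bits each. NOT the paper's distributed algorithm; it only shows why
# a DFS tree admits small, locally verifiable certificates. (Global checks such
# as "there is exactly one root" are omitted for brevity.)

def locally_consistent(graph, parent, interval):
    """graph: {v: set of neighbours}; parent: {v: parent node, None at the root};
    interval: {v: (in_v, out_v)} claimed DFS entry/exit times."""
    for v, nbrs in graph.items():
        in_v, out_v = interval[v]
        if in_v >= out_v:                      # malformed label
            return False
        p = parent[v]
        if p is not None:
            if p not in nbrs:                  # tree edges must be graph edges
                return False
            in_p, out_p = interval[p]
            if not (in_p < in_v and out_v < out_p):   # child nests in parent
                return False
        for u in nbrs:
            in_u, out_u = interval[u]
            # DFS has no cross edges: every neighbour is an ancestor or a
            # descendant, so the two intervals must be nested.
            if not ((in_v < in_u and out_u < out_v) or
                    (in_u < in_v and out_v < out_u)):
                return False
    return True

# A path 0 - 1 - 2 with the DFS rooted at 0:
g = {0: {1}, 1: {0, 2}, 2: {1}}
print(locally_consistent(g, {0: None, 1: 0, 2: 1},
                         {0: (0, 5), 1: (1, 4), 2: (2, 3)}))   # True
```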

    Fair and Reliable Self-Stabilizing Communication

    We assume a link-register communication model under read/write atomicity, where every process can read from but cannot write into its neighbours' registers. The paper presents two self-stabilizing protocols for basic fair and reliable link communication primitives. The first primitive guarantees that any process writes a new value into its register(s) only after all its neighbours have read the previous value, whatever the initial scheduling of processes' actions. The second primitive implements a weak rendezvous communication mechanism using an alternating bit protocol: whenever a process consecutively writes $n$ values (possibly the same ones) into a register, each neighbour is guaranteed to read each value from the register at least once. Both protocols are self-stabilizing and run in asynchronous arbitrary networks. The goal of the paper is to handle each primitive in a separate procedure, which can be used as a black box in more involved self-stabilizing protocols.
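    The first primitive is essentially a write/acknowledge handshake. Below is a minimal, non-stabilizing Python sketch of that idea, under the assumption of a simple shared-object simulation of the link registers: a writer flips a bit whenever it publishes a new value and writes again only once every neighbour has copied the bit back, so no value is overwritten before everyone has read it. The class and field names are illustrative, not the paper's notation.

```python
# Minimal, non-self-stabilizing sketch of the write/acknowledge handshake
# behind the first primitive; the paper's protocols additionally stabilize
# from arbitrary initial register contents, which this toy does not attempt.

class Writer:
    def __init__(self, neighbours):
        self.reg = {"value": None, "bit": 0}   # register read by neighbours
        self.neighbours = neighbours

    def try_write(self, value):
        # Publish a new value only once every neighbour's ack matches our bit.
        if all(n.ack == self.reg["bit"] for n in self.neighbours):
            self.reg = {"value": value, "bit": 1 - self.reg["bit"]}
            return True
        return False

class Reader:
    def __init__(self):
        self.ack = 0
        self.last_seen = None

    def step(self, writer_reg):
        if writer_reg["bit"] != self.ack:       # a fresh value is available
            self.last_seen = writer_reg["value"]
            self.ack = writer_reg["bit"]        # acknowledge by copying the bit

r1, r2 = Reader(), Reader()
w = Writer([r1, r2])
assert w.try_write("a")          # both acks match the initial bit
assert not w.try_write("b")      # blocked until r1 and r2 have read "a"
r1.step(w.reg); r2.step(w.reg)
assert w.try_write("b")
```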

    Robust network computation

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 91-98). In this thesis, we present various models of distributed computation and algorithms for these models. The underlying theme is to come up with fast algorithms that can tolerate faults in the underlying network. We begin with the classical message-passing model of computation, surveying many known results. We give a new, universally optimal, edge-biconnectivity algorithm for the classical model. We also give a near-optimal sublinear algorithm for identifying bridges, when all nodes are activated simultaneously. After discussing some ways in which the classical model is unrealistic, we survey known techniques for adapting the classical model to the real world. We describe a new balancing model of computation. The intent is that algorithms in this model should be automatically fault-tolerant. Existing algorithms that can be expressed in this model are discussed, including ones for clustering, maximum flow, and synchronization. We discuss the use of agents in our model, and give new agent-based algorithms for census and biconnectivity. Inspired by the balancing model, we look at two problems in more depth. First, we give matching upper and lower bounds on the time complexity of the census algorithm, and we show how the census algorithm can be used to name nodes uniquely in a faulty network. Second, we consider using discrete harmonic functions as a computational tool. These functions are a natural exemplar of the balancing model. We prove new results concerning the stability and convergence of discrete harmonic functions, and describe a method which we call Eulerization for speeding up convergence. By David Pritchard. M.Eng.
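    The discrete harmonic functions mentioned above have a simple local characterization: each non-boundary node's value equals the average of its neighbours' values. The following Python sketch (an illustration under that standard definition, not code from the thesis) computes such a function by Jacobi-style relaxation, which is the kind of neighbour-averaging a balancing-style algorithm could perform.

```python
# Illustrative sketch: computing a discrete harmonic function by Jacobi-style
# relaxation. Boundary nodes keep fixed values; every interior node repeatedly
# averages its neighbours, converging to the harmonic extension. The toy graph,
# iteration cap, and tolerance are arbitrary choices.

def harmonic(graph, boundary, iters=10000, tol=1e-9):
    """graph: {v: [neighbours]}; boundary: {v: fixed value}."""
    h = {v: boundary.get(v, 0.0) for v in graph}
    for _ in range(iters):
        new, delta = dict(h), 0.0
        for v, nbrs in graph.items():
            if v in boundary:
                continue                          # boundary values never change
            avg = sum(h[u] for u in nbrs) / len(nbrs)
            delta = max(delta, abs(avg - h[v]))
            new[v] = avg
        h = new
        if delta < tol:
            break
    return h

# Path a - b - c - d with the endpoints pinned to 0 and 1:
print(harmonic({"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]},
               {"a": 0.0, "d": 1.0}))   # b ~ 1/3, c ~ 2/3
```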

    Fast wait-free symmetry breaking in distributed systems

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1993. Includes bibliographical references (leaves 32-33). By Mark Anthony Shawn Smith. M.S.

    Trade-off between Time, Space, and Workload: the case of the Self-stabilizing Unison

    We present a self-stabilizing algorithm for the (asynchronous) unison problem which achieves an efficient trade-off between time, workload, and space in a weak model. Precisely, our algorithm is defined in the atomic-state model and works in anonymous networks in which even local ports are unlabeled. It makes no assumption on the daemon and thus stabilizes under the weakest one: the distributed unfair daemon. In an $n$-node network of diameter $D$, and assuming a period $B \geq 2D+2$, our algorithm only requires $O(\log B)$ bits per node to achieve full polynomiality, as it stabilizes in at most $2D-2$ rounds and $O(\min(n^2 B, n^3))$ moves. In particular, and to the best of our knowledge, it is the first self-stabilizing unison for arbitrary anonymous networks achieving an asymptotically optimal stabilization time in rounds while using bounded memory at each node. Finally, we show that our solution allows synchronous self-stabilizing algorithms to be simulated efficiently in an asynchronous environment. This provides a new state-of-the-art algorithm solving both the leader election and the spanning tree construction problems in any identified connected network which, to the best of our knowledge, beats all existing solutions in the literature. (arXiv admin note: substantial text overlap with arXiv:2307.0663.)
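    For intuition, asynchronous unison asks that neighbouring clocks modulo the period B never drift by more than one step while every clock keeps ticking. The Python sketch below simulates only a common formulation of that safety rule in an already-legitimate configuration (a node ticks when no neighbour lags behind it); the paper's algorithm is self-stabilizing and also recovers from arbitrary initial clock values, which this toy does not attempt. The toy network and schedule are assumptions.

```python
# Minimal sketch (not the paper's algorithm) of the asynchronous unison safety
# rule: a node may tick its clock modulo B only when no neighbour lags behind
# it, so neighbouring clocks never drift by more than one step.

import random

B = 8                                   # period; the paper assumes B >= 2D + 2
graph = {0: [1], 1: [0, 2], 2: [1]}     # toy path network, diameter D = 2
clock = {v: 0 for v in graph}

def behind(p, q):
    """True if q's clock is one step behind p's, in the cyclic order mod B."""
    return (clock[p] - clock[q]) % B == 1

def enabled(p):
    return not any(behind(p, q) for q in graph[p])

random.seed(0)
for _ in range(100):                    # an arbitrary asynchronous schedule
    p = random.choice(list(graph))
    if enabled(p):
        clock[p] = (clock[p] + 1) % B
    # invariant: neighbouring clocks differ by at most one step (mod B)
    assert all((clock[u] - clock[v]) % B in (0, 1, B - 1)
               for u in graph for v in graph[u])
print(clock)
```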

    Snap-Stabilization in Message-Passing Systems

    In this paper, we tackle the open problem of snap-stabilization in message-passing systems. Snap-stabilization is an appealing approach to designing protocols that withstand transient faults. Compared to the well-known self-stabilizing approach, snap-stabilization guarantees that the effect of faults is contained immediately after faults cease to occur. Our contribution is twofold: we show that (1) snap-stabilization is impossible for a wide class of problems if we consider networks with finite yet unbounded channel capacity; and (2) snap-stabilization becomes possible in the same setting if we assume bounded-capacity channels. We propose three snap-stabilizing protocols working in fully-connected networks. Our work opens exciting new research perspectives, as it enables the snap-stabilizing paradigm to be implemented in actual networks.

    Asynchronous neighborhood task synchronization

    Faults are likely to occur in distributed systems. The motivation for designing self-stabilizing systems is to be able to automatically recover from a faulty state. As per Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered to be the most unified approach to designing fault-tolerant systems. Any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases. Solving these applications requires some degree of synchronization of phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinite instances of tasks, where a task consists of a set of steps. There are several requirements for this problem: simultaneous execution of steps by neighbors is allowed only if the steps are different, and every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in non-faulty environments, it is more challenging to solve in the presence of various types of faults. In this research, we present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any (including unfair) daemon. We also discuss various applications of the NTS problem.
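    To make the two requirements concrete, the hypothetical Python predicate below checks a single configuration against a straightforward reading of the NTS specification: neighbouring processes must be working on the same task instance, and if two neighbours take a step at the same time, the steps must differ. The field names are assumptions; this is not the thesis' algorithm.

```python
# Hypothetical check of one configuration against the NTS requirements as
# stated in the abstract; field names (instance, step, active) are assumptions.

def nts_consistent(graph, instance, step, active):
    """graph: {p: set of neighbours}; instance[p]: task instance p is executing;
    step[p]: current step of p; active[p]: True if p executes a step now."""
    for p, nbrs in graph.items():
        for q in nbrs:
            # Neighborhood synchronization: same task instance everywhere.
            if instance[p] != instance[q]:
                return False
            # Concurrent steps of neighbours must be different steps.
            if active[p] and active[q] and step[p] == step[q]:
                return False
    return True

# Example: two neighbours on instance 3, concurrently executing different steps.
print(nts_consistent({0: {1}, 1: {0}},
                     {0: 3, 1: 3}, {0: "a", 1: "b"}, {0: True, 1: True}))  # True
```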