
    Fast and compact self-stabilizing verification, computation, and fault detection of an MST

    This paper demonstrates the usefulness of distributed local verification of proofs as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs, and utilizes it for improving the time complexity significantly, while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of Minimum Spanning Tree (MST). That is, we present algorithms that are both time and space efficient for both constructing an MST and for verifying it. This involves several parts that may be considered contributions in themselves. First, we generalize the notion of local proofs, trading off time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof labeling scheme which is memory optimal (i.e., O(log n) bits per node), and whose time complexity is O(log^2 n) in synchronous networks, or O(Δ log^3 n) in asynchronous ones, where Δ is the maximum degree of nodes. This answers an open problem posed by Awerbuch and Varghese (FOCS 1991). We also show that Ω(log n) time is necessary, even in synchronous networks. Another property is that if f faults occurred, then, within the required detection time above, they are detected by some node in the O(f log n) locality of each of the faults. Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof labeling scheme, and produces an efficient self-stabilizing algorithm. When used for MST, the transformer produces a memory-optimal self-stabilizing algorithm whose time complexity, namely O(n), is significantly better even than that of previous algorithms. (The time complexity of previous MST algorithms that used Ω(log^2 n) memory bits per node was O(n^2), and the time for optimal-space algorithms was O(n|E|).) Inherited from our proof labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction ended, then they are detected by some nodes within O(log^2 n) time in synchronous networks, or within O(Δ log^3 n) time in asynchronous ones, and (2) if f faults occurred, then, within the required detection time above, they are detected within the O(f log n) locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in the memory
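
    The paper's MST scheme is not reproduced in the abstract; as a rough illustration of the general idea of a proof labeling scheme, the sketch below (our own toy example, not the paper's construction) labels each node of a spanning tree with a root id, a parent pointer, and a distance to the root, and lets every node run a purely local consistency check against its neighbors, so that any corrupted label is flagged by at least one node.

```python
# Toy proof labeling scheme for a spanning tree (not the paper's MST scheme).
# Label per node: (root_id, parent, dist). Each node checks only its own
# label against its neighbors' labels; a single failed check signals a fault.
from collections import deque

def assign_labels(adj, root):
    """Build (root_id, parent, dist) labels by BFS from `root`."""
    labels = {root: (root, None, 0)}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in labels:
                labels[v] = (root, u, labels[u][2] + 1)
                q.append(v)
    return labels

def local_check(v, adj, labels):
    """True iff node v's label is locally consistent with its neighbors."""
    rid, parent, dist = labels[v]
    # All neighbors must claim the same root id.
    if any(labels[u][0] != rid for u in adj[v]):
        return False
    if parent is None:                      # v claims to be the root
        return v == rid and dist == 0
    # The parent must be a neighbor lying one step closer to the root.
    return parent in adj[v] and labels[parent][2] == dist - 1

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
labels = assign_labels(adj, root=0)
print(all(local_check(v, adj, labels) for v in adj))         # True
labels[3] = (0, 3, 5)                                        # corrupt one label
print([v for v in adj if not local_check(v, adj, labels)])   # [3]: fault detected
```

    Each label above takes O(log n) bits, which is the space regime the abstract refers to, but none of the paper's detection-time or fault-locality guarantees are modeled here.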

    Self-stabilizing Numerical Iterative Computation

    Many challenging tasks in sensor networks, including sensor calibration, ranking of nodes, monitoring, event region detection, collaborative filtering, collaborative signal processing, etc., can be formulated as the problem of solving a linear system of equations. Several recent works propose different distributed algorithms for solving these problems, usually by using linear iterative numerical methods. In this work, we extend the settings of the above approaches by adding another dimension to the problem. Specifically, we are interested in self-stabilizing algorithms that continuously run and converge to a solution from any initial state. This aspect of the problem is highly important due to the dynamic nature of the network and the frequent changes in the measured environment. In this paper, we link together algorithms from two different domains. On the one hand, we use the rich linear algebra literature on linear iterative methods for solving systems of linear equations, which are naturally distributed and have rapid convergence properties. On the other hand, we are interested in self-stabilizing algorithms, where the input to the computation is constantly changing, and we would like the algorithms to converge from any initial state. We propose a simple novel method, called \syncAlg, as a self-stabilizing variant of the linear iterative methods. We prove that under mild conditions the self-stabilizing algorithm converges to the desired result. We further extend these results to handle the asynchronous case. As a case study, we discuss the sensor calibration problem and provide simulation results to support the applicability of our approach
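
    The abstract does not spell out \syncAlg; as a hint of why linear iterative methods fit the self-stabilizing setting, the toy sketch below (ours, not the paper's algorithm) runs a plain Jacobi iteration on a strictly diagonally dominant system, which converges to the same solution from any initial vector, i.e., from an arbitrary starting state.

```python
# Sketch: Jacobi iteration for Ax = b. For a strictly diagonally dominant A
# it converges from ANY starting vector, mirroring the self-stabilization
# requirement of convergence from an arbitrary initial configuration.
def jacobi(A, b, x, iters=100):
    n = len(b)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
b = [1.0, 2.0, 3.0]

# Two wildly different "initial states" end up at the same fixed point.
x1 = jacobi(A, b, [0.0, 0.0, 0.0])
x2 = jacobi(A, b, [1e6, -1e6, 42.0])
print(max(abs(a - c) for a, c in zip(x1, x2)))   # ~0: same solution
```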

    Parameterized synthesis of self-stabilizing protocols in symmetric networks

    Self-stabilization in distributed systems is a technique to guarantee convergence to a set of legitimate states without external intervention when a transient fault or bad initialization occurs. Recently, there has been a surge of effort in designing techniques for automated synthesis of self-stabilizing algorithms that are correct by construction. Most of these techniques, however, are not parameterized, meaning that they can only synthesize a solution for a fixed and predetermined number of processes. In this paper, we report a breakthrough in parameterized synthesis of self-stabilizing algorithms in symmetric networks, including ring, line, mesh, and torus. First, we develop cutoffs that guarantee (1) closure in legitimate states, and (2) deadlock-freedom outside the legitimate states. We also develop a sufficient condition for convergence in self-stabilizing systems. Since some of our cutoffs grow with the size of the local state space of processes, scalability of the synthesis procedure is still a problem. We address this problem by introducing a novel SMT-based technique for counterexample-guided synthesis of self-stabilizing algorithms in symmetric networks. We have fully implemented our technique and successfully synthesized solutions to maximal matching, three-coloring, and maximal independent set problems for ring and line topologies
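
    The paper's SMT-based, counterexample-guided synthesis is not shown here; as a rough stand-in for the cutoff idea, the brute-force sketch below (ours, with an assumed ring size playing the role of a cutoff) exhaustively verifies that a candidate symmetric rule for maximal independent set is self-stabilizing on a small ring under a central daemon: legitimate configurations are closed (and silent), no illegitimate configuration is deadlocked, and no execution can cycle while avoiding legitimacy.

```python
# Exhaustive self-stabilization check of one symmetric rule on a fixed ring.
# (A brute-force stand-in for a cutoff-based check; not the paper's SMT encoding.)
from itertools import product

N = 5                                    # assumed "cutoff" ring size
RING = [((i - 1) % N, (i + 1) % N) for i in range(N)]   # neighbors of node i

def enabled(cfg, v):
    """Candidate rule: enter if OUT with no IN neighbor, leave if IN with an IN neighbor."""
    in_nbr = any(cfg[u] for u in RING[v])
    return (cfg[v] == 0 and not in_nbr) or (cfg[v] == 1 and in_nbr)

def step(cfg, v):
    new = list(cfg)
    new[v] = 1 - new[v]
    return tuple(new)

def legitimate(cfg):
    """IN nodes form an independent set and every OUT node has an IN neighbor."""
    indep = all(not (cfg[v] and cfg[u]) for v in range(N) for u in RING[v])
    maximal = all(cfg[v] or any(cfg[u] for u in RING[v]) for v in range(N))
    return indep and maximal

# Closure and deadlock-freedom over all 2^N configurations.
ok = True
for cfg in product((0, 1), repeat=N):
    moves = [v for v in range(N) if enabled(cfg, v)]
    if legitimate(cfg):
        ok = ok and not moves            # legitimate configurations are silent
    else:
        ok = ok and bool(moves)          # no deadlock outside legitimate ones

# Convergence: no cycle of moves that stays within illegitimate configurations.
WHITE, GRAY, BLACK = 0, 1, 2
color = {}
def acyclic_from(cfg):
    color[cfg] = GRAY
    for v in range(N):
        if enabled(cfg, v):
            nxt = step(cfg, v)
            if legitimate(nxt):
                continue
            if color.get(nxt, WHITE) == GRAY:
                return False             # cycle avoiding legitimate states
            if color.get(nxt, WHITE) == WHITE and not acyclic_from(nxt):
                return False
    color[cfg] = BLACK
    return True

converges = all(legitimate(c) or color.get(c, WHITE) == BLACK or acyclic_from(c)
                for c in product((0, 1), repeat=N))
print(ok and converges)                  # True: this rule stabilizes on the ring
```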

    Fast and Compact Distributed Verification and Self-Stabilization of a DFS Tree

    We present algorithms for distributed verification and silent stabilization of a DFS (Depth-First Search) spanning tree of a connected network. Computing and maintaining such a DFS tree is an important task, e.g., for constructing efficient routing schemes. Our algorithm improves upon previous work in various ways. Comparable previous work has space and time complexities of O(n log Δ) bits per node and O(nD) respectively, where Δ is the highest degree of a node, n is the number of nodes and D is the diameter of the network. In contrast, our algorithm has a space complexity of O(log n) bits per node, which is optimal for silent-stabilizing spanning trees, and runs in O(n) time. In addition, our solution is modular since it utilizes the distributed verification algorithm as an independent subtask of the overall solution. It is possible to use the verification algorithm as a stand-alone task or as a subtask in another algorithm. To demonstrate the simplicity of constructing efficient DFS algorithms using the modular approach, we also present a (non-silent) self-stabilizing DFS token circulation algorithm for general networks based on our silent-stabilizing DFS tree. The complexities of this token circulation algorithm are comparable to the known ones
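
    The paper's O(log n)-bit verification scheme is not described in the abstract; the sketch below is only a toy, centralized illustration of locally checking a DFS tree: label each node with its DFS entry/exit times and let every node check that each neighbor's interval is nested with its own, since in a DFS of an undirected graph every edge joins an ancestor and a descendant.

```python
# Toy local verification of a DFS tree via entry/exit intervals
# (not the paper's distributed, silent-stabilizing scheme).
def dfs_labels(adj, root):
    tin, tout, parent, t = {}, {}, {root: None}, [0]
    def dfs(u):
        tin[u] = t[0]; t[0] += 1
        for v in adj[u]:
            if v not in tin:
                parent[v] = u
                dfs(v)
        tout[u] = t[0]; t[0] += 1
    dfs(root)
    return parent, tin, tout

def nested(a, b, tin, tout):
    """True iff a's interval contains b's or vice versa."""
    return (tin[a] < tin[b] and tout[b] < tout[a]) or \
           (tin[b] < tin[a] and tout[a] < tout[b])

def local_check(v, adj, parent, tin, tout):
    p = parent[v]
    if p is not None:
        # The parent must be a neighbor whose interval contains v's interval.
        if p not in adj[v] or not (tin[p] < tin[v] and tout[v] < tout[p]):
            return False
    # Every incident edge must join an ancestor/descendant pair (no cross edges).
    return all(nested(u, v, tin, tout) for u in adj[v])

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}   # a 4-cycle
parent, tin, tout = dfs_labels(adj, root=0)
print(all(local_check(v, adj, parent, tin, tout) for v in adj))  # True
tin[2], tout[2] = 99, 100                            # corrupt one label
print(all(local_check(v, adj, parent, tin, tout) for v in adj))  # False
```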

    Optimal Space Lower Bound for Deterministic Self-Stabilizing Leader Election Algorithms

    Given a boolean predicate P on labeled networks (e.g., proper coloring, leader election, etc.), a self-stabilizing algorithm for P is a distributed algorithm that can start from any initial configuration of the network (i.e., every node has an arbitrary value assigned to each of its variables), and eventually converge to a configuration satisfying P. It is known that leader election does not have a deterministic self-stabilizing algorithm using a constant-size register at each node, i.e., for some networks, some of their nodes must have registers whose sizes grow with the size n of the networks. On the other hand, it is also known that leader election can be solved by a deterministic self-stabilizing algorithm using registers of O(log log n) bits per node in any n-node bounded-degree network. We show that this latter space complexity is optimal. Specifically, we prove that every deterministic self-stabilizing algorithm solving leader election must use Ω(log log n)-bit registers per node in some n-node networks. In addition, we show that our lower bounds go beyond leader election and apply to all problems that cannot be solved by anonymous algorithms

    Self-Stabilizing Token Distribution with Constant-Space for Trees

    Self-stabilizing and silent distributed algorithms for token distribution in rooted tree networks are given. Initially, each process holds at most l tokens. Our goal is to distribute the tokens in the whole network so that every process holds exactly k tokens. In the initial configuration, the total number of tokens in the network may not be equal to nk, where n is the number of processes in the network. The root process is given the ability to create a new token or remove a token from the network. We aim to minimize the convergence time, the number of token moves, and the space complexity. A self-stabilizing token distribution algorithm that converges within O(nl) asynchronous rounds and needs Θ(nhε) redundant (or unnecessary) token moves is given, where ε = min(k, l-k) and h is the height of the tree network. Two novel ideas to reduce the number of redundant token moves are presented. One reduces the number of redundant token moves to O(nh) without any additional costs, while the other reduces the number of redundant token moves to O(n) but increases the convergence time to O(nhl). All of the given algorithms use constant space at each process and in each link register
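
    The constant-space rules themselves are not given in the abstract; the sketch below only computes, in a centralized way, the target of the distribution: the net number of tokens that must cross each tree edge so that every process ends up with exactly k tokens, with the root creating or removing the global surplus or deficit. All names are ours.

```python
# Centralized sketch of the target of token distribution on a rooted tree
# (not the paper's constant-space self-stabilizing algorithm).
def edge_flows(children, tokens, k, root):
    """Return {v: net tokens sent from v up to its parent} (negative = sent down),
    plus the number of tokens the root must create (>0) or remove (<0)."""
    flows = {}
    def surplus(v):
        s = tokens[v] - k + sum(surplus(c) for c in children.get(v, []))
        flows[v] = s
        return s
    root_residual = surplus(root)
    del flows[root]                      # the root has no parent edge
    return flows, -root_residual

# Example: root 0 with children 1 and 2; node 2 has child 3; target k = 2.
children = {0: [1, 2], 2: [3]}
tokens = {0: 5, 1: 0, 2: 1, 3: 3}
flows, created = edge_flows(children, tokens, 2, root=0)
print(flows)    # {1: -2, 3: 1, 2: 0}: node 1 needs 2 tokens from its parent, etc.
print(created)  # -1: total is 9 but only 4*2 = 8 are needed, so the root removes one
```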

    Making Self-Stabilizing Algorithms for Any Locally Greedy Problem

    Self-stabilizing algorithms are a way to deal with network dynamicity, as the system will update itself after a network change (addition or removal of nodes or edges), as long as such changes are not too frequent. We propose an automatic transformation of synchronous distributed algorithms that solve locally greedy and mendable problems into self-stabilizing algorithms in anonymous networks. Mendable problems are a generalization of greedy problems where any partial solution may be transformed (instead of completed) into a global solution: every time we extend the partial solution, we are allowed to change the previous partial solution up to a given distance. Locally here means that to extend a solution for a node, we need to look only at a constant distance from it. In order to do this, we propose the first explicit self-stabilizing algorithm computing a (k, k-1)-ruling set (i.e., a "maximal independent set at distance k"). By combining this technique multiple times, we compute a distance-K coloring of the graph. With this coloring we can finally simulate LOCAL model algorithms running in a constant number of rounds, using the colors as unique identifiers. Our algorithms work under the Gouda daemon, which is similar to a probabilistic daemon: if an event should eventually happen, it will occur
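
    A (k, k-1)-ruling set asks for selected nodes that are pairwise at distance at least k, while every node is within distance k-1 of some selected node. The sketch below is a minimal centralized greedy for this object (not the paper's self-stabilizing construction), just to make the definition concrete.

```python
# Centralized greedy for a (k, k-1)-ruling set: keep a node if it is at
# distance >= k from every node kept so far. Every discarded node is then
# within distance k-1 of a kept node, so domination holds by construction.
from collections import deque

def bfs_dist(adj, src, limit):
    """Distances from src, truncated at `limit` hops."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == limit:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def ruling_set(adj, k):
    chosen = []
    for v in adj:
        near = bfs_dist(adj, v, k - 1)
        if not any(u in near for u in chosen):   # no chosen node within k-1 of v
            chosen.append(v)
    return chosen

# A path 0-1-...-7 with k = 3: chosen nodes are pairwise >= 3 apart and
# every node is within 2 hops of a chosen one.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 7} for i in range(8)}
print(ruling_set(path, 3))   # e.g. [0, 3, 6]
```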

    A self-stabilizing algorithm for finding weighted centroid in trees

    In this paper we present a modification of the Blair and Manne algorithm for finding the center of a tree network in a distributed, self-stabilizing environment. Their algorithm finds an n/2-separator of a tree. Our algorithm finds a weighted centroid, which is a direct generalization of the former for tree networks with positive weights on the nodes. The time complexity of both algorithms is O(n^2), where n is the number of nodes in the network
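
    For tree networks with positive node weights, a weighted centroid is a node whose removal leaves no component of weight more than half of the total weight (the n/2-separator being the unweighted special case). The sketch below is a centralized illustration of this definition (the paper's algorithm is distributed and self-stabilizing); the names and the walk-towards-the-heavy-subtree strategy are ours.

```python
# Centralized weighted centroid of a tree: compute subtree weights, then walk
# from the root towards the heaviest subtree while that subtree exceeds half
# of the total weight. (Not the paper's distributed self-stabilizing algorithm.)
def weighted_centroid(adj, w, root=None):
    root = root if root is not None else next(iter(adj))
    total = sum(w.values())
    sub = dict(w)                         # subtree weights, filled below
    order, parent, stack = [], {root: None}, [root]
    while stack:                          # iterative DFS to get a traversal order
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if v != parent[u]:
                parent[v] = u
                stack.append(v)
    for u in reversed(order):             # accumulate weights bottom-up
        if parent[u] is not None:
            sub[parent[u]] += sub[u]
    v = root
    while True:
        heavy = max((c for c in adj[v] if c != parent[v]),
                    key=lambda c: sub[c], default=None)
        if heavy is None or sub[heavy] <= total / 2:
            return v                      # all remaining components weigh <= total/2
        v = heavy

adj = {0: {1}, 1: {0, 2, 3}, 2: {1}, 3: {1}}
w = {0: 1, 1: 1, 2: 5, 3: 2}              # total weight 9
print(weighted_centroid(adj, w))          # 2: removing it leaves weight 4 <= 9/2
```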

    Self-Stabilizing Distributed Cooperative Reset

    Self-stabilization is a versatile fault-tolerance approach that characterizes the ability of a system to eventually resume correct behavior after any finite number of transient faults. In this paper, we propose a self-stabilizing reset algorithm working in anonymous networks. This algorithm resets the network in a distributed, non-centralized manner, i.e., it is multi-initiator, as each process detecting an inconsistency may initiate a reset. It is also cooperative in the sense that it coordinates concurrent reset executions in order to gain efficiency. Our approach is general, since our reset algorithm makes it possible to build self-stabilizing solutions for various problems and settings. As a matter of fact, we show that it applies to both static and dynamic specifications, since we propose efficient self-stabilizing reset-based algorithms for the (1-minimal) (f, g)-alliance problem (a generalization of the dominating set problem) in identified networks and for the unison problem in anonymous networks. These two instantiations enhance the state of the art: in the former case, our solution is more general than previous ones, while in the latter case, the complexity of our unison algorithm is better than that of previous solutions in the literature
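
    An (f, g)-alliance is a set A such that every node outside A has at least f(v) neighbors in A and every node inside A has at least g(v) neighbors in A; with f = 1 and g = 0 this is exactly a dominating set. The small checker below (ours, not the paper's reset-based algorithm) only verifies this definition and its 1-minimality.

```python
# Definition checker for the (f, g)-alliance property and its 1-minimality
# (not the paper's self-stabilizing construction).
def is_fg_alliance(adj, A, f, g):
    """A is an (f,g)-alliance iff every node outside A has >= f(v) neighbors in A
    and every node inside A has >= g(v) neighbors in A."""
    for v, nbrs in adj.items():
        in_A = sum(1 for u in nbrs if u in A)
        need = g(v) if v in A else f(v)
        if in_A < need:
            return False
    return True

def is_1_minimal(adj, A, f, g):
    """No single node can be removed from A without breaking the property."""
    return is_fg_alliance(adj, A, f, g) and \
           all(not is_fg_alliance(adj, A - {v}, f, g) for v in A)

# With f = 1 and g = 0, an (f, g)-alliance is just a dominating set.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
f = lambda v: 1
g = lambda v: 0
print(is_fg_alliance(adj, {2}, f, g))      # True: {2} dominates the graph
print(is_1_minimal(adj, {0, 2}, f, g))     # False: {2} alone already suffices
```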