
    Polynomial-Time Space-Optimal Silent Self-Stabilizing Minimum-Degree Spanning Tree Construction

    Motivated by applications to sensor networks, as well as to many other areas, this paper studies the construction of minimum-degree spanning trees. We consider the classical node-register state model, with a weakly fair scheduler, and we present a space-optimal \emph{silent} self-stabilizing construction of minimum-degree spanning trees in this model. Computing a spanning tree with minimum degree is NP-hard. Therefore, we actually focus on constructing a spanning tree whose degree is within one of the optimal. Our algorithm uses registers of $O(\log n)$ bits, converges in a polynomial number of rounds, and performs polynomial-time computation at each node. Specifically, the algorithm constructs and stabilizes on a special class of spanning trees with degree at most $OPT+1$. Indeed, we prove that, unless NP = coNP, there is no proof-labeling scheme involving polynomial-time computation at each node for the whole family of spanning trees with degree at most $OPT+1$. To our knowledge, this is the first example of a compact silent self-stabilizing algorithm constructing, and stabilizing on, a subset of optimal solutions to a natural problem for which there is no time-efficient proof-labeling scheme. On our way to designing our algorithm, we establish a set of independent results that may be of interest on their own. In particular, we describe a new space-optimal silent self-stabilizing spanning tree construction, stabilizing on \emph{any} spanning tree in $O(n)$ rounds, and using just \emph{one} additional bit compared to the size of the labels used to certify trees. We also design a silent loop-free self-stabilizing algorithm for transforming a tree into another tree. Last but not least, we provide a silent self-stabilizing algorithm for computing and certifying the labels of an NCA-labeling scheme.
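
    A minimal sketch (in Python, and not the paper's scheme) of the kind of tree certificate alluded to above: the classic distance-to-root labeling, in which every node stores a parent pointer and its hop distance to a distinguished root, and verification only compares a node's label with its neighbors' labels. All nodes accept if and only if the parent pointers form a spanning tree rooted at the given root.

        def node_accepts(v, labels, neighbors, root):
            parent, dist = labels[v]
            if v == root:
                return parent is None and dist == 0
            # the parent must be a neighbor, and distances must strictly decrease toward the root
            return parent in neighbors[v] and dist >= 1 and labels[parent][1] == dist - 1

        def tree_certified(labels, neighbors, root):
            return all(node_accepts(v, labels, neighbors, root) for v in neighbors)

        # A path 0 - 1 - 2 rooted at node 0, with the correct distance labels.
        neighbors = {0: [1], 1: [0, 2], 2: [1]}
        labels = {0: (None, 0), 1: (0, 1), 2: (1, 2)}
        print(tree_certified(labels, neighbors, 0))   # True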

    Memory lower bounds for deterministic self-stabilization

    In the context of self-stabilization, a \emph{silent} algorithm guarantees that the register of every node does not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent algorithm must use a memory of $\Omega(\log n)$ bits per register in $n$-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of $\Omega(\log^2 n)$ bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim at measuring how much gain in terms of memory can be expected by using arbitrary self-stabilizing algorithms, not necessarily silent. To our knowledge, the only known lower bound on the memory requirement for general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing a tight lower bound of $\Theta(\log \Delta + \log\log n)$ bits per register for self-stabilizing algorithms solving $(\Delta+1)$-coloring or constructing a spanning tree in networks of maximum degree $\Delta$. The lower bound of $\Omega(\log\log n)$ bits per register also holds for leader election.
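
    For concreteness, the following toy sketch (not one of the algorithms whose memory the paper bounds) shows the textbook recoloring rule behind $(\Delta+1)$-coloring, simulated under a sequential scheduler: an enabled node, i.e., one sharing its color with a neighbor, recolors itself with the smallest color absent from its neighborhood.

        def stabilize_coloring(neighbors, color):
            delta = max(len(ns) for ns in neighbors.values())
            palette = range(delta + 1)                     # Delta + 1 colors always suffice
            changed = True
            while changed:
                changed = False
                for v, ns in neighbors.items():            # the scheduler activates nodes one by one
                    if any(color[v] == color[u] for u in ns):
                        used = {color[u] for u in ns}
                        color[v] = min(c for c in palette if c not in used)
                        changed = True
            return color

        neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}      # a triangle, Delta = 2
        print(stabilize_coloring(neighbors, {0: 0, 1: 0, 2: 0}))   # {0: 1, 1: 2, 2: 0}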

    Survey of Distributed Decision

    We survey the recent distributed computing literature on checking whether a given distributed system configuration satisfies a given Boolean predicate, i.e., whether the configuration is legal or illegal w.r.t. that predicate. We consider classical distributed computing environments, including mostly synchronous fault-free network computing (LOCAL and CONGEST models), but also asynchronous crash-prone shared-memory computing (WAIT-FREE model) and mobile computing (FSYNC model).
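
    The decision mechanism shared by these models can be sketched in a few lines: every node inspects only its own state and its neighbors' states and outputs a Boolean verdict, and the configuration is declared legal if and only if all verdicts are positive. The predicate used below (agreement on a leader identifier, which is locally checkable on a connected network) is only an illustrative choice, not taken from the survey.

        def local_verdict(v, leader, neighbors):
            return all(leader[v] == leader[u] for u in neighbors[v])

        def configuration_is_legal(leader, neighbors):
            return all(local_verdict(v, leader, neighbors) for v in neighbors)

        path = {0: [1], 1: [0, 2], 2: [1]}                        # a path on three nodes
        print(configuration_is_legal({0: 7, 1: 7, 2: 7}, path))   # True
        print(configuration_is_legal({0: 7, 1: 7, 2: 4}, path))   # False: nodes 1 and 2 reject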

    Making local algorithms efficiently self-stabilizing in arbitrary asynchronous environments

    This paper deals with the trade-off between time, workload, and versatility in self-stabilization, a general and lightweight fault-tolerant concept in distributed computing. In this context, we propose a transformer that provides an asynchronous silent self-stabilizing version Trans(AlgI) of any terminating synchronous algorithm AlgI. The transformed algorithm Trans(AlgI) works under the distributed unfair daemon and is efficient both in moves and in rounds. Our transformer makes it easy to obtain fully-polynomial silent self-stabilizing solutions that are also asymptotically optimal in rounds. We illustrate the efficiency and versatility of our transformer with several efficient (i.e., fully-polynomial) silent self-stabilizing instances solving major distributed computing problems, namely vertex coloring, Breadth-First Search (BFS) spanning tree construction, $k$-clustering, and leader election.
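
    One of the instances listed above, BFS spanning tree construction, admits a well-known self-stabilizing rule that is easy to sketch. The code below (simulated in synchronous rounds, and not the transformer Trans(AlgI) itself) illustrates it: the root pins its distance to 0, and every other node repeatedly adopts one plus the smallest distance among its neighbors, choosing such a neighbor as its parent; starting from any non-negative distance values on a connected graph, the distances are correct after at most n rounds.

        def bfs_round(neighbors, root, dist, parent):
            new_dist, new_parent = {}, {}
            for v, ns in neighbors.items():
                if v == root:
                    new_dist[v], new_parent[v] = 0, None
                else:
                    best = min(ns, key=lambda u: dist[u])   # closest neighbor in the previous round
                    new_dist[v], new_parent[v] = dist[best] + 1, best
            return new_dist, new_parent

        neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
        dist = {v: 99 for v in neighbors}                   # arbitrary non-negative initial values
        parent = {v: None for v in neighbors}
        for _ in range(len(neighbors)):                     # n rounds suffice on a connected graph
            dist, parent = bfs_round(neighbors, 0, dist, parent)
        print(dist)                                         # {0: 0, 1: 1, 2: 1, 3: 2}
        print(parent)                                       # {0: None, 1: 0, 2: 0, 3: 1}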

    Introduction to local certification

    A distributed graph algorithm is basically an algorithm where every node of a graph can look at its neighborhood at some distance in the graph and choose its output. As distributed environments are subject to faults, an important issue is to be able to check that the output is correct, or more generally that the network is in a proper configuration with respect to some predicate. One would like this checking to be very local, to avoid using too many resources. Unfortunately, most predicates cannot be checked this way, and that is where certification comes into play. Local certification (also known as proof-labeling schemes, locally checkable proofs, or distributed verification) consists in assigning labels to the nodes that certify that the configuration is correct. There are several points of view on this topic: it can be seen as a part of self-stabilizing algorithms, as a labeling problem, or as non-deterministic distributed decision. This paper is an introduction to the domain of local certification, giving an overview of the history, the techniques, and the current research directions.
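
    A standard toy example of local certification is bipartiteness: the certificate assigns one bit per node (its side of the bipartition), and the verifier at a node accepts if and only if its bit differs from the bit of every neighbor. The sketch below (an illustration, written in Python) shows both a correct certificate being accepted and the fact that on a non-bipartite graph no assignment of bits can make all nodes accept.

        def verifier(v, side, neighbors):
            return all(side[v] != side[u] for u in neighbors[v])

        def accepted_everywhere(side, neighbors):
            return all(verifier(v, side, neighbors) for v in neighbors)

        c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}         # a 4-cycle (bipartite)
        print(accepted_everywhere({0: 0, 1: 1, 2: 0, 3: 1}, c4))  # True
        c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}                    # a triangle (not bipartite)
        print(accepted_everywhere({0: 0, 1: 1, 2: 0}, c3))        # False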

    Fast and compact self-stabilizing verification, computation, and fault detection of an MST

    This paper demonstrates the usefulness of distributed local verification of proofs as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs, and utilizes it for improving the time complexity significantly, while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of Minimum Spanning Tree (MST). That is, we present algorithms that are both time and space efficient for both constructing an MST and for verifying it. This involves several parts that may be considered contributions in themselves. First, we generalize the notion of local proofs, trading off the time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof-labeling scheme which is memory optimal (i.e., $O(\log n)$ bits per node), and whose time complexity is $O(\log^2 n)$ in synchronous networks, or $O(\Delta \log^3 n)$ in asynchronous ones, where $\Delta$ is the maximum degree of nodes. This answers an open problem posed by Awerbuch and Varghese (FOCS 1991). We also show that $\Omega(\log n)$ time is necessary, even in synchronous networks. Another property is that if $f$ faults occurred, then, within the required detection time above, they are detected by some node in the $O(f\log n)$ locality of each of the faults. Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof-labeling scheme, and produces an efficient self-stabilizing algorithm. When used for MST, the transformer produces a memory-optimal self-stabilizing algorithm whose time complexity, namely $O(n)$, is significantly better even than that of previous algorithms. (The time complexity of previous MST algorithms that used $\Omega(\log^2 n)$ memory bits per node was $O(n^2)$, and the time for optimal-space algorithms was $O(n|E|)$.) Inherited from our proof-labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction ended, then they are detected by some nodes within $O(\log^2 n)$ time in synchronous networks, or within $O(\Delta \log^3 n)$ time in asynchronous ones, and (2) if $f$ faults occurred, then, within the required detection time above, they are detected within the $O(f\log n)$ locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in the memory.
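
    The detection property can be illustrated on a much simpler certified structure (a plain spanning tree with distance labels standing in for the paper's MST labels): when a label is corrupted, some node adjacent to the corruption rejects, and this local rejection is what a self-stabilizing algorithm uses as its trigger to recompute. A hypothetical sketch, not the paper's scheme:

        def rejecting_nodes(labels, neighbors, root):
            bad = []
            for v in neighbors:
                parent, dist = labels[v]
                if v == root:
                    ok = parent is None and dist == 0
                else:
                    ok = parent in neighbors[v] and dist >= 1 and labels[parent][1] == dist - 1
                if not ok:
                    bad.append(v)
            return bad

        neighbors = {0: [1], 1: [0, 2], 2: [1]}             # a path rooted at node 0
        labels = {0: (None, 0), 1: (0, 1), 2: (1, 2)}       # correct certificate: no rejection
        print(rejecting_nodes(labels, neighbors, 0))        # []
        labels[1] = (0, 5)                                  # corrupt one label
        print(rejecting_nodes(labels, neighbors, 0))        # [1, 2]: detected next to the fault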

    Disconnected components detection and rooted shortest-path tree maintenance in networks

    Many articles deal with the problem of maintaining a rooted shortest-path tree. However, after some edge deletions, some nodes can be disconnected from the connected component $V_r$ of some distinguished node $r$. In this case, an additional objective is to ensure the detection of the disconnection by the nodes that no longer belong to $V_r$. We present a detailed analysis of a silent self-stabilizing algorithm. We prove that it solves this more demanding task in anonymous weighted networks with the following additional strong properties: it runs without any knowledge of the network and under the \emph{unfair} daemon, that is, without any assumption on the asynchronous model. Moreover, it terminates in less than $2n + D$ rounds for a network of $n$ nodes and hop-diameter $D$.
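
    The structure being maintained can be made concrete with a minimal synchronous sketch of the underlying rule (weighted Bellman-Ford with positive weights). This is not the paper's algorithm, which additionally copes with anonymity, the unfair daemon and, crucially, the detection of nodes disconnected from the root, on which the plain rule below would count to infinity; the sketch simply assumes the graph stays connected.

        def sp_round(adj, root, dist):
            return {v: 0 if v == root else min(dist[u] + w for u, w in adj[v]) for v in adj}

        adj = {"r": [("a", 2), ("b", 5)],                   # adj[v]: list of (neighbor, positive weight)
               "a": [("r", 2), ("b", 1)],
               "b": [("r", 5), ("a", 1)]}
        dist = {v: 0 for v in adj}                          # arbitrary (corrupted) initial values
        while True:                                         # iterate synchronously to the fixpoint
            nxt = sp_round(adj, "r", dist)
            if nxt == dist:
                break
            dist = nxt
        print(dist)                                         # {'r': 0, 'a': 2, 'b': 3}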

    Asynchronous neighborhood task synchronization

    Faults are likely to occur in distributed systems. The motivation for designing self-stabilizing systems is to be able to automatically recover from a faulty state. As per Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered to be the most unified approach to designing fault-tolerant systems. Any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases. Solving these applications requires some degree of synchronization of phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinite instances of tasks, where a task consists of a set of steps. There are several requirements for this problem. Simultaneous execution of steps by neighbors is allowed only if the steps are different. Every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in non-faulty environments, it is more challenging to solve in the presence of various types of faults. In this research, we present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any (including unfair) daemon. We also discuss various applications of the NTS problem.
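
    A highly simplified sketch, in the spirit of neighborhood synchronization but not the thesis's algorithm: each process holds an instance counter and may start instance $i+1$ only once every neighbor has reached instance $i$, so the counters of neighboring processes never drift more than one instance apart, whatever schedule the daemon chooses.

        def enabled(v, instance, neighbors):
            return all(instance[u] >= instance[v] for u in neighbors[v])

        def activate(v, instance, neighbors):               # the daemon picks node v
            if enabled(v, instance, neighbors):
                instance[v] += 1                            # v starts the next instance of the task
            return instance

        neighbors = {0: [1], 1: [0, 2], 2: [1]}
        instance = {0: 0, 1: 0, 2: 0}
        for v in [1, 0, 2, 1, 1]:                           # an arbitrary schedule
            instance = activate(v, instance, neighbors)
        print(instance)                                     # {0: 1, 1: 2, 2: 1}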