
    Communication Efficiency in Self-stabilizing Silent Protocols

    Self-stabilization is a general paradigm to provide forward recovery capabilities to distributed systems and networks. Intuitively, a protocol is self-stabilizing if it is able to recover without external intervention from any catastrophic transient failure. In this paper, our focus is to lower the communication complexity of self-stabilizing protocols \emph{below} the need of checking every neighbor forever. In more detail, the contribution of the paper is threefold: (i) we provide new complexity measures for communication efficiency of self-stabilizing protocols, especially in the stabilized phase or when there are no faults; (ii) on the negative side, we show that for non-trivial problems such as coloring, maximal matching, and maximal independent set, it is impossible to obtain (deterministic or probabilistic) self-stabilizing solutions where every participant communicates with less than every neighbor in the stabilized phase; and (iii) on the positive side, we present protocols for coloring, maximal matching, and maximal independent set such that a fraction of the participants communicates with exactly one neighbor in the stabilized phase.
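    To make the notion of post-stabilization communication concrete, here is a minimal sketch (not taken from the paper) of a classic self-stabilizing (Delta+1)-coloring rule in the locally shared memory model, simulated under a central scheduler; the graph, initial coloring, and function names are illustrative only. Note that both checking and moving read all of a node's neighbors, which is exactly the per-node cost in the stabilized phase that the paper aims to lower.

    import random

    def enabled(v, color, adj):
        """A node is enabled iff it shares its color with some neighbor."""
        return any(color[v] == color[u] for u in adj[v])   # reads every neighbor

    def move(v, color, adj):
        """Recolor v with the smallest color unused in its neighborhood."""
        used = {color[u] for u in adj[v]}                  # reads every neighbor again
        color[v] = min(c for c in range(len(adj[v]) + 1) if c not in used)

    def stabilize(adj, color):
        """Central daemon: repeatedly activate an arbitrary enabled node."""
        while True:
            candidates = [v for v in adj if enabled(v, color, adj)]
            if not candidates:
                return color                               # legitimate configuration reached
            move(random.choice(candidates), color, adj)

    # Example: a 4-cycle starting from an arbitrary (corrupted) coloring.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(stabilize(adj, {0: 0, 1: 0, 2: 0, 3: 0}))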

    Compact Deterministic Self-Stabilizing Leader Election: The Exponential Advantage of Being Talkative

    This paper focuses on compact deterministic self-stabilizing solutions for the leader election problem. When the protocol is required to be \emph{silent} (i.e., when the communication content remains fixed from some point in time during any execution), there exists a lower bound of \Omega(\log n) bits of memory per node participating in the leader election (where n denotes the number of nodes in the system). This lower bound holds even in rings. We present a new deterministic (non-silent) self-stabilizing protocol for n-node rings that uses only O(\log\log n) memory bits per node and stabilizes in O(n\log^2 n) rounds. Our protocol has several attractive features that make it suitable for practical purposes. First, the communication model fits the model used by existing compilers for real networks. Second, the size of the ring (or any upper bound on this size) need not be known by any node. Third, the node identifiers can be of various sizes. Finally, no synchrony assumption, besides a weakly fair scheduler, is made. Therefore, our result shows that, perhaps surprisingly, trading silence for an exponential improvement in terms of memory space does not come at a high cost regarding stabilization time or minimal assumptions.
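    For contrast with the non-silent protocol above, the following minimal sketch (not the paper's protocol) shows the standard silent pattern for leader election on a ring: each node keeps a candidate leader identifier and a hop distance, and repeatedly adopts the smaller proposal of its predecessor. Once stabilized, the registers never change again, but each register stores an identifier and a distance, i.e. Theta(log n) bits, which is where the \Omega(\log n) lower bound for silent protocols bites and what the O(\log\log n)-bit talkative protocol avoids. The sketch covers only fault-free executions; a genuinely self-stabilizing version additionally needs machinery to purge spurious identifiers left by a transient fault.

    def silent_leader_election(ids, rounds=None):
        """Synchronous min-id propagation on a unidirectional ring (fault-free sketch)."""
        n = len(ids)
        # Register of node i: (candidate leader id, hop distance to that candidate).
        reg = [(ids[i], 0) for i in range(n)]
        for _ in range(rounds if rounds is not None else n):
            new_reg = []
            for i in range(n):
                cand, dist = reg[(i - 1) % n]              # read the predecessor's register
                # Adopt the predecessor's candidate if its identifier is smaller.
                new_reg.append((cand, dist + 1) if cand < reg[i][0] else reg[i])
            reg = new_reg
        return reg

    # Example: every node ends up with candidate 3 and its hop distance to that node.
    print(silent_leader_election([7, 3, 9, 5]))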

    Memory lower bounds for deterministic self-stabilization

    In the context of self-stabilization, a \emph{silent} algorithm guarantees that the register of every node does not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent algorithm must use a memory of \Omega(\log n) bits per register in n-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of \Omega(\log^2 n) bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim at measuring how much gain in terms of memory can be expected by using arbitrary self-stabilizing algorithms, not necessarily silent. To our knowledge, the only known lower bound on the memory requirement for general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing a tight lower bound of \Theta(\log\Delta + \log\log n) bits per register for self-stabilizing algorithms solving (\Delta+1)-coloring or constructing a spanning tree in networks of maximum degree \Delta. The lower bound of \Omega(\log\log n) bits per register also holds for leader election.
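    For reference, the memory bounds discussed in this abstract, stated in bits per register for n-node networks of maximum degree \Delta:

    \[
    \begin{array}{lll}
    \text{silent: centers, leader election, spanning tree} & \Omega(\log n) & \text{[Dolev et al., Acta Inf. '99]}\\
    \text{silent: MST} & \Omega(\log^2 n) & \text{[Korman et al., Dist. Comp. '07]}\\
    \text{general: } (\Delta+1)\text{-coloring, spanning tree} & \Theta(\log\Delta + \log\log n) & \text{[this paper]}\\
    \text{general: leader election} & \Omega(\log\log n) & \text{[this paper]}
    \end{array}
    \]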

    Brief Announcement: Memory Lower Bounds for Self-Stabilization

    In the context of self-stabilization, a silent algorithm guarantees that the communication registers (a.k.a. registers) of every node do not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent deterministic algorithm must use a memory of Omega(log n) bits per register in n-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of Omega(log^2 n) bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim at measuring how much gain in terms of memory can be expected by using arbitrary deterministic self-stabilizing algorithms, not necessarily silent. To our knowledge, the only known lower bound on the memory requirement for deterministic general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing the lower bound Omega(log log n) bits per register for deterministic self-stabilizing algorithms solving (Delta+1)-coloring, leader election, or constructing a spanning tree in networks of maximum degree Delta.

    Polynomial-Time Space-Optimal Silent Self-Stabilizing Minimum-Degree Spanning Tree Construction

    Motivated by applications to sensor networks, as well as to many other areas, this paper studies the construction of minimum-degree spanning trees. We consider the classical node-register state model, with a weakly fair scheduler, and we present a space-optimal \emph{silent} self-stabilizing construction of minimum-degree spanning trees in this model. Computing a spanning tree with minimum degree is NP-hard. Therefore, we actually focus on constructing a spanning tree whose degree is within one of the optimal. Our algorithm uses registers on O(\log n) bits, converges in a polynomial number of rounds, and performs polynomial-time computation at each node. Specifically, the algorithm constructs and stabilizes on a special class of spanning trees, with degree at most OPT+1. Indeed, we prove that, unless NP = coNP, there are no proof-labeling schemes involving polynomial-time computation at each node for the whole family of spanning trees with degree at most OPT+1. To our knowledge, this is the first example of the design of a compact silent self-stabilizing algorithm constructing, and stabilizing on, a subset of optimal solutions to a natural problem for which there are no time-efficient proof-labeling schemes. On our way to designing our algorithm, we establish a set of independent results that may be of interest in their own right. In particular, we describe a new space-optimal silent self-stabilizing spanning tree construction, stabilizing on \emph{any} spanning tree, in O(n) rounds, and using just \emph{one} additional bit compared to the size of the labels used to certify trees. We also design a silent loop-free self-stabilizing algorithm for transforming a tree into another tree. Last but not least, we provide a silent self-stabilizing algorithm for computing and certifying the labels of an NCA-labeling scheme.
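    As background on the certification machinery mentioned in this abstract, here is a minimal sketch (not taken from the paper) of the classic proof-labeling scheme for a rooted spanning tree: each node is assigned a label (claimed root identifier, parent, distance to the root) and checks it only against its neighbors' labels. If every local check passes, the parent pointers necessarily form a spanning tree rooted at the claimed root, and O(\log n)-bit labels suffice. Node identifiers, the example graph, and function names are illustrative.

    def check_node(v, ids, label, adj):
        """Local verification at node v; reads only v's and its neighbors' labels."""
        root_id, parent, dist = label[v]
        # All neighbors must claim the same root.
        if any(label[u][0] != root_id for u in adj[v]):
            return False
        if dist == 0:
            # Only the node whose identifier equals the claimed root may be at distance 0.
            return ids[v] == root_id and parent == v
        # Otherwise the parent is a neighbor that is exactly one step closer to the root.
        return parent in adj[v] and label[parent][2] == dist - 1

    def certified_spanning_tree(ids, label, adj):
        """Accept iff every node's local check passes."""
        return all(check_node(v, ids, label, adj) for v in adj)

    # Example: the path 0 - 1 - 2, rooted at node 0 (label = (root id, parent, distance)).
    adj   = {0: [1], 1: [0, 2], 2: [1]}
    ids   = {0: 17, 1: 4, 2: 9}
    label = {0: (17, 0, 0), 1: (17, 0, 1), 2: (17, 1, 2)}
    print(certified_spanning_tree(ids, label, adj))        # True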

    Communication Efficient Self-Stabilizing Leader Election

    This paper presents a randomized self-stabilizing algorithm that elects a leader r in a general n-node undirected graph and constructs a spanning tree T rooted at r. The algorithm works under the synchronous message passing network model, assuming that the nodes know a linear upper bound on n and that each edge has a unique ID known to both its endpoints (or, alternatively, assuming the KT_1 model). The highlight of this algorithm is its superior communication efficiency: it is guaranteed to send a total of Õ(n) messages, each of constant size, until stabilization, while stabilizing in Õ(n) rounds, in expectation and with high probability. After stabilization, the algorithm sends at most one constant-size message per round while communicating only over the (n-1) edges of T. In all these aspects, the communication overhead of the new algorithm is far smaller than that of the existing (mostly deterministic) self-stabilizing leader election algorithms. The algorithm is relatively simple and relies mostly on known modules that are common in the fault-free leader election literature; these modules are enhanced in various subtle ways in order to assemble them into a communication-efficient self-stabilizing algorithm.
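    The post-stabilization guarantee of at most one constant-size message per round over the (n-1) edges of T is reminiscent of a single token walking the tree. The sketch below illustrates that general idea only, and is not necessarily the paper's mechanism: it precomputes an Euler tour of a given spanning tree T and moves one token across one tree edge per round, so each round costs exactly one constant-size message over a tree edge. The tree, root, and function names are illustrative.

    def euler_tour(tree, root):
        """Return the sequence of directed tree edges visited by a DFS Euler tour."""
        tour, stack, seen = [], [(root, iter(tree[root]))], {root}
        while stack:
            v, children = stack[-1]
            child = next((u for u in children if u not in seen), None)
            if child is None:
                stack.pop()
                if stack:
                    tour.append((v, stack[-1][0]))         # token goes back up to the parent
            else:
                seen.add(child)
                tour.append((v, child))                    # token goes down to the child
                stack.append((child, iter(tree[child])))
        return tour

    # Example: a spanning tree T rooted at node 0 (adjacency lists of T only).
    T = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    for rnd, edge in enumerate(euler_tour(T, 0)):
        print(f"round {rnd}: token crosses tree edge {edge}")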
