
    Self-Stabilizing Token Distribution with Constant-Space for Trees

    Self-stabilizing and silent distributed algorithms for token distribution in rooted tree networks are given. Initially, each process of the network holds at most $l$ tokens. Our goal is to distribute the tokens over the whole network so that every process holds exactly $k$ tokens. In the initial configuration, the total number of tokens in the network may not equal $nk$, where $n$ is the number of processes in the network. The root process is given the ability to create a new token or remove a token from the network. We aim to minimize the convergence time, the number of token moves, and the space complexity. A self-stabilizing token distribution algorithm is given that converges within $O(nl)$ asynchronous rounds and needs $\Theta(nh\epsilon)$ redundant (or unnecessary) token moves, where $\epsilon = \min(k, l-k)$ and $h$ is the height of the tree network. Two novel ideas to reduce the number of redundant token moves are presented. One reduces the number of redundant token moves to $O(nh)$ without any additional cost, while the other reduces it to $O(n)$ but increases the convergence time to $O(nhl)$. All the algorithms given use constant memory at each process and each link register.
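
    As a rough, centralized illustration of the problem setting (not the paper's self-stabilizing algorithm), the Python sketch below balances tokens over a rooted tree in a single post-order pass and lets the root create or destroy tokens to absorb the global imbalance; the Node class, the field names, and the move counting are assumptions made for this sketch only.

        class Node:
            def __init__(self, tokens, children=None):
                self.tokens = tokens              # tokens held initially (at most l)
                self.children = children or []

        def balance(node, k):
            """Post-order pass: returns (surplus handed to the parent, link moves used).
            Every node strictly below the caller ends up holding exactly k tokens."""
            moves = 0
            for child in node.children:
                child_surplus, child_moves = balance(child, k)
                # |surplus| tokens cross the link between child and node, in one
                # direction or the other; this centralized sketch ignores the
                # scheduling constraints a distributed algorithm must respect.
                moves += child_moves + abs(child_surplus)
                node.tokens += child_surplus
            surplus = node.tokens - k
            node.tokens = k
            return surplus, moves

        def distribute(root, k):
            surplus, moves = balance(root, k)
            # Only the root may create or destroy tokens, since the initial
            # total need not equal n * k.
            created_or_destroyed = abs(surplus)
            return moves, created_or_destroyed

        # Example: a 3-node tree with all 5 tokens at one leaf and k = 2.
        tree = Node(0, [Node(5), Node(0)])
        print(distribute(tree, k=2))   # -> (5, 1): 5 link moves, root creates 1 token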

    Silent MST approximation for tiny memory

    In network distributed computing, minimum spanning tree (MST) is one of the key problems, and silent self-stabilization one of the most demanding fault-tolerance properties. For this problem and this model, a polynomial-time algorithm with $O(\log^2 n)$ memory is known for the state model. This is memory optimal for weights in the classic $[1, \mathrm{poly}(n)]$ range (where $n$ is the size of the network). In this paper, we go below this $O(\log^2 n)$ memory, using approximation and parametrized complexity. More specifically, our contributions are two-fold. We introduce a second parameter $s$, which is the space needed to encode a weight, and we design a silent polynomial-time self-stabilizing algorithm with space $O(\log n \cdot s)$. In turn, this allows us to get an approximation algorithm for the problem, with a trade-off between the approximation ratio of the solution and the space used. For polynomial weights, this trade-off goes smoothly from memory $O(\log n)$ for an $n$-approximation, to memory $O(\log^2 n)$ for exact solutions, with for example memory $O(\log n \log\log n)$ for a 2-approximation.
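
    The abstract does not spell out how the approximation/space trade-off is realized; one natural mechanism, stated here purely as an assumption, is to encode each weight only up to a multiplicative factor $1+\gamma$, so that the per-weight space $s$ shrinks while an MST computed on the rounded weights is a $(1+\gamma)$-approximate MST under the original ones. The sketch below computes only the rounded codes and the bits they need, matching the figures quoted above ($\gamma = 1$ gives $s = O(\log\log n)$, $\gamma = n-1$ gives $s = O(1)$).

        import math

        def rounded_code(weight, gamma):
            """Encode a weight only up to a factor (1 + gamma): store
            c = floor(log_{1+gamma} weight) instead of the weight itself."""
            return int(math.floor(math.log(weight, 1.0 + gamma)))

        def bits_per_weight(max_weight, gamma):
            """Space s needed for one rounded code when weights lie in [1, max_weight]."""
            classes = rounded_code(max_weight, gamma) + 1
            return max(1, math.ceil(math.log2(classes)))

        # Weights in [1, poly(n)], e.g. max_weight = n**3 with n = 2**20.
        n = 2 ** 20
        # gamma = 1 (2-approximation): s is about log log n bits.
        print(bits_per_weight(n ** 3, gamma=1.0))             # 6
        # gamma = n - 1 (n-approximation): only a constant number of classes remain.
        print(bits_per_weight(n ** 3, gamma=float(n - 1)))    # 2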

    Communication Efficient Self-Stabilizing Leader Election

    This paper presents a randomized self-stabilizing algorithm that elects a leader r in a general n-node undirected graph and constructs a spanning tree T rooted at r. The algorithm works under the synchronous message-passing network model, assuming that the nodes know a linear upper bound on n and that each edge has a unique ID known to both its endpoints (or, alternatively, assuming the $KT_1$ model). The highlight of this algorithm is its superior communication efficiency: it is guaranteed to send a total of $\tilde{O}(n)$ messages, each of constant size, until stabilization, while stabilizing in $\tilde{O}(n)$ rounds, in expectation and with high probability. After stabilization, the algorithm sends at most one constant-size message per round while communicating only over the $n-1$ edges of T. In all these aspects, the communication overhead of the new algorithm is far smaller than that of the existing (mostly deterministic) self-stabilizing leader election algorithms. The algorithm is relatively simple and relies mostly on known modules that are common in the fault-free leader election literature; these modules are enhanced in various subtle ways in order to assemble them into a communication-efficient self-stabilizing algorithm.
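
    For contrast with the bounds above, the sketch below simulates the most basic fault-free module the abstract alludes to: every node draws a random ID and the maximum is flooded for enough rounds to cover the diameter. It is neither self-stabilizing nor communication-efficient (it sends a message over every edge in every round), and all names in it are illustrative assumptions.

        import random

        def elect_leader(adjacency, id_bits=64):
            """Synchronous flooding of the maximum random ID.  `adjacency` maps
            each node to its neighbor list.  Running for len(adjacency) rounds
            (always at least the diameter) lets every node learn the global max."""
            known = {v: (random.getrandbits(id_bits), v) for v in adjacency}
            for _ in range(len(adjacency)):
                snapshot = dict(known)                 # messages sent this round
                for v, neighbors in adjacency.items():
                    for u in neighbors:
                        known[v] = max(known[v], snapshot[u])
            # Every node now agrees on the pair with the largest random ID.
            return max(known.values())[1]

        # Example: a path on four nodes; any of them may end up as the leader.
        graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(elect_leader(graph))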

    Fast and compact self-stabilizing verification, computation, and fault detection of an MST

    This paper demonstrates the usefulness of distributed local verification of proofs as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs, and utilizes it for improving the time complexity significantly, while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of Minimum Spanning Tree (MST). That is, we present algorithms that are both time and space efficient for both constructing an MST and for verifying it. This involves several parts that may be considered contributions in themselves. First, we generalize the notion of local proofs, trading off the time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof labeling scheme which is memory optimal (i.e., $O(\log n)$ bits per node), and whose time complexity is $O(\log^2 n)$ in synchronous networks, or $O(\Delta \log^3 n)$ in asynchronous ones, where $\Delta$ is the maximum degree of nodes. This answers an open problem posed by Awerbuch and Varghese (FOCS 1991). We also show that $\Omega(\log n)$ time is necessary, even in synchronous networks. Another property is that if $f$ faults occurred, then, within the required detection time above, they are detected by some node in the $O(f \log n)$ locality of each of the faults. Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof labeling scheme, and produces an efficient self-stabilizing algorithm. When used for MST, the transformer produces a memory-optimal self-stabilizing algorithm whose time complexity, namely $O(n)$, is significantly better even than that of previous algorithms. (The time complexity of previous MST algorithms that used $\Omega(\log^2 n)$ memory bits per node was $O(n^2)$, and the time for optimal-space algorithms was $O(n|E|)$.) Inherited from our proof labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction ended, then they are detected by some nodes within $O(\log^2 n)$ time in synchronous networks, or within $O(\Delta \log^3 n)$ time in asynchronous ones, and (2) if $f$ faults occurred, then, within the required detection time above, they are detected within the $O(f \log n)$ locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in the memory.
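
    The paper's proof labeling scheme for an MST is considerably more involved; as a minimal illustration of distributed local verification in general (and not of the paper's construction), the sketch below uses the textbook distance-label scheme: each node checks only itself and its parent, yet the checks all pass exactly when the parent pointers form a spanning tree rooted at the claimed root with correct depths.

        def verify_spanning_tree(nodes, parent, dist, root):
            """Local check per node: the root claims distance 0 and no parent;
            every other node claims a distance exactly one more than its parent's.
            All checks pass iff `parent` encodes a tree spanning `nodes`, rooted
            at `root`, with `dist` recording each node's depth."""
            ok = True
            for v in nodes:
                if v == root:
                    ok = ok and (parent[v] is None and dist[v] == 0)
                else:
                    ok = ok and (parent[v] in nodes and dist[v] == dist[parent[v]] + 1)
            return ok

        # A correct labeling of a small tree rooted at 0 ...
        nodes = {0, 1, 2, 3}
        parent = {0: None, 1: 0, 2: 0, 3: 2}
        dist = {0: 0, 1: 1, 2: 1, 3: 2}
        print(verify_spanning_tree(nodes, parent, dist, 0))   # True

        # ... and a corrupted one in which 2 and 3 point at each other (a cycle).
        parent[2], dist[2] = 3, 3
        print(verify_spanning_tree(nodes, parent, dist, 0))   # False: node 3's check fails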