    Bounds for self-stabilization in unidirectional networks

    A distributed algorithm is self-stabilizing if, after faults and attacks hit the system and place it in an arbitrary global state, the system recovers from this catastrophic situation without external intervention in finite time. Unidirectional networks preclude many common techniques in self-stabilization, such as preserving local predicates, from being used. In this paper, we investigate the intrinsic complexity of achieving self-stabilization in unidirectional networks, focusing on the classical vertex coloring problem. When deterministic solutions are considered, we prove a lower bound of $n$ states per process (where $n$ is the network size) and a recovery time of at least $n(n-1)/2$ actions in total. We present a deterministic algorithm with matching upper bounds that performs in arbitrary graphs. When probabilistic solutions are considered, we observe that at least $\Delta+1$ states per process and a recovery time of $\Omega(n)$ actions in total are required (where $\Delta$ denotes the maximal degree of the underlying simple undirected graph). We present a probabilistically self-stabilizing algorithm that uses $k$ states per process, where $k$ is a parameter of the algorithm. When $k = \Delta+1$, the algorithm recovers in expected $O(\Delta n)$ actions. When $k$ may grow arbitrarily, the algorithm recovers in expected $O(n)$ actions in total. Thus, our algorithm can be made optimal with respect to either space or time complexity.
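
    The flavor of such a probabilistic rule can be conveyed by a toy synchronous simulation (an illustrative sketch, not the paper's algorithm): every process that observes a color clash with one of its in-neighbors re-draws its color uniformly at random among $k$ colors. Because every edge of the underlying graph is an in-edge of one of its endpoints, a clash-free configuration is a proper coloring.

        import random

        def stabilize_coloring(in_neighbors, k, seed=0, max_rounds=100_000):
            # in_neighbors[v]: the processes whose state v can read (unidirectional links).
            # The initial configuration is arbitrary, as self-stabilization requires.
            rng = random.Random(seed)
            color = {v: rng.randrange(k) for v in in_neighbors}
            for rnd in range(max_rounds):
                # A process is enabled if it shares its color with some in-neighbor.
                enabled = [v for v in in_neighbors
                           if any(color[u] == color[v] for u in in_neighbors[v])]
                if not enabled:
                    return color, rnd          # legitimate (clash-free) configuration
                for v in enabled:              # enabled processes re-draw at random
                    color[v] = rng.randrange(k)
            raise RuntimeError("did not stabilize within max_rounds")

        # Directed triangle; the underlying undirected graph has Delta = 2, so k = Delta + 1 = 3.
        print(stabilize_coloring({0: [2], 1: [0], 2: [1]}, k=3))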

    Stabilizing data-link over non-FIFO channels with optimal fault-resilience

    Self-stabilizing systems have the ability to converge to a correct behavior when started in any configuration. Most of the work done so far in the self-stabilization area has assumed communication either via shared memory or via FIFO channels. This paper is the first to lay the foundations for the design of self-stabilizing message-passing algorithms over unreliable non-FIFO channels. We propose a fault-resilience-optimal stabilizing data-link layer that emulates a reliable FIFO communication channel over unreliable, capacity-bounded, non-FIFO channels.
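
    To make the emulated service concrete, here is a toy stop-and-wait sketch that delivers messages in order and without duplicates on top of an unreliable, reordering, capacity-bounded channel. It relies on unbounded sequence numbers and is not self-stabilizing, so it only illustrates what "emulating a reliable FIFO channel" means, not the paper's fault-resilience-optimal construction.

        import random

        def fifo_over_lossy_nonfifo(messages, capacity=3, loss=0.3, seed=1, max_steps=100_000):
            rng = random.Random(seed)
            fwd, bwd = [], []                    # unordered, lossy, capacity-bounded channels
            def push(chan, pkt):                 # a send may be lost or rejected when full
                if len(chan) < capacity and rng.random() > loss:
                    chan.append(pkt)
            def pop(chan):                       # non-FIFO: packets leave in arbitrary order
                return chan.pop(rng.randrange(len(chan))) if chan else None
            send_seq, expect, delivered = 0, 0, []
            for _ in range(max_steps):
                if send_seq == len(messages):
                    return delivered
                push(fwd, (send_seq, messages[send_seq]))   # keep retransmitting the current message
                pkt = pop(fwd)
                if pkt is not None:
                    seq, data = pkt
                    if seq == expect:                       # deliver in order, exactly once
                        delivered.append(data)
                        expect += 1
                    push(bwd, seq)                          # acknowledge whatever was received
                ack = pop(bwd)
                if ack == send_seq:                         # advance once the current message is acknowledged
                    send_seq += 1
            raise RuntimeError("gave up before delivering everything")

        print(fifo_over_lossy_nonfifo(["a", "b", "c", "d"]))   # -> ['a', 'b', 'c', 'd']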

    Silent MST approximation for tiny memory

    In network distributed computing, the minimum spanning tree (MST) is one of the key problems, and silent self-stabilization one of the most demanding fault-tolerance properties. For this problem in the state model, a polynomial-time algorithm with $O(\log^2 n)$ memory is known, and this is memory-optimal for weights in the classic $[1,\mathrm{poly}(n)]$ range (where $n$ is the size of the network). In this paper, we go below this $O(\log^2 n)$ memory, using approximation and parametrized complexity. More specifically, our contributions are twofold. We introduce a second parameter $s$, which is the space needed to encode a weight, and we design a silent polynomial-time self-stabilizing algorithm with space $O(\log n \cdot s)$. In turn, this allows us to obtain an approximation algorithm for the problem, with a trade-off between the approximation ratio of the solution and the space used. For polynomial weights, this trade-off goes smoothly from memory $O(\log n)$ for an $n$-approximation, to memory $O(\log^2 n)$ for exact solutions, with, for example, memory $O(\log n \log\log n)$ for a 2-approximation.
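
    One standard way to realize such a trade-off (this sketch illustrates the principle, not necessarily the paper's exact encoding) is to round every weight up to the nearest power of a base $b$: only the exponent then has to be stored, shrinking $s$ from $O(\log n)$ to $O(\log\log n)$ bits for $b = 2$, while any spanning tree that is minimum for the rounded weights is a $b$-approximation for the original ones.

        import random

        def mst_weight(n, edges):
            # Kruskal with path-compressing union-find; edges = [(weight, u, v), ...]
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            total = 0
            for w, u, v in sorted(edges):
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    total += w
            return total

        def round_up(w, b):
            # Smallest power of b that is >= w; only its exponent needs to be encoded.
            p = 1
            while p < w:
                p *= b
            return p

        rng = random.Random(0)
        n = 50
        edges = [(rng.randint(1, n ** 3), u, v) for u in range(n) for v in range(u + 1, n)]
        exact = mst_weight(n, edges)
        for b in (2, 4, n):        # coarser rounding: fewer exponent values, worse ratio
            approx = mst_weight(n, [(round_up(w, b), u, v) for w, u, v in edges])
            print(b, round(approx / exact, 3))   # the ratio never exceeds b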

    Communication Efficiency in Self-stabilizing Silent Protocols

    Self-stabilization is a general paradigm to provide forward recovery capabilities to distributed systems and networks. Intuitively, a protocol is self-stabilizing if it is able to recover without external intervention from any catastrophic transient failure. In this paper, our focus is to lower the communication complexity of self-stabilizing protocols \emph{below} the need of checking every neighbor forever. In more detail, the contribution of the paper is threefold: (i) we provide new complexity measures for the communication efficiency of self-stabilizing protocols, especially in the stabilized phase or when there are no faults; (ii) on the negative side, we show that for non-trivial problems such as coloring, maximal matching, and maximal independent set, it is impossible to get (deterministic or probabilistic) self-stabilizing solutions where every participant communicates with less than every neighbor in the stabilized phase; and (iii) on the positive side, we present protocols for coloring, maximal matching, and maximal independent set such that a fraction of the participants communicates with exactly one neighbor in the stabilized phase.
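
    For contrast, the textbook self-stabilizing maximal-independent-set rule sketched below (an illustration, not one of the paper's communication-efficient protocols) requires every node to keep inspecting all of its neighbors at every step, which is precisely the communication cost the paper seeks to reduce in the stabilized phase.

        def stabilize_mis(adj, state, max_moves=100_000):
            # adj[v]: neighbors of v; node ids double as tie-breakers.
            # state[v] in {"IN", "OUT"} may start arbitrary (transient faults).
            for moves in range(max_moves):
                enabled = []
                for v in adj:                       # every node reads every neighbor, forever
                    if state[v] == "OUT" and all(state[u] == "OUT" for u in adj[v]):
                        enabled.append((v, "IN"))   # join: no neighbor is in the set
                    elif state[v] == "IN" and any(state[u] == "IN" and u > v for u in adj[v]):
                        enabled.append((v, "OUT"))  # leave: a higher-id neighbor is in the set
                if not enabled:
                    return state, moves             # fixed point = maximal independent set
                v, s = enabled[0]                   # central daemon: one move at a time
                state[v] = s
            raise RuntimeError("did not stabilize")

        # 4-cycle 0-1-2-3-0, started in an illegitimate configuration
        print(stabilize_mis({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]},
                            {0: "IN", 1: "IN", 2: "OUT", 3: "IN"}))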

    On the computational power of self-stabilizing systems

    The computational power of self-stabilizing distributed systems is examined. Assuming the availability of any number of processors, each with (small) constant-size memory, we show that any computable problem can be realized in a self-stabilizing fashion. The result is derived by presenting a distributed system that tolerates transient faults and simulates the execution of a Turing machine. The total amount of memory required by the distributed system is equal to the memory used by the Turing machine (up to a constant factor).
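
    Stripped of the fault-tolerance machinery, the underlying idea is easy to picture: spread the tape over a chain of processes, each holding one symbol plus O(1) control bits, and let the finite control state travel along the chain together with the head, so the total memory is the tape length times a constant. The toy (non-self-stabilizing) sketch below simulates a binary-increment Turing machine this way; it illustrates the memory accounting, not the paper's construction.

        # Binary increment: in state "carry", flip 1 -> 0 and move left; on reading 0
        # or blank, write 1 and halt.
        DELTA = {("carry", "1"): ("0", "L", "carry"),
                 ("carry", "0"): ("1", "H", "halt"),
                 ("carry", "_"): ("1", "H", "halt")}

        def run_chain(bits):
            # Each "cell" process stores one tape symbol plus O(1) control information:
            # whether the head is currently here and, if so, the machine's control state.
            cells = [{"sym": "_", "head": None}] + [{"sym": b, "head": None} for b in bits]
            pos = len(cells) - 1                   # head starts on the least significant bit
            cells[pos]["head"] = "carry"
            while True:
                state = cells[pos]["head"]
                sym, move, new_state = DELTA[(state, cells[pos]["sym"])]
                cells[pos]["sym"] = sym            # the active cell rewrites its own symbol
                cells[pos]["head"] = None
                if new_state == "halt":
                    return "".join(c["sym"] for c in cells).lstrip("_")
                pos += {"L": -1, "R": +1}[move]    # hand the control state to the neighbor
                cells[pos]["head"] = new_state

        print(run_chain("1011"))    # 1011 (11) + 1 -> "1100" (12)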

    Compact Deterministic Self-Stabilizing Leader Election: The Exponential Advantage of Being Talkative

    This paper focuses on compact deterministic self-stabilizing solutions for the leader election problem. When the protocol is required to be \emph{silent} (i.e., when the communication content remains fixed from some point in time during any execution), there exists a lower bound of $\Omega(\log n)$ bits of memory per node participating in the leader election (where $n$ denotes the number of nodes in the system). This lower bound holds even in rings. We present a new deterministic (non-silent) self-stabilizing protocol for $n$-node rings that uses only $O(\log\log n)$ memory bits per node, and stabilizes in $O(n\log^2 n)$ rounds. Our protocol has several attractive features that make it suitable for practical purposes. First, the communication model fits with the model used by existing compilers for real networks. Second, the size of the ring (or any upper bound on this size) need not be known by any node. Third, the node identifiers can be of various sizes. Finally, no synchrony assumption, besides a weakly fair scheduler, is made. Therefore, our result shows that, perhaps surprisingly, trading silence for an exponential improvement in memory space does not come at a high cost in terms of stabilization time or minimal assumptions.

    Memory requirements for silent stabilization

    A self-stabilizing algorithm is silent if it converges to a global state after which the values stored in the communication registers are fixed. The silence property of self-stabilizing algorithms is a desirable property in terms of simplicity and communication overhead. In this work we show that no constant-memory silent self-stabilizing algorithms exist for identification of the centers of a graph, leader election, and spanning tree construction. Lower bounds of $\Omega(\log n)$ bits per communication register are obtained for each of the above tasks. The existence of a silent legitimate global state that uses less than $\log n$ bits per register is assumed. This legitimate global state is used to construct a silent global state that is illegitimate.