8 research outputs found

    Brief Announcement: Memory Lower Bounds for Self-Stabilization

    In the context of self-stabilization, a silent algorithm guarantees that the communication registers of every node do not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent deterministic algorithm must use a memory of $\Omega(\log n)$ bits per register in $n$-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of $\Omega(\log^2 n)$ bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim to measure how much can be gained in terms of memory by using arbitrary deterministic self-stabilizing algorithms, not necessarily silent. To our knowledge, the only known lower bound on the memory requirement of general deterministic algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing the lower bound $\Omega(\log\log n)$ bits per register for deterministic self-stabilizing algorithms solving $(\Delta+1)$-coloring, leader election, or constructing a spanning tree in networks of maximum degree $\Delta$.
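
    As a hedged illustration of the register model referred to above (the names and the rule below are placeholders, not taken from the paper): a self-stabilizing node repeatedly reads its neighbors' registers and applies a guarded correction rule, and the algorithm is silent precisely when, once a legitimate configuration is reached, no guard is ever enabled again, so the registers stay fixed. A minimal Python sketch:

        def step(node, registers, neighbors, guard, action):
            """One atomic step of `node` in the register model (illustrative only).
            registers : dict node id -> register contents
            neighbors : dict node id -> list of neighbor ids
            guard, action : the algorithm's rule (hypothetical placeholders)."""
            view = [registers[v] for v in neighbors[node]]   # read neighbors' registers
            if guard(registers[node], view):                 # rule enabled?
                registers[node] = action(registers[node], view)
                return True                                  # the node moved
            return False

        def no_enabled_guard(nodes, registers, neighbors, guard):
            """Silence: no node will ever change its register again, because no
            guard is enabled in the current configuration."""
            return all(not guard(registers[u], [registers[v] for v in neighbors[u]])
                       for u in nodes)

    The lower bounds quoted above concern the number of bits such a per-node register must hold.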

    Memory lower bounds for deterministic self-stabilization

    In the context of self-stabilization, a \emph{silent} algorithm guarantees that the register of every node does not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent algorithm must use a memory of $\Omega(\log n)$ bits per register in $n$-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of $\Omega(\log^2 n)$ bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim to measure how much can be gained in terms of memory by using arbitrary self-stabilizing algorithms, not necessarily silent. To our knowledge, the only known lower bound on the memory requirement of general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing a tight lower bound of $\Theta(\log \Delta + \log\log n)$ bits per register for self-stabilizing algorithms solving $(\Delta+1)$-coloring or constructing a spanning tree in networks of maximum degree $\Delta$. The lower bound $\Omega(\log\log n)$ bits per register also holds for leader election.
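
    The proof-labeling-scheme machinery invoked above admits a small standard example (a textbook construction, not the MST scheme of Korman et al.): to certify that parent pointers form a spanning tree rooted at some node $r$, each node is labeled with the identity of $r$ and its distance to $r$, and each node checks its label against its neighbors only, which is exactly the silent verification mode discussed above. A hedged Python sketch of the local test, with illustrative names:

        def verify_spanning_tree(node_id, label, parent, neighbor_labels):
            """Local verification run at one node (illustrative names).
            label           : (claimed_root_id, claimed_distance) of this node
            parent          : id of the claimed parent, or None at the claimed root
            neighbor_labels : dict neighbor id -> (claimed_root_id, claimed_distance)"""
            root_id, dist = label
            # 1. All neighbors must agree on the identity of the root.
            if any(r != root_id for (r, _) in neighbor_labels.values()):
                return False
            # 2. The root has no parent and is at distance 0 from itself.
            if parent is None:
                return node_id == root_id and dist == 0
            # 3. Every other node is one hop further from the root than its parent.
            return parent in neighbor_labels and neighbor_labels[parent][1] == dist - 1

    If every node of a connected network accepts, the parent pointers indeed form a spanning tree rooted at the claimed root; the labels already take $\Theta(\log n)$ bits, which is the price of silence discussed above.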

    Compact Self-Stabilizing Leader Election for General Networks


    Brief announcement: Compact Self-Stabilizing Leader Election for General Networks

    We present a self-stabilizing leader election algorithm for general networks, with space complexity $O(\max\{\log \Delta, \log\log n\})$ bits per node in $n$-node networks with maximum degree $\Delta$. This space complexity is sub-logarithmic in $n$ as long as $\Delta = n^{o(1)}$. The best space complexity known so far for general networks was $O(\log n)$ bits per node, and algorithms with sub-logarithmic space complexity were known for the ring only. To our knowledge, our algorithm is the first self-stabilizing leader election algorithm to break the $\Omega(\log n)$ bound for silent algorithms in general networks. Breaking this bound is achieved by designing a (non-silent) self-stabilizing algorithm that relies on sophisticated tools, such as solving the distance-2 coloring problem in a silent self-stabilizing manner with space complexity $O(\max\{\log \Delta, \log\log n\})$ bits per node. Solving this coloring problem allows us to implement a sub-logarithmic encoding of spanning trees: storing the IDs of the neighbors requires $\Omega(\log n)$ bits per node, while we encode spanning trees using $O(\max\{\log \Delta, \log\log n\})$ bits per node. Moreover, we show how to construct such compactly encoded spanning trees without relying on variables encoding distances or numbers of nodes, as these two types of variables would also require $\Omega(\log n)$ bits per node.
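
    The compact tree encoding mentioned above can be illustrated with a hedged sketch (the code below is illustrative, not the paper's algorithm): in a distance-2 coloring, any two nodes at distance at most 2 receive distinct colors, so all neighbors of a node carry pairwise distinct colors, and a node can designate its parent by the parent's color instead of its identifier. A minimal Python sketch, assuming the coloring has already been computed:

        def encode_parent(parent_id, neighbor_colors):
            """Store the parent's distance-2 color rather than its identifier.
            neighbor_colors : dict neighbor id -> its distance-2 color
            (the coloring itself is assumed to be already available)."""
            return neighbor_colors[parent_id]

        def decode_parent(stored_color, neighbor_colors):
            """Recover the parent from the stored color: in a distance-2 coloring
            the neighbors of a node have pairwise distinct colors, so at most one
            neighbor matches."""
            matches = [v for v, c in neighbor_colors.items() if c == stored_color]
            return matches[0] if matches else None

    Since a distance-2 coloring needs only poly($\Delta$) colors, the stored color takes $O(\log \Delta)$ bits, which is how the encoding avoids the $\Omega(\log n)$ cost of storing neighbor IDs.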
