
    Polynomial-Time Space-Optimal Silent Self-Stabilizing Minimum-Degree Spanning Tree Construction

    Motivated by applications to sensor networks, as well as to many other areas, this paper studies the construction of minimum-degree spanning trees. We consider the classical node-register state model, with a weakly fair scheduler, and we present a space-optimal \emph{silent} self-stabilizing construction of minimum-degree spanning trees in this model. Computing a spanning tree with minimum degree is NP-hard. Therefore, we actually focus on constructing a spanning tree whose degree is within one of the optimal. Our algorithm uses registers of O(\log n) bits, converges in a polynomial number of rounds, and performs polynomial-time computation at each node. Specifically, the algorithm constructs and stabilizes on a special class of spanning trees with degree at most OPT+1. Indeed, we prove that, unless NP = coNP, there are no proof-labeling schemes involving polynomial-time computation at each node for the whole family of spanning trees with degree at most OPT+1. To the best of our knowledge, this is the first example of the design of a compact silent self-stabilizing algorithm constructing, and stabilizing on, a subset of optimal solutions to a natural problem for which there are no time-efficient proof-labeling schemes. On our way to designing our algorithm, we establish a set of independent results that may be of interest on their own. In particular, we describe a new space-optimal silent self-stabilizing spanning tree construction, stabilizing on \emph{any} spanning tree, in O(n) rounds, and using just \emph{one} additional bit compared to the size of the labels used to certify trees. We also design a silent loop-free self-stabilizing algorithm for transforming a tree into another tree. Last but not least, we provide a silent self-stabilizing algorithm for computing and certifying the labels of an NCA-labeling scheme.
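    The OPT+1 guarantee echoes the classical Fürer–Raghavachari local-improvement idea. Below is a minimal, centralized sketch of one degree-reducing swap (the helper improve_once and the use of networkx are illustrative assumptions); it only conveys the combinatorial intuition and is not the paper's distributed, self-stabilizing algorithm.

```python
import networkx as nx

def improve_once(G: nx.Graph, T: nx.Graph) -> bool:
    """Try one degree-reducing swap on a spanning tree T of G.

    Look for a non-tree edge {u, v} whose unique tree path goes through a
    maximum-degree node w, while deg_T(u) and deg_T(v) are both at most
    deg_T(w) - 2; trading one of w's path edges for {u, v} then lowers
    w's degree without creating a new maximum-degree node.
    """
    dmax = max(d for _, d in T.degree())
    for u, v in G.edges():
        if T.has_edge(u, v):
            continue
        if T.degree(u) > dmax - 2 or T.degree(v) > dmax - 2:
            continue
        path = nx.shortest_path(T, u, v)   # the unique u-v path in the tree
        for i, w in enumerate(path[1:-1], start=1):
            if T.degree(w) == dmax:
                T.remove_edge(path[i - 1], w)   # break the induced cycle at w
                T.add_edge(u, v)
                return True
    return False

# usage: start from an arbitrary spanning tree and swap until no swap applies
G = nx.petersen_graph()
T = nx.minimum_spanning_tree(G)
while improve_once(G, T):
    pass
print("max degree:", max(d for _, d in T.degree()))
```

    This greedy variant terminates (the pair (maximum degree, number of maximum-degree nodes) decreases lexicographically with each swap), but it does not by itself yield the OPT+1 bound proved in the paper.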

    Silent MST approximation for tiny memory

    In network distributed computing, minimum spanning tree (MST) is one of the key problems, and silent self-stabilization one of the most demanding fault-tolerance properties. For this problem and this model, a polynomial-time algorithm with O(\log^2 n) memory is known for the state model. This is memory optimal for weights in the classic [1,\text{poly}(n)] range (where n is the size of the network). In this paper, we go below this O(\log^2 n) memory, using approximation and parameterized complexity. More specifically, our contributions are two-fold. We introduce a second parameter s, which is the space needed to encode a weight, and we design a silent polynomial-time self-stabilizing algorithm with space O(\log n \cdot s). In turn, this allows us to get an approximation algorithm for the problem, with a trade-off between the approximation ratio of the solution and the space used. For polynomial weights, this trade-off goes smoothly from memory O(\log n) for an n-approximation to memory O(\log^2 n) for exact solutions, with, for example, memory O(\log n \log\log n) for a 2-approximation.
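    To make the approximation/memory trade-off concrete, here is a back-of-the-envelope sketch under the assumption that weights are rounded up to the nearest power of alpha (the rounding scheme, the function name, and the constants are illustrative, not taken from the paper): an MST of the rounded instance is an alpha-approximation, and only the exponent of a rounded weight needs to be stored.

```python
import math

def rounded_weight_bits(n: int, alpha: float, c: int = 3) -> tuple[int, int]:
    """Bits needed when weights in [1, n^c] are rounded up to powers of alpha.

    A rounded weight is at most alpha times the original, so an MST of the
    rounded instance costs at most alpha times the true MST; a rounded
    weight is fully described by its exponent.
    """
    exponents = math.ceil(c * math.log(n) / math.log(alpha)) + 1  # 0 .. log_alpha(n^c)
    s = math.ceil(math.log2(exponents))        # bits to encode one rounded weight
    register = math.ceil(math.log2(n)) * s     # the O(log n * s) register size
    return s, register

n = 2 ** 20
for alpha in (2.0, math.sqrt(n), float(n)):
    s, reg = rounded_weight_bits(n, alpha)
    print(f"alpha = {alpha:g}: s = {s} bits per weight, registers ~ {reg} bits")
# coarser rounding means smaller registers: roughly O(log n) for an
# n-approximation, O(log n loglog n) for a 2-approximation
```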

    Memory lower bounds for deterministic self-stabilization

    In the context of self-stabilization, a \emph{silent} algorithm guarantees that the register of every node does not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent algorithm must use a memory of \Omega(\log n) bits per register in n-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of \Omega(\log^2 n) bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim at measuring how much gain in terms of memory can be expected by using arbitrary self-stabilizing algorithms, not necessarily silent. To our knowledge, the only known lower bound on the memory requirement for general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing a tight lower bound of \Theta(\log \Delta + \log\log n) bits per register for self-stabilizing algorithms solving (\Delta+1)-coloring or constructing a spanning tree in networks of maximum degree \Delta. The lower bound of \Omega(\log\log n) bits per register also holds for leader election.

    Schéma général auto-stabilisant et silencieux de constructions de type arbres couvrants

    We propose a general scheme, called Scheme, that computes spanning-tree-like data structures in arbitrary networks. Scheme is self-stabilizing and silent and, despite its generality, is also efficient. It is written in the locally shared memory model with composite atomicity and assumes a distributed unfair daemon, the weakest scheduling assumption in this model. Its stabilization time is at most 4 n_max rounds, where n_max is the maximum number of processes in a connected component. We also show polynomial upper bounds on the stabilization time in steps and in moves for large classes of instances of the algorithm Scheme. We illustrate the flexibility of our approach by describing such instances that solve classical problems such as leader election and spanning tree construction.

    Introduction to local certification

    A distributed graph algorithm is basically an algorithm where every node of a graph can look at its neighborhood at some distance in the graph and choose its output. As distributed environments are subject to faults, an important issue is to be able to check that the output is correct, or in general that the network is in a proper configuration with respect to some predicate. One would like this checking to be very local, to avoid using too many resources. Unfortunately, most predicates cannot be checked this way, and that is where certification comes into play. Local certification (also known as proof-labeling schemes, locally checkable proofs, or distributed verification) consists in assigning labels to the nodes that certify that the configuration is correct. There are several points of view on this topic: it can be seen as a part of self-stabilizing algorithms, as a labeling problem, or as a form of non-deterministic distributed decision. This paper is an introduction to the domain of local certification, giving an overview of the history, the techniques, and the current research directions.
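    As a concrete illustration, here is a tiny sketch of the classical distance-to-root certification of a rooted spanning tree (the label format and function names are illustrative, not taken from this paper): each node accepts or rejects based only on its own label and its neighbors' labels.

```python
# label of node v: (root, parent, dist), i.e., the agreed root identity,
# v's parent in the tree, and v's distance to the root

def verify_node(v, label, neighbor_labels, adjacency):
    root, parent, dist = label
    # every neighbor must advertise the same root identity
    if any(nl[0] != root for nl in neighbor_labels.values()):
        return False
    if v == root:
        return parent == v and dist == 0
    # the parent must be an actual neighbor sitting one hop closer to the root
    return parent in adjacency[v] and neighbor_labels[parent][2] == dist - 1

# tiny example: the path 0 - 1 - 2 rooted at 0
adjacency = {0: {1}, 1: {0, 2}, 2: {1}}
labels = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 2)}
accepted = all(
    verify_node(v, labels[v], {u: labels[u] for u in adjacency[v]}, adjacency)
    for v in adjacency
)
print("accepted:", accepted)  # True; corrupting any label makes some node reject
```

    If every node accepts, distances strictly decrease along parent pointers, so following parents from any node reaches the unique agreed root; in a connected graph the parent edges therefore form a spanning tree.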

    Disconnected components detection and rooted shortest-path tree maintenance in networks

    Many articles deal with the problem of maintaining a rooted shortest-path tree. However, after some edge deletions, some nodes can be disconnected from the connected component V_r of some distinguished node r. In this case, an additional objective is to ensure the detection of the disconnection by the nodes that no longer belong to V_r. We present a detailed analysis of a silent self-stabilizing algorithm. We prove that it solves this more demanding task in anonymous weighted networks with the following additional strong properties: it runs without any knowledge of the network and under the \emph{unfair} daemon, that is, without any assumption on the asynchronous model. Moreover, it terminates in less than 2n+D rounds for a network of n nodes and hop-diameter D.
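    For intuition, a legitimate configuration of such an algorithm satisfies the usual shortest-path fixpoint: every node of V_r other than r holds the minimum, over its neighbors, of the neighbor's distance plus the edge weight. The sketch below (centralized and hypothetical; it is neither the paper's algorithm nor its disconnection-detection mechanism) computes that fixpoint by repeated local relaxation.

```python
import math

def relax_until_stable(adjacency, root):
    """adjacency: {v: {u: weight}}; returns (dist, parent) at the fixpoint."""
    dist = {v: 0 if v == root else math.inf for v in adjacency}
    parent = {v: None for v in adjacency}
    changed = True
    while changed:                       # each sweep plays the role of one round
        changed = False
        for v in adjacency:
            if v == root:
                continue
            best_u = min(adjacency[v], key=lambda u: dist[u] + adjacency[v][u])
            best_d = dist[best_u] + adjacency[v][best_u]
            if best_d != dist[v]:
                dist[v], parent[v] = best_d, best_u
                changed = True
    return dist, parent

graph = {  # weighted, undirected: r-a (2), a-b (1), r-b (5)
    "r": {"a": 2, "b": 5},
    "a": {"r": 2, "b": 1},
    "b": {"a": 1, "r": 5},
}
print(relax_until_stable(graph, "r"))   # {'r': 0, 'a': 2, 'b': 3}, parents toward r
# nodes cut off from r never reach such a fixpoint, which is precisely what
# the disconnection-detection part of the algorithm must make them notice
```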

    Making local algorithms efficiently self-stabilizing in arbitrary asynchronous environments

    This paper deals with the trade-off between time, workload, and versatility in self-stabilization, a general and lightweight fault-tolerance concept in distributed computing. In this context, we propose a transformer that provides an asynchronous silent self-stabilizing version Trans(AlgI) of any terminating synchronous algorithm AlgI. The transformed algorithm Trans(AlgI) works under the distributed unfair daemon and is efficient both in moves and rounds. Our transformer allows one to easily obtain fully-polynomial silent self-stabilizing solutions that are also asymptotically optimal in rounds. We illustrate the efficiency and versatility of our transformer with several efficient (i.e., fully-polynomial) silent self-stabilizing instances solving major distributed computing problems, namely vertex coloring, Breadth-First Search (BFS) spanning tree construction, k-clustering, and leader election.
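    A hedged sketch of the textbook idea behind such transformers (the paper's construction differs in its details and efficiency guarantees): each node stores the trace of its synchronous execution, and the traces are locally checkable round by round, which is what ultimately allows the transformed algorithm to be silent. The helper names and the toy flooding algorithm below are illustrative assumptions.

```python
def locally_consistent(v, trace, neighbor_traces, init, step, T):
    """trace[t] = state of v after synchronous round t, for 0 <= t <= T."""
    if len(trace) != T + 1 or trace[0] != init(v):
        return False
    return all(
        trace[t + 1] == step(v, trace[t], {u: tr[t] for u, tr in neighbor_traces.items()})
        for t in range(T)
    )

# toy terminating synchronous algorithm: flood the maximum identifier
init = lambda v: v
step = lambda v, state, nbr_states: max([state] + list(nbr_states.values()))

adjacency = {1: {2}, 2: {1, 3}, 3: {2}}
T = 2  # at least the diameter, so flooding has converged

# build honest traces by simulating the synchronous execution...
traces = {v: [init(v)] for v in adjacency}
for t in range(T):
    for v in adjacency:
        traces[v].append(step(v, traces[v][t], {u: traces[u][t] for u in adjacency[v]}))

# ...and check them with purely local tests, as the transformer would
ok = all(
    locally_consistent(v, traces[v], {u: traces[u] for u in adjacency[v]}, init, step, T)
    for v in adjacency
)
print("traces accepted:", ok, "| final states:", {v: tr[-1] for v, tr in traces.items()})
```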

    Self-stabilizing k-clustering in mobile ad hoc networks

    In this thesis, two silent self-stabilizing asynchronous distributed algorithms are given for constructing a k-clustering of a connected network of processes. These are the first self-stabilizing solutions to this problem. One algorithm, FLOOD, takes O(k) time and uses O(k log n) space per process, while the second algorithm, BFS-MIS-CLSTR, takes O(n) time and uses O(log n) space, where n is the size of the network. Processes have unique IDs, and there is no designated leader. BFS-MIS-CLSTR solves three problems: it elects a leader and constructs a BFS tree for the network, constructs a maximal independent set, and finally computes a k-clustering. Finding a minimal k-clustering is known to be NP-hard. If the network is a unit disk graph in a plane, BFS-MIS-CLSTR is within a factor of O(7.2552k) of choosing the minimal number of clusters. A lower bound is given, showing that any comparison-based algorithm for the k-clustering problem that takes o(diam) rounds has very bad worst-case performance. Keywords: BFS tree construction, k-clustering, leader election, MIS construction, self-stabilization, unit disk graph.
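    To fix ideas on what a k-clustering is, here is a small greedy, centralized sketch (illustrative only, not one of the thesis' self-stabilizing algorithms): cluster heads are picked greedily by identifier so that every node ends up within k hops of its head.

```python
from collections import deque

def bfs_ball(adjacency, src, k):
    """All nodes within k hops of src, with their hop distances."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        if dist[v] == k:
            continue
        for u in adjacency[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def greedy_k_clustering(adjacency, k):
    heads, cluster = [], {}
    for v in sorted(adjacency):      # deterministic head choice by identifier
        if v not in cluster:         # v is not covered yet: promote it to head
            heads.append(v)
            for u in bfs_ball(adjacency, v, k):
                cluster.setdefault(u, v)
    return heads, cluster

adjacency = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(greedy_k_clustering(adjacency, k=2))
# -> ([0, 3], ...): every node is within 2 hops of its cluster head
```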

    Trade-off between Time, Space, and Workload: the case of the Self-stabilizing Unison

    We present a self-stabilizing algorithm for the (asynchronous) unison problem which achieves an efficient trade-off between time, workload, and space in a weak model. Precisely, our algorithm is defined in the atomic-state model and works in anonymous networks in which even local ports are unlabeled. It makes no assumption on the daemon and thus stabilizes under the weakest one: the distributed unfair daemon. In an n-node network of diameter D and assuming a period B \geq 2D+2, our algorithm only requires O(\log B) bits per node to achieve full polynomiality, as it stabilizes in at most 2D-2 rounds and O(\min(n^2 B, n^3)) moves. In particular, and to the best of our knowledge, it is the first self-stabilizing unison for arbitrary anonymous networks achieving an asymptotically optimal stabilization time in rounds using a bounded memory at each node. Finally, we show that our solution allows one to efficiently simulate synchronous self-stabilizing algorithms in an asynchronous environment. This provides a new state-of-the-art algorithm solving both the leader election and the spanning tree construction problems in any identified connected network, which, to the best of our knowledge, beats all existing solutions in the literature.
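    For intuition about the legitimate behavior that unison maintains, here is a toy simulation (a hedged sketch with made-up names, not the paper's algorithm, and without any stabilization machinery): a node may advance its clock modulo B only when it is not ahead of any neighbor, which keeps adjacent clocks within one tick of each other while all clocks keep progressing.

```python
import random

B = 8  # period; the paper assumes B >= 2D + 2

def can_tick(v, clock, adjacency):
    # v may advance only if no neighbor is behind it
    return all(clock[u] in (clock[v], (clock[v] + 1) % B) for u in adjacency[v])

adjacency = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
clock = {v: 0 for v in adjacency}        # start from an already synchronized state

rng = random.Random(0)
for _ in range(200):                     # an arbitrary (random) daemon
    v = rng.choice(list(adjacency))
    if can_tick(v, clock, adjacency):
        clock[v] = (clock[v] + 1) % B

print(clock)  # neighboring values always differ by at most one tick
```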