
    Polynomial-Time Space-Optimal Silent Self-Stabilizing Minimum-Degree Spanning Tree Construction

    Motivated by applications to sensor networks, as well as to many other areas, this paper studies the construction of minimum-degree spanning trees. We consider the classical node-register state model, with a weakly fair scheduler, and we present a space-optimal \emph{silent} self-stabilizing construction of minimum-degree spanning trees in this model. Computing a spanning tree with minimum degree is NP-hard. Therefore, we actually focus on constructing a spanning tree whose degree is within one of the optimal. Our algorithm uses registers of $O(\log n)$ bits, converges in a polynomial number of rounds, and performs polynomial-time computation at each node. Specifically, the algorithm constructs and stabilizes on a special class of spanning trees with degree at most $OPT+1$. Indeed, we prove that, unless NP = coNP, there are no proof-labeling schemes involving polynomial-time computation at each node for the whole family of spanning trees with degree at most $OPT+1$. To our knowledge, this is the first example of the design of a compact silent self-stabilizing algorithm constructing, and stabilizing on, a subset of optimal solutions to a natural problem for which there are no time-efficient proof-labeling schemes. On our way to designing our algorithm, we establish a set of independent results that may be of interest on their own. In particular, we describe a new space-optimal silent self-stabilizing spanning tree construction, stabilizing on \emph{any} spanning tree, in $O(n)$ rounds, and using just \emph{one} additional bit compared to the size of the labels used to certify trees. We also design a silent loop-free self-stabilizing algorithm for transforming a tree into another tree. Last but not least, we provide a silent self-stabilizing algorithm for computing and certifying the labels of an NCA-labeling scheme.
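    As an illustration of the kind of certificate such silent algorithms stabilize on, here is a minimal Python sketch (not taken from the paper; the function names and the toy network are ours) of the standard distance-based proof-labeling scheme for spanning trees: each node stores a root identifier, a parent pointer, and its distance to the root in $O(\log n)$ bits, and only compares its label with those of its neighbors.

    # Illustrative sketch of the classic distance-based spanning-tree certificate
    # (hypothetical names; not the paper's algorithm): each node stores
    # (root_id, parent, dist) and checks it against its neighbors' labels only.

    def locally_verify(node, labels, neighbors):
        """Return True iff `node` accepts its certificate given its neighbors' labels."""
        root_id, parent, dist = labels[node]
        if parent is None:                    # node claims to be the root
            return root_id == node and dist == 0
        if parent not in neighbors[node]:     # the parent must be an actual neighbor
            return False
        p_root, _, p_dist = labels[parent]
        # root identifiers must agree and distances must decrease toward the root
        return root_id == p_root and dist == p_dist + 1

    def verify_all(labels, neighbors):
        return all(locally_verify(v, labels, neighbors) for v in labels)

    # Toy network: a path 0 - 1 - 2, certified as a tree rooted at node 0.
    neighbors = {0: {1}, 1: {0, 2}, 2: {1}}
    labels = {0: (0, None, 0), 1: (0, 0, 1), 2: (0, 1, 2)}
    assert verify_all(labels, neighbors)      # every node accepts, so the parent pointers form a tree

    If some node rejects, that local alarm can be used to trigger a correction, which is what makes locally checkable certificates of this kind the natural companion of silent self-stabilization.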

    Memory lower bounds for deterministic self-stabilization

    In the context of self-stabilization, a \emph{silent} algorithm guarantees that the register of every node does not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent algorithm must use a memory of $\Omega(\log n)$ bits per register in $n$-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of $\Omega(\log^2 n)$ bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim at measuring how much gain in terms of memory can be expected by using arbitrary self-stabilizing algorithms, not necessarily silent. To our knowledge, the only known lower bound on the memory requirement for general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing a tight lower bound of $\Theta(\log \Delta + \log \log n)$ bits per register for self-stabilizing algorithms solving $(\Delta+1)$-coloring or constructing a spanning tree in networks of maximum degree $\Delta$. The lower bound of $\Omega(\log \log n)$ bits per register also holds for leader election.
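    A back-of-the-envelope remark (ours, not the paper's argument): the $\Omega(\log \Delta)$ part of the bound is already forced by the output, since in a $(\Delta+1)$-clique all $\Delta+1$ colors appear and the state-to-color map is the same at every node, so registers must distinguish at least $\Delta+1$ states; the substance of the paper lies in the additional $\Omega(\log \log n)$ term and in matching the bound. A tiny Python check of the output-size count:

    # Our own sanity check, not the paper's proof: distinguishing Delta+1 colors
    # already takes ceil(log2(Delta+1)) bits per register.
    import math

    for delta in (3, 15, 255):
        bits = math.ceil(math.log2(delta + 1))
        print(f"Delta = {delta:3d}: at least {bits} bits just to write a color in 1..{delta + 1}")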

    Silent MST approximation for tiny memory

    In network distributed computing, minimum spanning tree (MST) is one of the key problems, and silent self-stabilization one of the most demanding fault-tolerance properties. For this problem and this model, a polynomial-time algorithm with $O(\log^2 n)$ memory is known for the state model. This is memory optimal for weights in the classic $[1,\text{poly}(n)]$ range (where $n$ is the size of the network). In this paper, we go below this $O(\log^2 n)$ memory, using approximation and parametrized complexity. More specifically, our contributions are two-fold. We introduce a second parameter $s$, which is the space needed to encode a weight, and we design a silent polynomial-time self-stabilizing algorithm with space $O(\log n \cdot s)$. In turn, this allows us to get an approximation algorithm for the problem, with a trade-off between the approximation ratio of the solution and the space used. For polynomial weights, this trade-off goes smoothly from memory $O(\log n)$ for an $n$-approximation, to memory $O(\log^2 n)$ for exact solutions, with for example memory $O(\log n \log\log n)$ for a 2-approximation.
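    The 2-approximation point of this trade-off can be illustrated with the standard weight-rounding idea (our own illustration, not necessarily the paper's exact construction): for weights in $[1,\text{poly}(n)]$, keeping only the exponent $\lceil\log_2 w\rceil$ of a weight $w$ needs $s = O(\log\log n)$ bits and distorts every weight by less than a factor of 2, consistent with the $O(\log n \log\log n)$-memory point of the curve.

    # Our illustration of the space/approximation trade-off via weight rounding
    # (not necessarily the paper's exact construction).
    import math

    n, c = 10**6, 3                               # weights drawn from [1, n^c]
    max_weight = n ** c
    s_exact = math.ceil(math.log2(max_weight))    # bits for an exact weight: O(log n)
    s_rounded = math.ceil(math.log2(s_exact + 1)) # bits for just its exponent: O(log log n)

    w = 12345
    w_rounded = 2 ** math.ceil(math.log2(w))      # round the weight up to a power of 2
    assert w <= w_rounded < 2 * w                 # distortion stays below a factor of 2
    print(f"exact weights need {s_exact} bits, exponents need {s_rounded} bits; {w} -> {w_rounded}")

    An MST computed on the rounded weights then costs at most twice the true optimum, since every edge weight is preserved up to that factor and never decreases.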

    Self-stabilizing k-clustering in mobile ad hoc networks

    In this thesis, two silent self-stabilizing asynchronous distributed algorithms are given for constructing a k-clustering of a connected network of processes. These are the first self-stabilizing solutions to this problem. One algorithm, FLOOD, takes O(k) time and uses O(k log n) space per process, while the second algorithm, BFS-MIS-CLSTR, takes O(n) time and uses O(log n) space, where n is the size of the network. Processes have unique IDs, and there is no designated leader. BFS-MIS-CLSTR solves three problems: it elects a leader and constructs a BFS tree for the network, constructs a maximal independent set, and finally computes a k-clustering. Finding a minimal k-clustering is known to be NP-hard. If the network is a unit disk graph in the plane, BFS-MIS-CLSTR is within a factor of O(7.2552k) of choosing the minimal number of clusters. A lower bound is given, showing that any comparison-based algorithm for the k-clustering problem that takes o(diam) rounds has very bad worst-case performance. Keywords: BFS tree construction, k-clustering, leader election, MIS construction, self-stabilization, unit disk graph
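    To fix the terminology, a k-clustering assigns every process to a cluster head at hop distance at most k. The following centralized greedy sketch (our own illustration; it is neither FLOOD nor BFS-MIS-CLSTR, and it is not self-stabilizing) shows the shape of the output such algorithms stabilize on.

    # Minimal centralized sketch of a k-clustering output (illustration only,
    # NOT the distributed FLOOD or BFS-MIS-CLSTR algorithms of the thesis):
    # every node is assigned to a cluster head at hop distance at most k.
    from collections import deque

    def greedy_k_clustering(adj, k):
        """adj: dict node -> iterable of neighbors. Returns dict node -> cluster head."""
        head_of = {}
        for v in adj:                                  # nodes considered in arbitrary order
            if v in head_of:
                continue
            head_of[v] = v                             # v becomes a new cluster head
            # claim every still-unclustered node within k hops of v
            dist, queue = {v: 0}, deque([v])
            while queue:
                u = queue.popleft()
                if dist[u] == k:
                    continue
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        if w not in head_of:
                            head_of[w] = v
                        queue.append(w)
        return head_of

    # 6-node path with k = 2: every node ends up at most 2 hops from its head.
    path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
    print(greedy_k_clustering(path, 2))   # e.g. {0: 0, 1: 0, 2: 0, 3: 3, 4: 3, 5: 3}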

    Algorithmes auto-stabilisants pour la construction de structures couvrantes réparties

    This thesis deals with the self-stabilizing construction of spanning structures over a distributed system. Self-stabilization is a paradigm for fault tolerance in distributed algorithms. It guarantees that the system eventually satisfies its specification after transient faults hit the system. Our model of distributed system assumes locally shared memories for communicating, unique identifiers for symmetry breaking, and a distributed daemon for execution scheduling, that is, the weakest proper daemon. More generally, we aim for the weakest possible assumptions, such as arbitrary topologies, in order to propose the most versatile constructions of distributed spanning structures. We present four original self-stabilizing algorithms achieving k-clustering, (f,g)-alliance construction, and ranking. For each of these problems, we prove the correctness of our solutions. Moreover, we analyze their time and space complexity using formal proofs and simulations. Finally, for the (f,g)-alliance problem, we consider the notion of safe convergence in addition to self-stabilization. It requires the system to first quickly satisfy a specification that guarantees a minimum of conditions, and then to converge to a more stringent specification.

    Probabilistic methods for distributed information dissemination

    Thesis (Ph.D.) by Bernhard Haeupler, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. The ever-increasing growth of modern networks comes with a paradigm shift in network operation. Networks can no longer be abstracted as deterministic, centrally controlled systems with static topologies but need to be understood as highly distributed, dynamic systems with inherent unreliabilities. This makes many communication, coordination and computation tasks challenging, and in many scenarios communication becomes a crucial bottleneck. In this thesis, we develop new algorithms and techniques to address these challenges. In particular, we concentrate on broadcast and information dissemination tasks and introduce novel ideas on how randomization can lead to powerful, simple and practical communication primitives suitable for these modern networks. In this endeavor we combine and further develop tools from different disciplines, trying to simultaneously address the distributed, information-theoretic and algorithmic aspects of network communication. The two main probabilistic techniques developed to disseminate information in a network are gossip and random linear network coding. Gossip is an alternative to classical flooding approaches: instead of nodes repeatedly forwarding information to all their neighbors, gossiping nodes forward information only to a small number of (random) neighbors. We show that, when done right, gossip disperses information almost as quickly as flooding, albeit with a drastically reduced communication overhead. Random linear network coding (RLNC) applies when a large amount of information or many messages are to be disseminated. Instead of routing messages through intermediate nodes, that is, following a classical store-and-forward approach, RLNC mixes messages together by forwarding random linear combinations of messages. The simplicity and topology-obliviousness of this approach makes RLNC particularly interesting for the distributed settings considered in this thesis. Unfortunately, the performance of RLNC was not well understood even for the simplest such settings. We introduce a simple yet powerful analysis technique that allows us to prove optimal performance guarantees for all settings considered in the literature and many more that were not analyzable so far. Specifically, we give many new results for RLNC gossip algorithms, RLNC algorithms for dynamic networks, and RLNC with correlated data. We also provide a novel, highly efficient distributed implementation of RLNC that achieves these performance guarantees while buffering only a minimal amount of information at intermediate nodes. We then apply our techniques to improve communication primitives in multi-hop radio networks. While radio networks inherently support broadcast communications, e.g., from one node to all surrounding nodes, interference of simultaneous transmissions makes multi-hop broadcast communication an interesting challenge. We show that, again, randomization holds the key to obtaining simple, efficient and distributed information dissemination protocols. In particular, using random back-off strategies to coordinate access to the shared medium leads to optimal gossip-like communications, and applying RLNC achieves the first throughput-optimal multi-message communication primitives.
    Lastly, we apply our probabilistic approach for analyzing simple, distributed propagation protocols in a broader context by studying algorithms for the Lovász Local Lemma. These algorithms find solutions to certain local constraint satisfaction problems by randomly fixing and propagating violations locally. Our two main results show that, firstly, there are also efficient deterministic propagation strategies achieving the same and, secondly, using the random fixing strategy has the advantage of producing not just an arbitrary solution but an approximately uniformly random one. Both results lead to simple constructions for many locally consistent structures of interest that were not known to be efficiently constructible before.
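    For readers unfamiliar with RLNC, the following self-contained Python sketch (our own toy example over GF(2), far simpler than the settings analyzed in the thesis) shows the core idea: forward random linear combinations of the packets you hold, and decode as soon as the received coefficient vectors reach full rank.

    # Toy illustration of random linear network coding (RLNC) over GF(2); this is
    # our own simplified example, not the thesis's implementation.
    import random

    def random_combination(packets):
        """packets: list of (coeff_vector, payload) pairs, both encoded as ints over GF(2)."""
        coeff, payload = 0, 0
        for c, p in packets:
            if random.getrandbits(1):     # include each held packet with probability 1/2
                coeff ^= c
                payload ^= p
        return coeff, payload

    def rank_gf2(vectors):
        """Rank of a set of GF(2) row vectors (ints), via Gaussian elimination on leading bits."""
        basis = {}                        # leading-bit position -> basis vector
        for v in vectors:
            while v:
                top = v.bit_length() - 1
                if top not in basis:
                    basis[top] = v
                    break
                v ^= basis[top]
        return len(basis)

    # A source holding k = 4 messages keeps sending random combinations until the
    # receiver's coefficient vectors span GF(2)^4 and decoding becomes possible.
    k = 4
    source = [(1 << i, random.getrandbits(32)) for i in range(k)]
    received = []
    while rank_gf2([c for c, _ in received]) < k:
        received.append(random_combination(source))
    print(f"decodable after {len(received)} received combinations")

    The receiver never needs to know which path a combination took, which is the topology-obliviousness the abstract refers to.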

    Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design including verification, specification, synthesis, and testing.