3,159 research outputs found

    Self-stabilizing algorithms for Connected Vertex Cover and Clique decomposition problems

    Full text link
    In many wireless networks, there is neither a fixed physical backbone nor centralized network management. The nodes of such a network have to self-organize in order to maintain a virtual backbone used to route messages. Moreover, any node of the network can a priori be at the origin of a malicious attack. Thus, on the one hand the backbone must be fault-tolerant, and on the other hand it can be useful to monitor all network communications to identify an attack as soon as possible. We are interested in the minimum \emph{Connected Vertex Cover} problem, a generalization of the classical minimum Vertex Cover problem, which allows one to obtain a connected backbone. Recently, Delbot et al.~\cite{DelbotLP13} proposed a new centralized algorithm with a constant approximation ratio of 2 for this problem. In this paper, we propose a distributed and self-stabilizing version of their algorithm with the same approximation guarantee. To the best of the authors' knowledge, it is the first distributed and fault-tolerant algorithm for this problem. The approach followed to solve the considered problem is based on the construction of a connected minimal clique partition. Therefore, we also design the first distributed self-stabilizing algorithm for this problem, which is of independent interest.
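
    As a point of reference, here is a minimal centralized sketch of the classical 2-approximation for Connected Vertex Cover that takes the internal nodes of a DFS tree (Savage's heuristic); it is not the clique-partition-based algorithm of Delbot et al., and the adjacency-dict representation is just an assumption for illustration.

```python
# Minimal sketch: classical 2-approximation for Connected Vertex Cover
# via the internal (non-leaf) nodes of a DFS tree. NOT the clique-partition
# algorithm of Delbot et al.; it only illustrates the kind of guarantee
# discussed above.

def connected_vertex_cover_dfs(adj, root):
    """Internal nodes of a DFS tree rooted at `root` (adjacency-dict graph).

    In a DFS tree every non-tree edge joins an ancestor and a descendant,
    so no edge connects two leaves; hence the internal nodes form a vertex
    cover, the cover is connected, and it is at most twice the optimum size.
    """
    visited = {root}
    internal = set()

    def dfs(u):
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                internal.add(u)   # u gained a tree child, so it is internal
                dfs(v)

    dfs(root)
    return internal

# Tiny usage example.
adj = {1: [2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3]}
print(connected_vertex_cover_dfs(adj, 1))   # {1, 2, 3}
```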

    Distributed Computation of Connected Dominating Set for Multi-Hop Wireless Networks

    Get PDF
    In large wireless multi-hop networks, routing is a main issue as they include many nodes that span a relatively large area. In such a scenario, finding the smallest set of dominant nodes for forwarding packets would be a good approach for better communication. Connected dominating set (CDS) computation is one of the methods to find important nodes in the network. As CDS computation is an NP-hard problem, several approximation algorithms are available, but these algorithms have high message complexity. This paper discusses the design and implementation of a distributed algorithm to compute connected dominating sets in a wireless network with the help of network spectral properties. Based on its local neighborhood, each node in the network finds its ego-centric network. To identify dominant nodes, it uses the bridge centrality value of its ego-centric network. A distributed algorithm is proposed to find nodes that connect the dominant nodes, which approximates a CDS. The algorithm has been applied to networks with different network sizes and varying edge probability distributions. The algorithm outputs 40% of the nodes as important nodes to form backhaul communication links, with an approximation ratio ≤ 0.04 * ∂ + 1, where ∂ is the maximum node degree. The results confirm that the algorithm contributes to a better performance with reduced message complexity.
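
    As a rough illustration of the general flavour (not the paper's distributed algorithm), the sketch below builds each node's ego-centric network, ranks nodes with a simple bridging-style score, and greedily keeps high scorers as dominators; the exact bridge-centrality formula and the connector-selection step are not reproduced, and all names are illustrative.

```python
# Illustrative, centralized sketch only: ego networks plus a simple
# bridging-style score, followed by a greedy choice of dominators.

def ego_network(adj, v):
    """Ego-centric network of v: v, its neighbours, and the edges among them."""
    members = {v} | set(adj[v])
    return {u: [w for w in adj[u] if w in members] for u in members}

def bridging_score(adj, v):
    """Stand-in score: pairs of v's neighbours that are not directly
    connected, i.e. pairs for which v acts as a local bridge."""
    ego = ego_network(adj, v)
    nbrs = [u for u in ego if u != v]
    return sum(1 for i, a in enumerate(nbrs) for b in nbrs[i + 1:]
               if b not in ego[a])

def dominant_nodes(adj):
    """Greedy dominating set that prefers high bridging scores."""
    order = sorted(adj, key=lambda v: bridging_score(adj, v), reverse=True)
    dominated, chosen = set(), set()
    for v in order:
        if v not in dominated:
            chosen.add(v)
            dominated.add(v)
            dominated.update(adj[v])
    return chosen
```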

    Universal Loop-Free Super-Stabilization

    Get PDF
    We propose a universal scheme to design loop-free and super-stabilizing protocols for constructing spanning trees optimizing any tree metric (not only those that are isomorphic to a shortest-path tree). Our scheme combines a novel super-stabilizing loop-free BFS with an existing self-stabilizing spanning tree that optimizes a given metric. The composition result preserves the best properties of both worlds: super-stabilization, loop-freedom, and optimization of the original metric without any stabilization time penalty. As a case study, we apply our composition mechanism to two well-known metric-dependent spanning trees: the maximum-flow tree and the minimum-degree spanning tree.
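
    For intuition about the BFS building block only (the super-stabilization, loop-freedom and composition machinery of the paper are not modelled), here is a small simulation of the classic self-stabilizing BFS rule, under the assumption of a connected graph and in-place, central-daemon-style updates.

```python
# Minimal sketch of the classic self-stabilizing BFS rule:
# dist(v) = 1 + min over neighbours, parent(v) = a closest neighbour.
# Updates are applied in place until no node wants to move.

def stabilize_bfs(adj, root, dist, parent, max_rounds=1000):
    """Repair arbitrary initial dist/parent values into a BFS tree."""
    for _ in range(max_rounds):
        changed = False
        for v in adj:
            if v == root:
                new_d, new_p = 0, None
            else:
                best = min(adj[v], key=lambda u: dist[u])
                new_d, new_p = dist[best] + 1, best
            if (new_d, new_p) != (dist[v], parent[v]):
                dist[v], parent[v] = new_d, new_p
                changed = True
        if not changed:
            return dist, parent
    raise RuntimeError("did not stabilize within max_rounds")

# Example: corrupted initial state on a 4-node square graph.
adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
dist = {1: 7, 2: 0, 3: 9, 4: 2}            # arbitrary (faulty) values
parent = {1: 4, 2: 3, 3: 2, 4: 1}
print(stabilize_bfs(adj, 1, dist, parent))
```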

    Self-* distributed query region covering in sensor networks

    Full text link
    Wireless distributed sensor networks are used to monitor a multitude of environments for both civil and military applications. Sensors may be deployed to unreachable or inhospitable areas. Thus, they cannot be replaced easily. However, due to various factors, sensors' internal memory, or the sensors themselves, can become corrupted. Hence, there is a need for more robust sensor networks. Sensors are most commonly densely deployed, but keeping all sensors continually active is not energy efficient. Our aim is to select the minimum number of sensors which can entirely cover a particular monitored area, while remaining strongly connected. This concept is called a Minimum Connected Cover of a query region in a sensor network. In this research, we have designed two fully distributed, robust, self-* solutions to the minimum connected cover of query regions that can cope with both transient faults and sensor crashes. We considered the most general case in which every sensor has a different sensing and communication radius. We have also designed extended versions of the algorithms that use multi-hop information to obtain better results utilizing small atomicity (i.e., each sensor reads only one of its neighbors' variables at a time, instead of reading all neighbors' variables). With this, we have proven self-* (self-configuration, self-stabilization, and self-healing) properties of our solutions, both analytically and experimentally. The simulation results show that our solutions provide better performance in terms of coverage than pre-existing self-stabilizing algorithms.
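
    To make the problem concrete, the following is a centralized greedy sketch of covering a query region with sensors of heterogeneous sensing radii; it is not one of the paper's self-* distributed algorithms, and the relay-selection step that would keep the chosen set connected is omitted.

```python
# Centralized greedy sketch of covering a query region with sensors that
# have heterogeneous sensing radii. Connectivity of the selected set (the
# "connected" part of Minimum Connected Cover) is NOT enforced here.
from math import dist as euclid

def greedy_query_cover(sensors, query_points):
    """sensors: id -> (x, y, sensing_radius); query_points: list of (x, y)."""
    def covers(s, p):
        x, y, r = sensors[s]
        return euclid((x, y), p) <= r

    uncovered, selected = set(query_points), []
    while uncovered:
        # Pick the sensor covering the most still-uncovered points.
        best = max(sensors, key=lambda s: sum(covers(s, p) for p in uncovered))
        newly = {p for p in uncovered if covers(best, p)}
        if not newly:
            break                      # remaining points are not coverable
        selected.append(best)
        uncovered -= newly
    return selected
```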

    Lattice Linear Problems vs Algorithms

    Full text link
    Modelling problems using predicates that induce a partial order among global states was introduced as a way to permit asynchronous execution in multiprocessor systems. A key property of such problems is that the predicate induces one lattice in the state space which guarantees that the execution is correct even if nodes execute with old information about their neighbours. Thus, a compiler that is aware of this property can ignore data dependencies and allow the application to continue its execution with the available data rather than waiting for the most recent one. Unfortunately, many interesting problems do not exhibit lattice linearity. This issue was alleviated with the introduction of eventually lattice linear algorithms. Such algorithms induce a partial order in a subset of the state space even though the problem cannot be defined by a predicate under which the states form a partial order. This paper focuses on analyzing and differentiating between lattice linear problems and algorithms. It also introduces a new class of algorithms called (fully) lattice linear algorithms. A characteristic of these algorithms is that the entire reachable state space is partitioned into one or more lattices and the initial state locks into one of these lattices. Thus, under a few additional constraints, the initial state can uniquely determine the final state. For demonstration, we present lattice linear self-stabilizing algorithms for minimal dominating set and graph colouring problems, and a parallel processing 2-approximation algorithm for vertex cover. The algorithm for minimal dominating set converges in n moves, and that for graph colouring converges in n+2m moves. The algorithm for vertex cover is the first lattice linear approximation algorithm for an NP-Hard problem; it converges in n moves.
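
    As a reference point for the approximation target mentioned above, here is the classical maximal-matching 2-approximation for Vertex Cover; it is not the lattice linear, parallel algorithm of the paper.

```python
# Classical maximal-matching 2-approximation for Vertex Cover, shown only
# as a baseline for the factor-2 guarantee; NOT the paper's lattice linear
# parallel algorithm.

def vertex_cover_2approx(edges):
    """Greedily build a maximal matching and take both endpoints of every
    matched edge; the result is a vertex cover of size at most 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1)]))   # {1, 2, 3, 4}
```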

    Pulse propagation, graph cover, and packet forwarding

    Get PDF
    We study distributed systems, with a particular focus on graph problems and fault tolerance. Fault tolerance in a microprocessor or even a System-on-Chip can be improved by using a fault-tolerant pulse propagation design. The existing design TRIX achieves this goal by being a distributed system consisting of very simple nodes. We show that even in the typical mode of operation without faults, TRIX performs significantly better than a regular wire or clock tree: statistical evaluation of our simulated experiments shows that we achieve a skew with standard deviation of O(log log H), where H is the height of the TRIX grid. The distance-r generalization of classic graph problems can give us insights into how distance affects the hardness of a problem. For the distance-r dominating set problem, we present both an algorithmic upper bound and an unconditional lower bound for any graph class with certain high-girth and sparseness criteria. In particular, our algorithm achieves an O(r·f(r))-approximation in time O(r), where f is the expansion function, which correlates with density. For constant r, this implies a constant approximation factor, in constant time. We also show that no algorithm can achieve a (2r + 1 − δ)-approximation for any δ > 0 in time O(r), not even on the class of cycles of girth at least 5r. Furthermore, we extend the algorithm to related graph cover problems and even to a different execution model. Finally, we investigate the problem of packet forwarding, which addresses the question of how and when best to forward packets in a distributed system. These packets are injected by an adversary. We build on the existing algorithm OED to handle more than a single destination. In particular, we show that buffers of size O(log n) are sufficient for this algorithm, in contrast to O(n) for the naive approach.
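
    To make the distance-r dominating set problem concrete, the sketch below merely encodes its definition via r-bounded BFS; it is not the O(r)-time distributed approximation algorithm of the thesis.

```python
# Checks the distance-r dominating set property via r-bounded BFS.
# This only encodes the definition of the problem studied above.
from collections import deque

def within_distance_r(adj, sources, r):
    """All vertices at hop distance <= r from some vertex in `sources`."""
    seen = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        if seen[u] == r:
            continue
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    return set(seen)

def is_distance_r_dominating_set(adj, dominators, r):
    return within_distance_r(adj, dominators, r) == set(adj)

# Example: on a 6-cycle, {0, 3} is a distance-1 dominating set.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(is_distance_r_dominating_set(cycle6, {0, 3}, 1))   # True
```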

    Survey of Distributed Decision

    Get PDF
    We survey the recent distributed computing literature on checking whether a given distributed system configuration satisfies a given boolean predicate, i.e., whether the configuration is legal or illegal w.r.t. that predicate. We consider classical distributed computing environments, including mostly synchronous fault-free network computing (LOCAL and CONGEST models), but also asynchronous crash-prone shared-memory computing (WAIT-FREE model), and mobile computing (FSYNC model).

    Optimal Dynamic Distributed MIS

    Full text link
    Finding a maximal independent set (MIS) in a graph is a cornerstone task in distributed computing. The local nature of an MIS allows for fast solutions in a static distributed setting, which are logarithmic in the number of nodes or in their degrees. The result trivially applies to the dynamic distributed model, in which edges or nodes may be inserted or deleted. In this paper, we take a different approach which exploits locality to the extreme, and show how to update an MIS in a dynamic distributed setting, either \emph{synchronous} or \emph{asynchronous}, with only \emph{a single adjustment} and in a single round, in expectation. These strong guarantees hold for the \emph{complete fully dynamic} setting: insertions and deletions, of edges as well as nodes, gracefully and abruptly. This strongly separates the static and dynamic distributed models, as super-constant lower bounds exist for computing an MIS in the former. Our results are obtained by a novel analysis of the surprisingly simple solution of carefully simulating the greedy \emph{sequential} MIS algorithm with a random ordering of the nodes. As such, our algorithm has a direct application as a 3-approximation algorithm for correlation clustering. This adds to the important toolbox of distributed graph decompositions, which are widely used as crucial building blocks in distributed computing. Finally, our algorithm enjoys a useful \emph{history-independence} property, meaning the output is independent of the history of topology changes that constructed that graph. This means the output cannot be chosen, or even biased, by the adversary in case its goal is to prevent us from optimizing some objective function.
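
    The sequential procedure that the dynamic algorithm simulates is simply the greedy MIS under a random node ordering; a sketch is given below for reference, while the distributed, dynamic maintenance with a single expected adjustment per update is the paper's contribution and is not reproduced here.

```python
# Greedy sequential MIS under a random node ordering: process nodes in a
# random permutation and add a node iff no earlier neighbour joined.
import random

def random_greedy_mis(adj, seed=None):
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)                 # random permutation of the nodes
    mis = set()
    for v in order:
        if not any(u in mis for u in adj[v]):
            mis.add(v)
    return mis

# Example on a 5-cycle.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(random_greedy_mis(adj, seed=42))
```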

    Self-stabilizing k-clustering in mobile ad hoc networks

    Full text link
    In this thesis, two silent self-stabilizing asynchronous distributed algorithms are given for constructing a k-clustering of a connected network of processes. These are the first self-stabilizing solutions to this problem. One algorithm, FLOOD, takes O(k) time and uses O(k log n) space per process, while the second algorithm, BFS-MIS-CLSTR, takes O(n) time and uses O(log n) space, where n is the size of the network. Processes have unique IDs, and there is no designated leader. BFS-MIS-CLSTR solves three problems: it elects a leader and constructs a BFS tree for the network, constructs a maximal independent set, and finally a k-clustering. Finding a minimum k-clustering is known to be NP-hard. If the network is a unit disk graph in a plane, BFS-MIS-CLSTR is within a factor of O(7.2552k) of choosing the minimal number of clusters. A lower bound is given, showing that any comparison-based algorithm for the k-clustering problem that takes o(diam) rounds has very bad worst-case performance. Keywords: BFS tree construction, k-clustering, leader election, MIS construction, self-stabilization, unit disk graph.
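
    For the problem statement only, here is a simple centralized greedy k-clustering in which every node ends up within k hops of its cluster head; the thesis gives silent self-stabilizing distributed algorithms, which this sketch is not.

```python
# Centralized greedy k-clustering: repeatedly pick an unassigned node as a
# cluster head and let it claim all unassigned nodes within k hops.
from collections import deque

def greedy_k_clustering(adj, k):
    unassigned, cluster_of = set(adj), {}
    while unassigned:
        head = min(unassigned)             # deterministic head choice
        dist = {head: 0}
        queue = deque([head])
        while queue:                       # BFS up to depth k from the head
            u = queue.popleft()
            if dist[u] == k:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        claimed = {v for v in dist if v in unassigned}
        for v in claimed:
            cluster_of[v] = head
        unassigned -= claimed
    return cluster_of

# Example: a 7-node path with k = 2.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 6] for i in range(7)}
print(greedy_k_clustering(path, 2))
```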