Polynomial Silent Self-Stabilizing p-Star Decomposition
We present a silent self-stabilizing distributed algorithm computing a maximal p-star decomposition of the underlying communication network. Under the unfair distributed scheduler, the most general scheduler model, the algorithm converges in at most 12Δm + O(m + n) moves, where m is the number of edges, n is the number of nodes, and Δ is the maximum node degree. In terms of move complexity, our algorithm outperforms the previously best known algorithm by a factor of Δ. While the round complexity of the previous algorithm was unknown, we show a 5⌈n/(p+1)⌉ + 5 round bound for ours.
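To make the object concrete: a p-star is one centre node joined to exactly p leaves, and a maximal p-star decomposition is a collection of vertex-disjoint p-stars to which no further p-star can be added. A minimal centralized greedy sketch (our own illustration with an adjacency-dict representation, not the paper's distributed self-stabilizing algorithm):

```python
def maximal_p_star_decomposition(adj, p):
    """Greedy centralized sketch: repeatedly pick a free node with at
    least p free neighbours and make it the centre of a p-star, until
    no further p-star can be formed (maximality)."""
    free = set(adj)                     # nodes not yet in any star
    stars = []
    changed = True
    while changed:
        changed = False
        for v in sorted(free):
            leaves = [u for u in adj[v] if u in free][:p]
            if len(leaves) == p:
                stars.append((v, leaves))   # v is the centre
                free.discard(v)
                free.difference_update(leaves)
                changed = True
                break                       # rescan with updated state
    return stars
```

On the path 0-1-2-3 with p = 2, the sketch forms the single 2-star centred at node 1, after which no node has two free neighbours left.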
Design of Self-Stabilizing Approximation Algorithms via a Primal-Dual Approach
Self-stabilization is an important concept in the realm of fault-tolerant distributed computing. In this paper, we propose a new approach that relies on the properties of linear programming duality to obtain self-stabilizing approximation algorithms for distributed graph optimization problems. The power of this new approach is demonstrated by the following results:
- A self-stabilizing 2(1+ε)-approximation algorithm for minimum weight vertex cover that converges in O(log Δ/(ε log log Δ)) synchronous rounds.
- A self-stabilizing Δ-approximation algorithm for maximum weight independent set that converges in O(Δ + log* n) synchronous rounds.
- A self-stabilizing ((2α+1)(1+ε))-approximation algorithm for minimum weight dominating set in α-arboricity graphs that converges in O((log Δ)/ε) synchronous rounds.

In all of the above, Δ denotes the maximum degree. Our technique improves upon previous results in terms of time complexity while incurring only an additive O(log n) overhead to the message size. In addition, to the best of our knowledge, we provide the first self-stabilizing algorithms for the weighted versions of minimum vertex cover and maximum independent set.
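The duality idea behind such results can be seen in the classic sequential primal-dual 2-approximation for weighted vertex cover: raise each edge's dual variable until one endpoint's weight is exhausted, and take the tight vertices. This is a centralized sketch for intuition only (function names ours), not the paper's self-stabilizing distributed construction:

```python
def primal_dual_vertex_cover(edges, weight):
    """Sequential primal-dual sketch for minimum weight vertex cover.
    Invariant: each vertex's dual load never exceeds its weight, and a
    vertex enters the cover exactly when it becomes tight.  LP duality
    then bounds the cover's weight by twice the optimum."""
    slack = dict(weight)            # remaining weight: w(v) - sum of duals
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue                # edge already covered
        y = min(slack[u], slack[v]) # raise this edge's dual as far as possible
        slack[u] -= y
        slack[v] -= y
        if slack[u] == 0:
            cover.add(u)            # u is tight
        if slack[v] == 0:
            cover.add(v)            # v is tight
    return cover
```

On the path 0-1-2 with unit weights, processing edge (0, 1) makes both endpoints tight, so {0, 1} is returned and edge (1, 2) is already covered.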
On the Limits and Practice of Automatically Designing Self-Stabilization
A protocol is said to be self-stabilizing when the distributed system executing it is guaranteed to recover from any fault that does not cause permanent damage. Designing such protocols is hard since they must recover from all possible states; we therefore investigate how feasible it is to synthesize them automatically. We show that synthesizing stabilization on a fixed topology is NP-complete in the number of system states. When a solution is found, we further show that verifying its correctness on a general topology (with any number of processes) is undecidable, even for very simple unidirectional rings. Despite these negative results, we develop an algorithm to synthesize a self-stabilizing protocol given its desired topology, legitimate states, and behavior. By analogy to shadow puppetry, where a puppeteer may design a complex puppet to cast a desired shadow, a protocol may need to be designed in a complex way that does not even resemble its specification. Our shadow/puppet synthesis algorithm addresses this concern and, using a complete backtracking search, has automatically designed 4 new self-stabilizing protocols with minimal process space requirements: 2-state maximal matching on bidirectional rings, 5-state token passing on unidirectional rings, 3-state token passing on bidirectional chains, and 4-state orientation on daisy chains.
Lattice Linear Problems vs Algorithms
Modelling problems using predicates that induce a partial order among global
states was introduced as a way to permit asynchronous execution in
multiprocessor systems. A key property of such problems is that the predicate
induces one lattice in the state space which guarantees that the execution is
correct even if nodes execute with old information about their neighbours.
Thus, a compiler that is aware of this property can ignore data dependencies
and allow the application to continue its execution with the available data
rather than waiting for the most recent one. Unfortunately, many interesting
problems do not exhibit lattice linearity. This issue was alleviated with the
introduction of eventually lattice linear algorithms. Such algorithms induce a
partial order in a subset of the state space even though the problem cannot be
defined by a predicate under which the states form a partial order.
This paper focuses on analyzing and differentiating between lattice linear
problems and algorithms. It also introduces a new class of algorithms called
(fully) lattice linear algorithms. A characteristic of these algorithms is that
the entire reachable state space is partitioned into one or more lattices and
the initial state locks into one of these lattices. Thus, under a few
additional constraints, the initial state can uniquely determine the final
state. For demonstration, we present lattice linear self-stabilizing algorithms
for minimal dominating set and graph colouring problems, and a parallel
processing 2-approximation algorithm for vertex cover.
The algorithm for minimal dominating set converges in n moves, and that for
graph colouring converges in n+2m moves. The algorithm for vertex cover is the
first lattice linear approximation algorithm for an NP-Hard problem; it
converges in n moves.
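For intuition about the minimal dominating set target, here is a centralized move-based sketch (our own, not the paper's lattice linear algorithm): an undominated node joins the set when it has the smallest ID among the undominated nodes in its closed neighbourhood. The construction yields a maximal independent set, and every maximal independent set is a minimal dominating set.

```python
def minimal_dominating_set(adj):
    """Centralized move-based sketch: build a maximal independent set
    by letting the lowest-ID undominated node in each neighbourhood
    join; the result dominates every node and is minimal."""
    s = set()

    def dominated(v):
        return v in s or any(u in s for u in adj[v])

    moved = True
    while moved:
        moved = False
        for v in sorted(adj):
            # guarded move: join only if v is the locally smallest
            # undominated node, so s stays independent
            if not dominated(v) and all(
                    dominated(u) or v < u for u in adj[v]):
                s.add(v)
                moved = True
    return s
```

On the path 0-1-2-3 this selects {0, 2}: node 0 joins first, which dominates 1, and node 2 then joins, dominating 3.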
Making Self-Stabilizing any Locally Greedy Problem
We propose a way to transform synchronous distributed algorithms solving
locally greedy and mendable problems into self-stabilizing algorithms in
anonymous networks. Mendable problems are a generalization of greedy problems
where any partial solution may be transformed -- instead of completed -- into a
global solution: every time we extend the partial solution we are allowed to
change the previous partial solution up to a given distance. Locally here means
that to extend a solution for a node, we need to look at a constant distance
from it. In order to do this, we propose the first explicit self-stabilizing
algorithm computing a k-ruling set (i.e. a "maximal independent set at
distance k"). By combining this technique multiple times, we compute a
distance-K coloring of the graph. With this coloring we can finally simulate
LOCAL-model algorithms running in a constant number of rounds, using the
colors as unique identifiers. Our algorithms work under the Gouda daemon, which
is similar to the probabilistic daemon: if an event should eventually happen,
it will occur under this daemon.
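The k-ruling set notion itself is easy to sketch with a centralized greedy: scan nodes in ID order and pick a node whenever no already-picked node lies within distance k of it (checked by a depth-bounded BFS). This toy code (names ours) illustrates the object, not the paper's self-stabilizing construction:

```python
from collections import deque

def k_ruling_set(adj, k):
    """Greedy centralized sketch of a k-ruling set: chosen nodes are
    pairwise more than k apart, and every node is within distance k
    of a chosen one (a "maximal independent set at distance k")."""
    def within_k(src, targets):
        # depth-bounded BFS: is any node of `targets` within distance k?
        seen, frontier = {src}, deque([(src, 0)])
        while frontier:
            v, d = frontier.popleft()
            if v in targets:
                return True
            if d < k:
                for u in adj[v]:
                    if u not in seen:
                        seen.add(u)
                        frontier.append((u, d + 1))
        return False

    chosen = set()
    for v in sorted(adj):
        if not within_k(v, chosen):
            chosen.add(v)
    return chosen
```

On the path 0-1-2-3-4 with k = 2, node 0 is picked, nodes 1 and 2 are within distance 2 of it, and node 3 (at distance 3) is picked next, covering node 4.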
Self-Stabilizing Distributed Cooperative Reset
Self-stabilization is a versatile fault-tolerance approach that characterizes the ability of a system to eventually resume correct behavior after any finite number of transient faults. In this paper, we propose a self-stabilizing reset algorithm working in anonymous networks. This algorithm resets the network in a distributed, non-centralized manner, i.e., it is multi-initiator, as each process detecting an inconsistency may initiate a reset. It is also cooperative in the sense that it coordinates concurrent reset executions in order to gain efficiency. Our approach is general, since our reset algorithm can be used to build self-stabilizing solutions for various problems and settings. In fact, we show that it applies to both static and dynamic specifications, as we propose efficient self-stabilizing reset-based algorithms for the (1-minimal) (f, g)-alliance (a generalization of the dominating set problem) in identified networks and for the unison problem in anonymous networks. Both of these instantiations enhance the state of the art: in the former case, our solution is more general than previous ones, while in the latter case, the complexity of our unison algorithm is better than that of previous solutions in the literature.
Optimal Dynamic Distributed MIS
Finding a maximal independent set (MIS) in a graph is a cornerstone task in
distributed computing. The local nature of an MIS allows for fast solutions in
a static distributed setting, which are logarithmic in the number of nodes or
in their degrees. These results trivially carry over to the dynamic distributed
model, in which edges or nodes may be inserted or deleted. In this paper, we
take a different approach which exploits locality to the extreme, and show how
to update an MIS in a dynamic distributed setting, either \emph{synchronous} or
\emph{asynchronous}, with only \emph{a single adjustment} and in a single
round, in expectation. These strong guarantees hold for the \emph{complete
fully dynamic} setting: Insertions and deletions, of edges as well as nodes,
gracefully and abruptly. This strongly separates the static and dynamic
distributed models, as super-constant lower bounds exist for computing an MIS
in the former.
Our results are obtained by a novel analysis of the surprisingly simple
solution of carefully simulating the greedy \emph{sequential} MIS algorithm
with a random ordering of the nodes. As such, our algorithm has a direct
application as a 3-approximation algorithm for correlation clustering. This
adds to the important toolbox of distributed graph decompositions, which are
widely used as crucial building blocks in distributed computing.
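The simulated sequential process is simply greedy MIS over a random permutation: a node joins the MIS if and only if none of its neighbours earlier in the order joined. A minimal sketch (function name and seeding ours):

```python
import random

def random_greedy_mis(adj, seed=0):
    """Greedy sequential MIS over a uniformly random node ordering --
    the process the dynamic algorithm simulates.  A node joins iff no
    neighbour that precedes it in the order has joined."""
    order = sorted(adj)
    random.Random(seed).shuffle(order)   # random permutation of nodes
    mis = set()
    for v in order:
        if all(u not in mis for u in adj[v]):
            mis.add(v)
    return mis
```

On a triangle, whichever node comes first in the permutation joins alone, so the output always has size one; the dynamic algorithm's guarantees come from maintaining this random-order output under topology changes.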
Finally, our algorithm enjoys a useful \emph{history-independence} property,
meaning the output is independent of the history of topology changes that
constructed that graph. This means the output cannot be chosen, or even biased,
by the adversary in case its goal is to prevent us from optimizing some
objective function.
Self-Stabilization in the Distributed Systems of Finite State Machines
The notion of self-stabilization was first proposed by Dijkstra in 1974 in his classic paper. That paper defines a system as self-stabilizing if, starting from any, possibly illegitimate, state, the system can automatically adjust itself to converge to a legitimate state in a finite amount of time, and once in a legitimate state it remains so unless it incurs a subsequent transient fault. Dijkstra limited his attention to a ring of finite-state machines and provided solutions for its self-stabilization. In the years following his introduction, very few papers were published in this area. Once his proposal was recognized as a milestone in work on fault tolerance, the notion spread rapidly among researchers, and many in the distributed systems community turned their attention to it. The investigation and use of self-stabilization as an approach to fault-tolerant behavior under a model of transient failures for distributed systems is now undergoing a renaissance. A good number of works on self-stabilization in distributed systems have since appeared, most of them quite recent. This report surveys the previous work available in the literature on self-stabilizing systems.
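Dijkstra's original K-state token ring illustrates the definition: from any configuration, the ring converges to having exactly one privileged machine (the "token"), and that property is preserved thereafter. A small simulation sketch (the policy of always firing the lowest-indexed privileged node is our simplification of the scheduler; convergence from arbitrary states needs k ≥ n):

```python
def privileges(states, k):
    """Indices of privileged nodes in Dijkstra's K-state token ring:
    node 0 is privileged when equal to its left neighbour (node n-1);
    any other node is privileged when it differs from its left one."""
    n = len(states)
    return [i for i in range(n)
            if (i == 0 and states[0] == states[n - 1])
            or (i > 0 and states[i] != states[i - 1])]

def dijkstra_k_state_step(states, k):
    """Fire the lowest-indexed privileged node, in place:
    node 0 increments modulo k; any other node copies its left
    neighbour.  At least one node is always privileged."""
    n = len(states)
    for i in range(n):
        if i == 0 and states[0] == states[n - 1]:
            states[0] = (states[0] + 1) % k
            return i
        if i > 0 and states[i] != states[i - 1]:
            states[i] = states[i - 1]
            return i
    return None  # unreachable: some node is always privileged
```

Starting a 3-node ring from the arbitrary state [2, 0, 1] with k = 4 and stepping repeatedly, the ring reaches a configuration with a single privilege, which then circulates forever.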
Making local algorithms efficiently self-stabilizing in arbitrary asynchronous environments
This paper deals with the trade-off between time, workload, and versatility in self-stabilization, a general and lightweight fault-tolerance concept in distributed computing. In this context, we propose a transformer that provides an asynchronous silent self-stabilizing version Trans(AlgI) of any terminating synchronous algorithm AlgI. The transformed algorithm Trans(AlgI) works under the distributed unfair daemon and is efficient both in moves and rounds. Our transformer makes it easy to obtain fully-polynomial silent self-stabilizing solutions that are also asymptotically optimal in rounds. We illustrate the efficiency and versatility of our transformer with several efficient (i.e., fully-polynomial) silent self-stabilizing instances solving major distributed computing problems, namely vertex coloring, Breadth-First Search (BFS) spanning tree construction, k-clustering, and leader election.
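For the BFS instance, the silent fixed point is easy to state: the root holds 0 and every other node holds 1 + the minimum of its neighbours' values. The following centralized round-robin sketch (our own illustration of a move-based silent algorithm, not the transformer's construction) applies the guarded rule from an arbitrary, possibly corrupted state until no node is enabled:

```python
def bfs_stabilize(adj, root, dist):
    """Run the BFS distance rule until the system is silent.
    `dist` is an arbitrary (possibly corrupted) initial assignment;
    a node is enabled when its value violates the rule, and a move
    rewrites it.  At the silent fixed point, dist = BFS distances."""
    dist = dict(dist)
    moves = 0
    changed = True
    while changed:
        changed = False
        for v in sorted(adj):
            target = 0 if v == root else 1 + min(dist[u] for u in adj[v])
            if dist[v] != target:       # guard: rule violated at v
                dist[v] = target        # move: apply the rule
                moves += 1
                changed = True
    return dist, moves
```

On the path 0-1-2 rooted at 0 with corrupted start {0: 5, 1: 0, 2: 7}, three moves suffice to reach the silent state {0: 0, 1: 1, 2: 2}.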