
    Lattice Linear Problems vs Algorithms

    Modelling problems using predicates that induce a partial order among global states was introduced as a way to permit asynchronous execution in multiprocessor systems. A key property of such problems is that the predicate induces a single lattice on the state space, which guarantees that the execution is correct even if nodes execute with old information about their neighbours. Thus, a compiler that is aware of this property can ignore data dependencies and allow the application to continue executing with the available data rather than waiting for the most recent values. Unfortunately, many interesting problems do not exhibit lattice linearity. This issue was alleviated with the introduction of eventually lattice linear algorithms. Such algorithms induce a partial order on a subset of the state space even though the problem cannot be defined by a predicate under which the states form a partial order. This paper focuses on analyzing and differentiating between lattice linear problems and algorithms. It also introduces a new class of algorithms called (fully) lattice linear algorithms. A characteristic of these algorithms is that the entire reachable state space is partitioned into one or more lattices, and the initial state locks into one of these lattices. Thus, under a few additional constraints, the initial state can uniquely determine the final state. For demonstration, we present lattice linear self-stabilizing algorithms for the minimal dominating set and graph colouring problems, and a parallel-processing 2-approximation algorithm for vertex cover. The algorithm for minimal dominating set converges in n moves, and that for graph colouring in n + 2m moves. The algorithm for vertex cover is the first lattice linear approximation algorithm for an NP-hard problem; it converges in n moves. (Abstract truncated; see the PDF for the full version.)
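    To make the "forbidden state" style of such algorithms concrete, here is a minimal sequential Python sketch of minimality pruning for a dominating set. It is an illustration, not the paper's algorithm, and the names (`removable`, `dominated`) are invented. In the lattice-linear setting, the point is that each node could evaluate its `removable` predicate on stale copies of its neighbours' states and convergence would still be guaranteed; for simplicity, this sketch evaluates it against the current state.

        def dominated(v, in_set, graph):
            # v is dominated if v itself or some neighbour is in the set
            return in_set[v] or any(in_set[u] for u in graph[v])

        def removable(v, in_set, graph):
            # v is in a "forbidden" state: it may leave the set because
            # every node in its closed neighbourhood stays dominated
            if not in_set[v]:
                return False
            without_v = dict(in_set)
            without_v[v] = False
            return all(dominated(u, without_v, graph) for u in graph[v] | {v})

        def minimal_dominating_set(graph):
            # start with every node in the set, then prune greedily;
            # when no node is removable, the set is minimal
            in_set = {v: True for v in graph}
            changed = True
            while changed:
                changed = False
                for v in sorted(graph):
                    if removable(v, in_set, graph):
                        in_set[v] = False
                        changed = True
            return {v for v in graph if in_set[v]}

        # a small graph where node 3 dominates everyone: pruning yields {3}
        g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
        print(minimal_dominating_set(g))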

    Design of Self-Stabilizing Approximation Algorithms via a Primal-Dual Approach

    Self-stabilization is an important concept in the realm of fault-tolerant distributed computing. In this paper, we propose a new approach that relies on the properties of linear programming duality to obtain self-stabilizing approximation algorithms for distributed graph optimization problems. The power of this new approach is demonstrated by the following results:
    - A self-stabilizing 2(1+ε)-approximation algorithm for minimum weight vertex cover that converges in O(log Δ / (ε log log Δ)) synchronous rounds.
    - A self-stabilizing Δ-approximation algorithm for maximum weight independent set that converges in O(Δ + log^* n) synchronous rounds.
    - A self-stabilizing ((2α+1)(1+ε))-approximation algorithm for minimum weight dominating set in α-arboricity graphs that converges in O((log Δ)/ε) synchronous rounds.
    In all of the above, Δ denotes the maximum degree. Our technique improves upon previous results in terms of time complexity while incurring only an additive O(log n) overhead to the message size. In addition, to the best of our knowledge, we provide the first self-stabilizing algorithms for the weighted versions of minimum vertex cover and maximum independent set.
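    For context, the classical sequential primal-dual 2-approximation for minimum weight vertex cover (due to Bar-Yehuda and Even) can be sketched in a few lines of Python. The paper's contribution is obtaining distributed self-stabilizing variants of this kind of duality argument, which the sketch below does not attempt.

        def primal_dual_vertex_cover(edges, weight):
            # residual[v] = slack left in v's dual constraint
            residual = dict(weight)
            cover = set()
            for u, v in edges:
                if u in cover or v in cover:
                    continue                  # edge already covered
                # raise this edge's dual variable until one endpoint is tight
                delta = min(residual[u], residual[v])
                residual[u] -= delta
                residual[v] -= delta
                if residual[u] == 0:
                    cover.add(u)
                if residual[v] == 0:
                    cover.add(v)
            return cover                      # total weight <= 2 * OPT

        print(primal_dual_vertex_cover([(1, 2), (2, 3), (3, 4)],
                                       {1: 1, 2: 3, 3: 2, 4: 1}))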

    Self-Stabilizing Distributed Cooperative Reset

    Self-stabilization is a versatile fault-tolerance approach that characterizes the ability of a system to eventually resume correct behavior after any finite number of transient faults. In this paper, we propose a self-stabilizing reset algorithm working in anonymous networks. This algorithm resets the network in a distributed, non-centralized manner, i.e., it is multi-initiator, as each process detecting an inconsistency may initiate a reset. It is also cooperative in the sense that it coordinates concurrent reset executions in order to gain efficiency. Our approach is general, since our reset algorithm can be used to build self-stabilizing solutions for various problems and settings. As a matter of fact, we show that it applies to both static and dynamic specifications: we propose efficient self-stabilizing reset-based algorithms for the (1-minimal) (f, g)-alliance problem (a generalization of the dominating set problem) in identified networks and for the unison problem in anonymous networks. Both instantiations advance the state of the art: in the former case, our solution is more general than previous ones, while in the latter case, the complexity of our unison algorithm is better than that of previous solutions in the literature.
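    As a toy illustration of the multi-initiator idea (a synchronous sketch with invented names, not the paper's algorithm, and it omits the feedback phase needed to detect when a reset completes): every process whose local check fails reinitializes and raises a flag, the flag floods the network, and waves started by several initiators simply merge.

        def reset_wave(graph, state, legal, init=0):
            # round 0: every locally inconsistent process initiates a reset
            resetting = {v: not legal(v, state, graph) for v in graph}
            while True:
                # a process joins the reset if any neighbour is resetting
                new = {v: resetting[v] or any(resetting[u] for u in graph[v])
                       for v in graph}
                if new == resetting:
                    break
                resetting = new
            for v in graph:
                if resetting[v]:
                    state[v] = init   # reinitialize to a predefined state
            return resetting

        # unison-style local check: clocks of neighbours differ by at most 1
        g = {0: {1}, 1: {0, 2}, 2: {1}}
        s = {0: 3, 1: 3, 2: 7}
        legal = lambda v, st, gr: all(abs(st[v] - st[u]) <= 1 for u in gr[v])
        reset_wave(g, s, legal)       # waves start at 1 and 2, merge, flood all
        print(s)                      # {0: 0, 1: 0, 2: 0}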

    Polynomial Silent Self-Stabilizing p-Star Decomposition

    We present a silent self-stabilizing distributed algorithm computing a maximal p-star decomposition of the underlying communication network. Under the unfair distributed scheduler, the most general scheduler model, the algorithm converges in at most 12∆m + O(m + n) moves, where m is the number of edges, n is the number of nodes, and ∆ is the maximum node degree. Regarding the move complexity, our algorithm outperforms the previously known best algorithm by a factor of ∆. While the round complexity of the previous algorithm was unknown, we show a bound of 5⌈n/(p+1)⌉ + 5 rounds for our algorithm.
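    For intuition, a p-star is a K_{1,p}: one centre joined to p leaves. Below is a sequential greedy sketch of a maximal p-star decomposition (an illustration under the assumption that stars are vertex-disjoint; the paper's algorithm computes this in a distributed, self-stabilizing way). Since used nodes never become free again, one pass leaves no unused node with p unused neighbours, so no further p-star can be added.

        def maximal_p_star_decomposition(graph, p):
            # greedily take any unused node with p unused
            # neighbours as a centre; one pass gives maximality
            used, stars = set(), []
            for c in sorted(graph):
                if c in used:
                    continue
                free = [u for u in sorted(graph[c]) if u not in used]
                if len(free) >= p:
                    leaves = free[:p]
                    used.add(c)
                    used.update(leaves)
                    stars.append((c, leaves))
            return stars

        g = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1, 5}, 5: {4}}
        print(maximal_p_star_decomposition(g, 2))   # [(1, [2, 3])]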

    Fast and compact self-stabilizing verification, computation, and fault detection of an MST

    This paper demonstrates the usefulness of distributed local verification of proofs as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs and utilizes it to improve the time complexity significantly while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of minimum spanning tree (MST). That is, we present algorithms that are both time and space efficient, both for constructing an MST and for verifying it. This involves several parts that may be considered contributions in themselves.
    First, we generalize the notion of local proofs, trading off time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof labeling scheme which is memory optimal (i.e., O(log n) bits per node) and whose time complexity is O(log^2 n) in synchronous networks, or O(Δ log^3 n) in asynchronous ones, where Δ is the maximum node degree. This answers an open problem posed by Awerbuch and Varghese (FOCS 1991). We also show that Ω(log n) time is necessary, even in synchronous networks. Another property is that if f faults occurred, then, within the required detection time above, they are detected by some node in the O(f log n) locality of each of the faults.
    Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof labeling scheme, and produces an efficient self-stabilizing algorithm. When used for MST, the transformer produces a memory optimal self-stabilizing algorithm whose time complexity, namely O(n), is significantly better even than that of previous algorithms. (The time complexity of previous MST algorithms that used Ω(log^2 n) memory bits per node was O(n^2), and the time for optimal space algorithms was O(n|E|).) Inherited from our proof labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction has ended, they are detected by some nodes within O(log^2 n) time in synchronous networks, or within O(Δ log^3 n) time in asynchronous ones, and (2) if f faults occurred, then, within the required detection time above, they are detected within the O(f log n) locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in the memory.
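    To illustrate what a proof labeling scheme is, here is the textbook spanning-tree example (far simpler than the paper's MST scheme): the prover labels each node with a (root, distance) pair, and each node performs a purely local check against its parent's label; any global inconsistency, such as a cycle in the parent pointers, forces some node's check to fail.

        def verify_spanning_tree(nodes, parent, label):
            # label[v] = (root, dist); returns the nodes that raise an alarm
            bad = set()
            for v in nodes:
                root, dist = label[v]
                if parent[v] is None:                 # v claims to be the root
                    if root != v or dist != 0:
                        bad.add(v)
                else:                                 # dist must drop by 1
                    proot, pdist = label[parent[v]]   # toward an agreed root
                    if proot != root or pdist != dist - 1:
                        bad.add(v)
            return bad

        # a legal tree verifies; a cycle in the parent pointers cannot,
        # since dist would have to decrease forever along the cycle
        label = {1: (1, 0), 2: (1, 1), 3: (1, 2)}
        print(verify_spanning_tree([1, 2, 3], {1: None, 2: 1, 3: 2}, label))  # set()
        print(verify_spanning_tree([1, 2, 3], {1: 3, 2: 1, 3: 2}, label))     # non-empty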