    Fast and compact self-stabilizing verification, computation, and fault detection of an MST

    This paper demonstrates the usefulness of distributed local verification of proofs as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs and utilizes it to improve the time complexity significantly while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of the Minimum Spanning Tree (MST). That is, we present algorithms that are both time and space efficient for constructing an MST and for verifying it. This involves several parts that may be considered contributions in themselves. First, we generalize the notion of local proofs, trading off time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof labeling scheme that is memory optimal (i.e., $O(\log n)$ bits per node) and whose time complexity is $O(\log^2 n)$ in synchronous networks, or $O(\Delta \log^3 n)$ in asynchronous ones, where $\Delta$ is the maximum degree of nodes. This answers an open problem posed by Awerbuch and Varghese (FOCS 1991). We also show that $\Omega(\log n)$ time is necessary, even in synchronous networks. Another property is that if $f$ faults occurred, then, within the required detection time above, they are detected by some node in the $O(f \log n)$ locality of each of the faults. Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof labeling scheme, and produces an efficient self-stabilizing algorithm. When used for MST, the transformer produces a memory-optimal self-stabilizing algorithm whose time complexity, namely $O(n)$, is significantly better even than that of previous algorithms. (The time complexity of previous MST algorithms that used $\Omega(\log^2 n)$ memory bits per node was $O(n^2)$, and the time for optimal-space algorithms was $O(n|E|)$.) Inherited from our proof labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction ended, then they are detected by some nodes within $O(\log^2 n)$ time in synchronous networks, or within $O(\Delta \log^3 n)$ time in asynchronous ones, and (2) if $f$ faults occurred, then, within the required detection time above, they are detected within the $O(f \log n)$ locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in the memory.
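
    A minimal sketch may help make the proof-labeling idea concrete. The fragment below is a hypothetical, centralized illustration of verifying a plain spanning tree with (root id, distance-to-root) labels; it is not the paper's memory-optimal MST scheme, only the basic mechanism in which each node checks its own label against its parent's and flags any inconsistency locally. All names here are ours.

```python
# Hypothetical illustration of the proof-labeling idea: each node v carries a
# label (root_id, dist) and checks it only against its tree parent's label.
# This is NOT the paper's memory-optimal MST scheme, just the basic concept.

def verify_spanning_tree_labels(nodes, parent, label):
    """nodes: iterable of node ids; parent[v]: v's tree parent (None for the root);
    label[v]: (root_id, dist) claimed by v. Returns the nodes that detect a fault
    using only their own and their parent's labels."""
    faults = set()
    for v in nodes:
        root_v, dist_v = label[v]
        if parent[v] is None:
            if root_v != v or dist_v != 0:      # the root must name itself, distance 0
                faults.add(v)
        else:
            root_p, dist_p = label[parent[v]]
            if root_v != root_p or dist_v != dist_p + 1:  # must agree with parent
                faults.add(v)
    return faults

# Example: a correct labeling of the path 0-1-2 rooted at 0 raises no fault.
nodes = [0, 1, 2]
parent = {0: None, 1: 0, 2: 1}
label = {0: (0, 0), 1: (0, 1), 2: (0, 2)}
assert verify_spanning_tree_labels(nodes, parent, label) == set()
```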

    Finding 3-edge-connected components in parallel

    A parallel algorithm for finding the 3-edge-connected components of an undirected graph on a CRCW PRAM is presented. The time and work complexities of this algorithm are $O(\log n)$ and $O((m+n)\log\log n)$, respectively, where $n$ is the number of vertices and $m$ is the number of edges in the input graph. The algorithm is based on ear decomposition and a reduction of 3-edge-connectivity to 1-vertex-connectivity. It is the first algorithm for finding 3-edge-connected components on a parallel model of computation.
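
    For contrast with the parallel algorithm, a naive sequential baseline follows directly from the definition: a graph is 3-edge-connected exactly when it is connected and stays connected after deleting any one or two edges. The sketch below is that brute-force check (roughly $O(m^2(m+n))$ time; the names and approach are ours, not the paper's, whose contribution is the $O(\log n)$-time, $O((m+n)\log\log n)$-work PRAM algorithm via ear decomposition).

```python
from collections import deque
from itertools import combinations

def is_connected(n, edges, skip=frozenset()):
    """BFS over vertices 0..n-1, ignoring the edge indices listed in `skip`."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        if i not in skip:
            adj[u].append(v)
            adj[v].append(u)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

def is_3_edge_connected(n, edges):
    """Brute-force check straight from the definition: the graph must remain
    connected after deleting any set of at most two edges. A baseline only,
    not the paper's parallel algorithm."""
    if n < 2 or not is_connected(n, edges):
        return False
    for k in (1, 2):
        for cut in combinations(range(len(edges)), k):
            if not is_connected(n, edges, frozenset(cut)):
                return False
    return True

# Example: K4 (the complete graph on 4 vertices) is 3-edge-connected.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
assert is_3_edge_connected(4, k4)
```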

    2-Edge-Connectivity and 2-Vertex-Connectivity with Fault Containment

    Self-stabilization for non-masking fault-tolerant distributed systems has received considerable research interest over the last decade. In this paper, we propose a self-stabilizing algorithm for 2-edge-connectivity and 2-vertex-connectivity of an asynchronous distributed computer network. It is based on a self-stabilizing depth-first search, and it is not a composite algorithm in the sense that it is not composed of several self-stabilizing algorithms running concurrently. The time and space complexities of the algorithm are the same as those of the underlying self-stabilizing depth-first search algorithm.
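
    The centralized analogue of the properties being tested is the classic single-DFS low-link computation: a connected graph is 2-edge-connected iff it has no bridge, and 2-vertex-connected iff it has no articulation point (and has at least three vertices). The sketch below is that sequential version, offered only as a hypothetical illustration of the underlying graph test, not the paper's self-stabilizing distributed algorithm.

```python
def bridges_and_cut_vertices(adj):
    """adj: dict mapping each vertex to a list of neighbours (simple undirected graph).
    One DFS with discovery times and low-link values; returns (bridges, cut_vertices)."""
    disc, low = {}, {}
    bridges, cuts = [], set()
    timer = 0

    def dfs(u, parent):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])        # back edge
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.append((u, v))           # no back edge spans (u, v)
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)                      # u separates v's subtree
        if parent is None and children >= 2:
            cuts.add(u)                              # DFS root with two or more subtrees

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return bridges, cuts

# Example: a triangle with a pendant vertex; edge (2, 3) is a bridge and 2 is a cut vertex.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(bridges_and_cut_vertices(adj))   # ([(2, 3)], {2})
```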

    Near Linear Performance in an Articulated Body Solver with Implicit Elasticity

    The dynamics of articulated rigid bodies can be solved in $O(n)$ time using a recursive method. When elasticity is added between the bodies, with linearly implicit integration, the stiffness matrix in the equations of motion breaks the tree topology of the system, making the recursive method inapplicable. The only alternative has been to form and solve the full system matrix, which takes $O(n^3)$ time. A new approach, coined REDMAX, solves the linearly implicit equations of motion in near-linear time using a combined reduced/maximal coordinate formulation. This hybrid model gives the flexibility to apply arbitrary combinations of forces directly in both reduced and maximal coordinates, while maintaining near-linear performance in the number of bodies.
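
    The $O(n^3)$ baseline that this work avoids is the standard linearly implicit (semi-implicit) Euler step, which forms and factors a dense system matrix at every step. Below is a minimal dense-matrix sketch of that step for a generic system with mass matrix M, stiffness K = df/dx, and damping D = df/dv; the notation and function name are ours and are not taken from the paper.

```python
import numpy as np

def linearly_implicit_step(M, K, D, f, x, v, h):
    """One linearly implicit Euler step for M dv/dt = f(x, v), linearizing f about
    the current state: solve (M - h D - h^2 K) dv = h (f + h K v), then update x, v.
    Forming and solving this dense system is the O(n^3) cost the abstract refers to."""
    A = M - h * D - (h * h) * K
    b = h * (f + h * (K @ v))
    dv = np.linalg.solve(A, b)
    v_next = v + dv
    x_next = x + h * v_next
    return x_next, v_next

# Example: a single damped spring (m = 1, k = 100, c = 0.5) stepped at h = 0.01.
M = np.eye(1); K = np.array([[-100.0]]); D = np.array([[-0.5]])
x, v = np.array([1.0]), np.array([0.0])
x, v = linearly_implicit_step(M, K, D, K @ x + D @ v, x, v, 0.01)
print(x, v)
```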
