
    Parallel Batch-Dynamic Graph Connectivity

    In this paper, we study batch parallel algorithms for the dynamic connectivity problem, a fundamental problem that has received considerable attention in the sequential setting. The most well known sequential algorithm for dynamic connectivity is the elegant level-set algorithm of Holm, de Lichtenberg and Thorup (HDT), which achieves $O(\log^2 n)$ amortized time per edge insertion or deletion, and $O(\log n / \log\log n)$ time per query. We design a parallel batch-dynamic connectivity algorithm that is work-efficient with respect to the HDT algorithm for small batch sizes, and is asymptotically faster when the average batch size is sufficiently large. Given a sequence of batched updates, where $\Delta$ is the average batch size of all deletions, our algorithm achieves $O(\log n \log(1 + n / \Delta))$ expected amortized work per edge insertion and deletion and $O(\log^3 n)$ depth w.h.p. Our algorithm answers a batch of $k$ connectivity queries in $O(k \log(1 + n/k))$ expected work and $O(\log n)$ depth w.h.p. To the best of our knowledge, our algorithm is the first parallel batch-dynamic algorithm for connectivity.
    Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
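
    The paper's parallel algorithm is not reproduced here, but the batch-dynamic interface it targets can be made concrete. The sketch below is a hypothetical, sequential stand-in (a union-find that rebuilds on deletions), so it comes nowhere near the stated work and depth bounds; it only illustrates what batched edge insertions, batched edge deletions, and a batch of connectivity queries look like.

```python
# Hypothetical sketch of a batch-dynamic connectivity interface.
# Sequential union-find with rebuild-on-delete; NOT the parallel,
# work-efficient HDT-style algorithm described in the paper.
class BatchDynamicConnectivity:
    def __init__(self, n):
        self.n = n
        self.edges = set()
        self._parent = list(range(n))

    def _find(self, x):
        while self._parent[x] != x:
            self._parent[x] = self._parent[self._parent[x]]  # path halving
            x = self._parent[x]
        return x

    def _union(self, u, v):
        ru, rv = self._find(u), self._find(v)
        if ru != rv:
            self._parent[ru] = rv

    def insert_batch(self, batch):
        """Insert a batch of edges; insertions are cheap for union-find."""
        for u, v in batch:
            self.edges.add((min(u, v), max(u, v)))
            self._union(u, v)

    def delete_batch(self, batch):
        """Delete a batch of edges by rebuilding from scratch -- exactly the
        cost that level-structure (HDT-style) machinery is designed to avoid."""
        for u, v in batch:
            self.edges.discard((min(u, v), max(u, v)))
        self._parent = list(range(self.n))
        for u, v in self.edges:
            self._union(u, v)

    def connected_batch(self, queries):
        """Answer a batch of connectivity queries (u, v)."""
        return [self._find(u) == self._find(v) for u, v in queries]
```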

    Implicit Decomposition for Write-Efficient Connectivity Algorithms

    The future of main memory appears to lie in the direction of new technologies that provide strong capacity-to-performance ratios, but have write operations that are much more expensive than reads in terms of latency, bandwidth, and energy. Motivated by this trend, we propose sequential and parallel algorithms to solve graph connectivity problems using significantly fewer writes than conventional algorithms. Our primary algorithmic tool is the construction of an $o(n)$-sized "implicit decomposition" of a bounded-degree graph $G$ on $n$ nodes, which, combined with read-only access to $G$, enables fast answers to connectivity and biconnectivity queries on $G$. The construction breaks the linear-write "barrier", resulting in costs that are asymptotically lower than those of conventional algorithms while adding only a modest cost to query time. For general non-sparse graphs on $m$ edges, we also provide the first parallel algorithms for connectivity and biconnectivity that use $o(m)$ writes and $O(m)$ operations. These algorithms provide insight into how applications can efficiently process computations on large graphs in systems with read-write asymmetry.
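
    As a very loose illustration of the query pattern behind such a decomposition (a small writable structure combined with read-only traversal of G), the toy sketch below labels only a sparse set of hypothetical "center" vertices with component ids and answers queries by walking, read-only, from each endpoint to the nearest labeled vertex. It is not the paper's construction, its names (label_centers, connected, step) are invented for illustration, and its one-off preprocessing pass does not respect the paper's write bounds.

```python
# Toy illustration only: a small labeled structure plus read-only graph access
# at query time. This is NOT the paper's implicit decomposition, and the
# preprocessing below still performs Theta(n) writes to scratch space.
from collections import deque

def label_centers(adj, step=4):
    """Label roughly 1/step of the vertices of each component with its id."""
    comp, seen, cid = {}, [False] * len(adj), 0
    for s in range(len(adj)):
        if seen[s]:
            continue
        members, queue, seen[s] = [], deque([s]), True
        while queue:
            u = queue.popleft()
            members.append(u)
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
        for i, u in enumerate(sorted(members)):
            if i % step == 0:          # keeps at least one center per component
                comp[u] = cid
        cid += 1
    return comp

def connected(adj, comp, u, v):
    """Answer a connectivity query using only reads of adj plus the labels."""
    def nearest_label(s):
        seen, queue = {s}, deque([s])
        while queue:
            x = queue.popleft()
            if x in comp:
                return comp[x]
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
    return nearest_label(u) == nearest_label(v)
```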

    Homological Region Adjacency Tree for a 3D Binary Digital Image via HSF Model

    Given a 3D binary digital image I, we define and compute an edge-weighted tree, called the Homological Region Tree (or Hom-Tree, for short). It coincides, as an unweighted graph, with the classical Region Adjacency Tree of black 6-connected components (CCs) and white 26-connected components of I. In addition, we define the weight of an edge (R, S) as the number of tunnels that the CCs R and S “share”. The Hom-Tree structure is still an isotopic invariant of I. Thus, it provides information about how the different homology groups interact with each other, while preserving the duality of black and white CCs. Experiments with a set of synthetic images, showing different shapes and different degrees of connected-component nesting, are performed to numerically validate the method.
    Ministerio de Economía y Competitividad MTM2016-81030-
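
    The unweighted skeleton of the Hom-Tree, i.e. the region adjacency structure over black 6-connected and white 26-connected components, can be sketched with standard labeling tools. The snippet below (an illustrative sketch with a hypothetical function name) builds that adjacency graph for a 3D boolean volume with scipy, but it does not use the HSF model and does not compute the tunnel-count edge weights, which require homological information.

```python
# Sketch: adjacency between black 6-CCs and white 26-CCs of a 3D binary image.
# Illustrative only; no HSF model and no homological (tunnel) edge weights.
import numpy as np
from scipy import ndimage

def region_adjacency(volume):
    """volume: 3D array, nonzero/True = black (foreground)."""
    volume = np.asarray(volume, dtype=bool)
    black_lbl, nb = ndimage.label(volume, ndimage.generate_binary_structure(3, 1))
    white_lbl, nw = ndimage.label(~volume, ndimage.generate_binary_structure(3, 3))

    edges = set()
    for axis in range(3):  # face-adjacent voxel pairs along each axis
        lo, hi = [slice(None)] * 3, [slice(None)] * 3
        lo[axis], hi[axis] = slice(None, -1), slice(1, None)
        lo, hi = tuple(lo), tuple(hi)
        mask = volume[lo] & ~volume[hi]           # black voxel next to white voxel
        edges.update(zip(black_lbl[lo][mask].tolist(), white_lbl[hi][mask].tolist()))
        mask = ~volume[lo] & volume[hi]           # white voxel next to black voxel
        edges.update(zip(black_lbl[hi][mask].tolist(), white_lbl[lo][mask].tolist()))
    return nb, nw, edges                          # edges: (black id, white id) pairs

# Example: one solid ball; expected output is one black CC adjacent
# to the single white background CC.
zz, yy, xx = np.ogrid[:32, :32, :32]
ball = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 64
print(region_adjacency(ball))
```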

    Dynamic Algorithms for the Massively Parallel Computation Model

    The Massively Parallel Computation (MPC) model gained popularity during the last decade and it is now seen as the standard model for processing large-scale data. One significant shortcoming of the model is that it assumes the input dataset is static, while, in practice, real-world datasets evolve continuously. To overcome this issue, in this paper we initiate the study of dynamic algorithms in the MPC model. We first discuss the main requirements for a dynamic parallel model and we show how to adapt the classic MPC model to capture them. Then we analyze the connection between classic dynamic algorithms and dynamic algorithms in the MPC model. Finally, we provide new efficient dynamic MPC algorithms for a variety of fundamental graph problems, including connectivity, minimum spanning tree and matching.
    Comment: Accepted to the 31st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2019)

    On dynamic breadth-first search in external-memory

    We provide the first non-trivial result on dynamic breadth-first search (BFS) in external memory: for general sparse undirected graphs of initially $n$ nodes and $O(n)$ edges and monotone update sequences of either $\Theta(n)$ edge insertions or $\Theta(n)$ edge deletions, we prove an amortized high-probability bound of $O(n/B^{2/3}+\mathrm{sort}(n)\cdot \log B)$ I/Os per update. In contrast, the currently best approach for static BFS on sparse undirected graphs requires $\Omega(n/B^{1/2}+\mathrm{sort}(n))$ I/Os.
    1998 ACM Subject Classification: F.2.2. Key words and phrases: External Memory, Dynamic Graph Algorithms, BFS, Randomization

    Generating Second Order (Co)homological Information within AT-Model Context

    In this paper we design a new family of relations between (co)homology classes, working with coefficients in a field and starting from an AT-model (Algebraic Topological Model) AT(C) of a finite cell complex C. These relations are induced by elementary relations of the type “to be in the (co)boundary of” between cells. This high-order connectivity information is embedded into a graph-based representation model, called the Second Order AT-Region-Incidence Graph (or AT-RIG) of C. This graph, having as nodes the different homology classes of C, is in turn computed from two generalized abstract cell complexes, called primal and dual AT-segmentations of C. The respective cells of these two complexes are connected regions (sets of cells) of the original cell complex C, which are specified by the integral operator of AT(C). In this work in progress, we successfully use this model (a) in experiments for discriminating topologically different 3D digital objects having the same Euler characteristic, and (b) in designing a parallel algorithm for computing potentially significant (co)homological information of 3D digital objects.
    Ministerio de Economía y Competitividad MTM2016-81030-P
    Ministerio de Economía y Competitividad TEC2012-37868-C04-0

    Connectivity Oracles for Graphs Subject to Vertex Failures

    We introduce new data structures for answering connectivity queries in graphs subject to batched vertex failures. A deterministic structure processes a batch of $d\leq d_{\star}$ failed vertices in $\tilde{O}(d^3)$ time and thereafter answers connectivity queries in $O(d)$ time. It occupies space $O(d_{\star} m\log n)$. We develop a randomized Monte Carlo version of our data structure with update time $\tilde{O}(d^2)$, query time $O(d)$, and space $\tilde{O}(m)$ for any failure bound $d\le n$. This is the first connectivity oracle for general graphs that can efficiently deal with an unbounded number of vertex failures. We also develop a more efficient Monte Carlo edge-failure connectivity oracle. Using space $O(n\log^2 n)$, $d$ edge failures are processed in $O(d\log d\log\log n)$ time and thereafter connectivity queries are answered in $O(\log\log n)$ time, with answers correct w.h.p. Our data structures are based on a new decomposition theorem for an undirected graph $G=(V,E)$, which is of independent interest. It states that for any terminal set $U\subseteq V$ we can remove a set $B$ of $|U|/(s-2)$ vertices such that the remaining graph contains a Steiner forest for $U-B$ with maximum degree $s$.
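
    The data structures themselves are involved, but the interface they support is simple. The sketch below is a naive stand-in (class name and details are hypothetical): it exposes the same two-phase pattern (process a batch of failed vertices, then answer many connectivity queries on the surviving graph) but simply recomputes connected components per batch, so it does not achieve the update and query bounds stated above.

```python
# Naive stand-in for a vertex-failure connectivity oracle: same two-phase
# interface (process a failure batch, then answer queries), but it recomputes
# connected components of G - D, unlike the paper's data structures.
from collections import deque

class VertexFailureOracle:
    def __init__(self, n, edges):
        self.adj = [[] for _ in range(n)]
        for u, v in edges:
            self.adj[u].append(v)
            self.adj[v].append(u)
        self.comp = [None] * n      # component id after the last failure batch
        self.process_failures(set())

    def process_failures(self, failed):
        """'Update' phase: recompute components of the graph minus `failed`."""
        n = len(self.adj)
        self.comp = [None] * n
        cid = 0
        for s in range(n):
            if s in failed or self.comp[s] is not None:
                continue
            queue = deque([s])
            self.comp[s] = cid
            while queue:
                u = queue.popleft()
                for v in self.adj[u]:
                    if v not in failed and self.comp[v] is None:
                        self.comp[v] = cid
                        queue.append(v)
            cid += 1

    def connected(self, u, v):
        """Query phase: are u and v connected while avoiding failed vertices?"""
        return self.comp[u] is not None and self.comp[u] == self.comp[v]
```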

    Geodesics in Heat

    We introduce the heat method for computing the shortest geodesic distance to a specified subset (e.g., point or curve) of a given domain. The heat method is robust, efficient, and simple to implement since it is based on solving a pair of standard linear elliptic problems. The method represents a significant breakthrough in the practical computation of distance on a wide variety of geometric domains, since the resulting linear systems can be prefactored once and subsequently solved in near-linear time. In practice, distance can be updated via the heat method an order of magnitude faster than with state-of-the-art methods while maintaining a comparable level of accuracy. We provide numerical evidence that the method converges to the exact geodesic distance in the limit of refinement; we also explore smoothed approximations of distance suitable for applications where more regularity is required.
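
    The abstract outlines the pipeline: a short heat-flow solve, a gradient normalization, and a Poisson solve, with both linear systems prefactorable. The sketch below is a minimal illustration of that pipeline on a flat 2D grid with a standard 5-point Laplacian and numpy finite differences, so it approximates Euclidean rather than truly geodesic distance on a curved domain; the time step, boundary handling, and regularization are assumptions made for the sketch, not values from the paper.

```python
# Sketch of the heat-method pipeline on a regular 2D grid (unit spacing).
# Assumes a 5-point Laplacian with Neumann boundaries; the paper works on
# general geometric domains (e.g. triangle meshes) with the matching operators.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def lap1d(n):
    """1D second-difference matrix with Neumann boundaries (approximates -d2/dx2)."""
    main = np.full(n, 2.0)
    main[0] = main[-1] = 1.0
    off = -np.ones(n - 1)
    return sparse.diags([off, main, off], [-1, 0, 1], format="csc")

def heat_method(ny, nx, source_idx, t=1.0):
    """Approximate distance on an (ny x nx) grid from the nodes in source_idx."""
    N = ny * nx
    L = (sparse.kron(sparse.identity(ny), lap1d(nx)) +
         sparse.kron(lap1d(ny), sparse.identity(nx))).tocsc()   # approx -Laplacian

    # Both systems can be prefactored once and reused for new source sets.
    heat_solver = splu((sparse.identity(N, format="csc") + t * L).tocsc())
    poisson_solver = splu((L + 1e-8 * sparse.identity(N, format="csc")).tocsc())

    # Step 1: integrate heat flow for a short time t from the source set.
    u0 = np.zeros(N)
    u0[source_idx] = 1.0
    u = heat_solver.solve(u0).reshape(ny, nx)

    # Step 2: X = -grad(u) / |grad(u)|, a unit field pointing away from the source.
    gy, gx = np.gradient(u)
    norm = np.sqrt(gx**2 + gy**2) + 1e-12
    Xx, Xy = -gx / norm, -gy / norm

    # Step 3: recover distance by solving the Poisson equation  Laplacian(phi) = div X.
    div = np.gradient(Xy, axis=0) + np.gradient(Xx, axis=1)
    phi = poisson_solver.solve(-div.ravel())
    phi -= phi[source_idx].min()        # distance is zero at the source
    return phi.reshape(ny, nx)

# Example: approximate distance from the center of a 64x64 grid.
d = heat_method(64, 64, source_idx=[32 * 64 + 32], t=1.0)
```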