
    Distributed Connectivity Decomposition

    We present time-efficient distributed algorithms for decomposing graphs with large edge or vertex connectivity into multiple spanning or dominating trees, respectively. As their primary applications, these decompositions allow us to achieve information flow with size close to the connectivity by parallelizing it along the trees. More specifically, our distributed decomposition algorithms are as follows: (I) A decomposition of each undirected graph with vertex-connectivity $k$ into (fractionally) vertex-disjoint weighted dominating trees with total weight $\Omega(\frac{k}{\log n})$, in $\widetilde{O}(D+\sqrt{n})$ rounds. (II) A decomposition of each undirected graph with edge-connectivity $\lambda$ into (fractionally) edge-disjoint weighted spanning trees with total weight $\lceil\frac{\lambda-1}{2}\rceil(1-\varepsilon)$, in $\widetilde{O}(D+\sqrt{n\lambda})$ rounds. We also show round complexity lower bounds of $\widetilde{\Omega}(D+\sqrt{\frac{n}{k}})$ and $\widetilde{\Omega}(D+\sqrt{\frac{n}{\lambda}})$ for the above two decompositions, using techniques of [Das Sarma et al., STOC'11]. Moreover, our vertex-connectivity decomposition extends to centralized algorithms and improves the time complexity of [Censor-Hillel et al., SODA'14] from $O(n^3)$ to near-optimal $\widetilde{O}(m)$. As corollaries, we also get distributed oblivious routing broadcast with $O(1)$-competitive edge-congestion and $O(\log n)$-competitive vertex-congestion. Furthermore, the vertex-connectivity decomposition leads to a near-time-optimal $O(\log n)$-approximation of vertex connectivity: centralized in $\widetilde{O}(m)$ and distributed in $\widetilde{O}(D+\sqrt{n})$. The former moves toward the 1974 conjecture of Aho, Hopcroft, and Ullman postulating an $O(m)$ centralized exact algorithm, while the latter is the first distributed vertex connectivity approximation.
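
    To make the tree-packing notion concrete, here is a small sequential sketch (not the paper's distributed, fractional algorithm): it greedily peels edge-disjoint spanning trees off an undirected graph with a union-find structure. The function name and the edge ordering are illustrative assumptions; greedy peeling is order-sensitive and only a heuristic, whereas the paper packs about $\lceil\frac{\lambda-1}{2}\rceil$ weighted trees with provable guarantees.

        # Illustrative sketch only: greedily peel edge-disjoint spanning trees.
        # This is NOT the paper's algorithm; greedy peeling is order-sensitive and
        # can miss trees that a proper (matroid-union / Nash-Williams) packing finds.
        def peel_spanning_trees(n, edges):
            """n: number of vertices 0..n-1; edges: list of undirected (u, v) pairs."""
            remaining = list(edges)
            trees = []
            while True:
                parent = list(range(n))            # union-find for one spanning tree

                def find(x):
                    while parent[x] != x:
                        parent[x] = parent[parent[x]]
                        x = parent[x]
                    return x

                tree, rest = [], []
                for u, v in remaining:
                    ru, rv = find(u), find(v)
                    if ru != rv:
                        parent[ru] = rv
                        tree.append((u, v))
                    else:
                        rest.append((u, v))
                if len(tree) < n - 1:              # leftovers no longer span the graph
                    return trees
                trees.append(tree)
                remaining = rest

        # K4 has edge-connectivity 3; with this edge order the greedy peel finds
        # two edge-disjoint spanning trees.
        print(peel_spanning_trees(4, [(0, 1), (1, 2), (2, 3), (0, 2), (0, 3), (1, 3)]))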

    2-Vertex Connectivity in Directed Graphs

    We complement our study of 2-connectivity in directed graphs by considering the computation of the following 2-vertex-connectivity relations: We say that two vertices v and w are 2-vertex-connected if there are two internally vertex-disjoint paths from v to w and two internally vertex-disjoint paths from w to v. We also say that v and w are vertex-resilient if the removal of any vertex different from v and w leaves v and w in the same strongly connected component. We show how to compute the above relations in linear time, so that we can report in constant time whether two vertices are 2-vertex-connected or whether they are vertex-resilient. We also show how to compute in linear time a sparse certificate for these relations, i.e., a subgraph of the input graph that has O(n) edges and maintains the same 2-vertex-connectivity and vertex-resilience relations as the input graph, where n is the number of vertices.
    Comment: arXiv admin note: substantial text overlap with arXiv:1407.304
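
    The two relations are easy to state as brute-force checks, which may help pin down the definitions even though they are far from the linear-time bounds above. The sketch below is an illustration only, not the paper's algorithm: it tests vertex-resilience by deleting each candidate vertex, and counts internally vertex-disjoint paths with a standard vertex-splitting max-flow reduction via networkx.

        # Brute-force illustration of the two relations (not the paper's O(m+n) method).
        import networkx as nx

        def vertex_resilient(G, v, w):
            """v, w are vertex-resilient if removing any other single vertex leaves
            them in the same strongly connected component."""
            for x in G.nodes:
                if x in (v, w):
                    continue
                H = G.copy()
                H.remove_node(x)
                if not any(v in c and w in c for c in nx.strongly_connected_components(H)):
                    return False
            return True

        def disjoint_path_count(G, s, t):
            """Max number of internally vertex-disjoint s->t paths, via vertex
            splitting and max flow (a direct s->t edge counts as one path)."""
            H = nx.DiGraph()
            for x in G.nodes:
                cap = G.number_of_nodes() if x in (s, t) else 1   # only inner vertices get capacity 1
                H.add_edge((x, 'in'), (x, 'out'), capacity=cap)
            for a, b in G.edges:
                H.add_edge((a, 'out'), (b, 'in'), capacity=1)
            return nx.maximum_flow_value(H, (s, 'out'), (t, 'in'))

        def two_vertex_connected(G, v, w):
            return disjoint_path_count(G, v, w) >= 2 and disjoint_path_count(G, w, v) >= 2

        # Bidirected 4-cycle: every pair is both vertex-resilient and 2-vertex-connected.
        C = nx.DiGraph([(i, (i + 1) % 4) for i in range(4)] + [((i + 1) % 4, i) for i in range(4)])
        print(vertex_resilient(C, 0, 2), two_vertex_connected(C, 0, 2))   # True True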

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly available real-world graph (the Hyperlink Web graph, with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the work that does target the Hyperlink graph uses distributed or external memory. It is therefore natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that the running times of our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS).
    Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
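
    As a rough intuition for the programming style such shared-memory frameworks rely on (a generic sketch, not GBBS code; GBBS itself is a C++ library): graph traversals are phrased as per-round maps over a vertex frontier, which is exactly the loop a multicore runtime can parallelize with atomic updates.

        # Toy frontier-based BFS; in GBBS-style frameworks the inner loops over the
        # frontier's out-edges run in parallel, claiming vertices with compare-and-swap.
        def bfs_parents(adj, source):
            """adj: list of adjacency lists; returns the BFS-tree parent array."""
            parent = [-1] * len(adj)
            parent[source] = source
            frontier = [source]
            while frontier:
                next_frontier = []
                for u in frontier:               # parallel "edge map" in a real runtime
                    for v in adj[u]:
                        if parent[v] == -1:      # would be an atomic CAS in parallel
                            parent[v] = u
                            next_frontier.append(v)
                frontier = next_frontier
            return parent

        print(bfs_parents([[1, 2], [0, 3], [0, 3], [1, 2]], 0))   # [0, 0, 0, 1]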

    Strong Connectivity in Directed Graphs under Failures, with Application

    In this paper, we investigate some basic connectivity problems in directed graphs (digraphs). Let $G$ be a digraph with $m$ edges and $n$ vertices, and let $G\setminus e$ be the digraph obtained after deleting edge $e$ from $G$. As a first result, we show how to compute in $O(m+n)$ worst-case time: (i) the total number of strongly connected components in $G\setminus e$, for all edges $e$ in $G$; (ii) the size of the largest and of the smallest strongly connected components in $G\setminus e$, for all edges $e$ in $G$. Let $G$ be strongly connected. We say that edge $e$ separates two vertices $x$ and $y$ if $x$ and $y$ are no longer strongly connected in $G\setminus e$. As a second set of results, we show how to build in $O(m+n)$ time $O(n)$-space data structures that can answer in optimal time the following basic connectivity queries on digraphs: (i) report in $O(n)$ worst-case time all the strongly connected components of $G\setminus e$, for a query edge $e$; (ii) test whether an edge separates two query vertices in $O(1)$ worst-case time; (iii) report all edges that separate two query vertices in optimal worst-case time, i.e., in time $O(k)$, where $k$ is the number of separating edges (for $k=0$, the time is $O(1)$). All of the above results extend to vertex failures. All our bounds are tight and are obtained with a common algorithmic framework, based on a novel compact representation of the decompositions induced by the $1$-connectivity (i.e., $1$-edge and $1$-vertex) cuts in digraphs, which might be of independent interest. With the help of our data structures, we can design efficient algorithms for several other connectivity problems on digraphs, and we can also obtain in linear time a strongly connected spanning subgraph of $G$ with $O(n)$ edges that maintains the $1$-connectivity cuts of $G$ and the decompositions induced by those cuts.
    Comment: An extended abstract of this work appeared in the SODA 201
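
    For contrast with the $O(m+n)$ bound of result (i), the naive baseline recomputes strongly connected components once per deleted edge, costing $O(m(m+n))$ overall. The sketch below is only that baseline, written with networkx for brevity; the example digraph is a made-up illustration.

        # Naive baseline for result (i): one SCC recomputation per edge failure.
        import networkx as nx

        def scc_counts_under_edge_failures(G):
            """Map each edge e to the number of SCCs of G with e deleted."""
            counts = {}
            for e in list(G.edges):
                H = G.copy()
                H.remove_edge(*e)
                counts[e] = nx.number_strongly_connected_components(H)
            return counts

        # A strongly connected digraph: the 3-cycle 0->1->2->0 plus the 2-cycle 2<->3.
        G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 2)])
        print(scc_counts_under_edge_failures(G))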

    Contractions, Removals and How to Certify 3-Connectivity in Linear Time

    It is well-known as an existence result that every 3-connected graph G=(V,E) on more than 4 vertices admits a sequence of contractions and a sequence of removal operations down to K_4 such that every intermediate graph is 3-connected. We show that both sequences can be computed in optimal time, improving the previously best known running time of O(|V|^2) to O(|V|+|E|). This also settles the open question of finding a certifying linear-time 3-connectivity test, and it extends to a certifying 3-edge-connectivity test in the same time. The certificates used are easy to verify in time O(|E|).
    Comment: preliminary version
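
    For reference, the property being certified can be checked naively by deleting every vertex pair and testing connectivity, an O(|V|^2 (|V|+|E|)) brute force that the linear-time algorithm above avoids. The sketch below only pins down the definition (networkx is used for convenience); it is not the certifying algorithm.

        # Brute-force 3-connectivity check: more than 3 vertices and no separating
        # pair of vertices. Purely illustrative.
        import itertools
        import networkx as nx

        def is_three_connected(G):
            if G.number_of_nodes() <= 3:
                return False
            for u, v in itertools.combinations(G.nodes, 2):
                H = G.copy()
                H.remove_nodes_from([u, v])
                if not nx.is_connected(H):
                    return False
            return True

        print(is_three_connected(nx.complete_graph(4)))   # True: K_4 is 3-connected
        print(is_three_connected(nx.cycle_graph(5)))      # False: a cycle is only 2-connected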

    A Planarity Test via Construction Sequences

    Optimal linear-time algorithms for testing the planarity of a graph have been known for over 35 years. However, these algorithms are quite involved, and recent publications still try to give simpler linear-time tests. We give a simple reduction from planarity testing to the problem of computing a certain construction sequence of a 3-connected graph. The approach differs from previous planarity tests; as its key concept, we maintain a planar embedding that is 3-connected at each point in time. The algorithm runs in linear time and computes a planar embedding if the input graph is planar, and a Kuratowski subdivision otherwise.
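
    The dichotomy at the end (a planar embedding as a positive witness, a Kuratowski subdivision as a negative one) can be explored directly with networkx's built-in planarity test, which returns exactly such certificates. The snippet demonstrates only that output contract, not the construction-sequence algorithm described above.

        # Planar case: get an embedding; non-planar case: get a Kuratowski subgraph.
        import networkx as nx

        ok, embedding = nx.check_planarity(nx.complete_graph(4))
        print(ok)                                      # True: K_4 is planar
        print(list(embedding.neighbors_cw_order(0)))   # clockwise neighbour order of vertex 0

        ok, kuratowski = nx.check_planarity(nx.complete_graph(5), counterexample=True)
        print(ok, sorted(kuratowski.edges()))          # False, plus a K_5 witness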