
    Algorithmic Aspects of a General Modular Decomposition Theory

    A new general decomposition theory inspired by modular graph decomposition is presented. It helps unify modular decomposition across different structures, including (but not restricted to) graphs. Moreover, even in the case of graphs, the term "module" not only captures the classical graph modules but also handles 2-connected components, star-cutsets, and other vertex subsets. The main result is that most of the algorithmic tools developed for modular decomposition of graphs still apply efficiently to our generalisation of modules. Furthermore, when an essential axiom is satisfied, almost all of the important properties can be recovered. In this case, an algorithm given by Ehrenfeucht, Gabow, McConnell, and Sullivan (1994) is generalised and yields a very efficient solution to the associated decomposition problem.
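    For orientation, the classical graph notion that this theory generalises can be stated concretely: a vertex set M is a module of a graph if every vertex outside M is adjacent to either all of M or none of M. The sketch below simply checks this condition; the adjacency-set representation and names are illustrative, not taken from the paper.

        def is_module(adj, module):
            # adj maps each vertex to its set of neighbours; module is a set of vertices.
            # Classical condition: every vertex outside the module sees all of it or none of it.
            for v in adj:
                if v in module:
                    continue
                seen = adj[v] & module
                if seen and seen != module:
                    return False
            return True

        # Path a-b-c-d: {b, c} is not a module (a sees b but not c),
        # while any single vertex trivially is.
        adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
        print(is_module(adj, {"b", "c"}))   # False
        print(is_module(adj, {"b"}))        # True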

    Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

    There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly available real-world graph (the Hyperlink Web graph, with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller graphs, and the work that does target the Hyperlink graph uses distributed or external memory. Therefore, it is natural to ask whether we can efficiently solve a broad class of graph problems on this graph in memory. This paper shows that theoretically-efficient parallel graph algorithms can scale to the largest publicly available graphs using a single machine with a terabyte of RAM, processing them in minutes. We give implementations of theoretically-efficient parallel algorithms for 20 important graph problems. We also present the optimizations and techniques that we used in our implementations, which were crucial in enabling us to process these large graphs quickly. We show that our implementations outperform existing state-of-the-art implementations on the largest real-world graphs. For many of the problems that we consider, this is the first time they have been solved on graphs at this scale. We have made the implementations developed in this work publicly available as the Graph-Based Benchmark Suite (GBBS). Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
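    To put the terabyte-of-RAM claim in perspective, a back-of-envelope estimate (not from the paper) shows why an uncompressed adjacency layout is already tight at this scale. The byte widths below are assumptions for illustration only; the paper's own representation and optimizations are what make in-memory processing practical.

        # Rough memory estimate for the Hyperlink Web graph in a plain CSR layout.
        # Vertex and edge counts are taken from the abstract; byte widths are assumed.
        VERTICES = 3.5e9
        EDGES = 128e9

        def csr_bytes(offset_bytes, endpoint_bytes):
            # One offset per vertex plus one endpoint id per edge.
            return VERTICES * offset_bytes + EDGES * endpoint_bytes

        # 3.5e9 vertex ids do not fit in 32 bits, so 8-byte endpoints are the
        # simplest uncompressed choice; 5 bytes is a hypothetical packed id.
        for label, endpoint_bytes in [("8-byte endpoints", 8), ("5-byte endpoints", 5)]:
            total = csr_bytes(offset_bytes=8, endpoint_bytes=endpoint_bytes)
            print(f"{label}: {total / 1e12:.2f} TB")
        # 8-byte endpoints: 1.05 TB  (already above a 1 TB budget)
        # 5-byte endpoints: 0.67 TB  (fits, hinting why compact encodings matter)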

    Computing the Girth of a Planar Graph in Linear Time

    The girth of a graph is the minimum weight of a simple cycle in the graph; for an unweighted graph, this is the length of a shortest cycle. We study the problem of determining the girth of an n-node unweighted undirected planar graph. The first non-trivial algorithm for the problem, given by Djidjev, runs in O(n^{5/4} log n) time. Chalermsook, Fakcharoenphol, and Nanongkai reduced the running time to O(n log^2 n). Weimann and Yuster further reduced the running time to O(n log n). In this paper, we solve the problem in O(n) time. Comment: 20 pages, 7 figures, accepted to SIAM Journal on Computing.
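    For contrast with the linear-time planar algorithm above, a standard exact baseline for general unweighted graphs runs a breadth-first search from every vertex and closes a cycle at each non-tree edge, taking O(nm) time. A minimal sketch of that baseline (representation and names are illustrative, not from the paper):

        from collections import deque

        def girth_unweighted(adj):
            # adj maps each vertex to its set of neighbours (undirected, unweighted).
            # Baseline: BFS from every vertex; each non-tree edge (u, w) closes a
            # cycle of length at most dist[u] + dist[w] + 1, and the minimum over
            # all starting vertices is the girth.
            best = float("inf")
            for src in adj:
                dist = {src: 0}
                parent = {src: None}
                queue = deque([src])
                while queue:
                    u = queue.popleft()
                    for w in adj[u]:
                        if w not in dist:
                            dist[w] = dist[u] + 1
                            parent[w] = u
                            queue.append(w)
                        elif parent[u] != w and parent[w] != u:
                            # Non-tree edge: tree paths to u and w plus (u, w) close a cycle.
                            best = min(best, dist[u] + dist[w] + 1)
            return best  # float('inf') if the graph is acyclic

        # 4-cycle 0-1-2-3-0 with a pendant vertex 4: girth is 4.
        adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2, 4}, 4: {3}}
        print(girth_unweighted(adj))  # 4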