10,640 research outputs found

    Analyzing Traffic Problem Model With Graph Theory Algorithms

    This paper contributes to a practical problem, urban traffic. We investigate its main features, try to simplify the complexity, and formulate this dynamic system. The paper covers how to analyze a decision problem with combinatorial methods and graph theory algorithms, and how to optimize a strategy to obtain a feasible solution by employing other principles of computer science.
    Comment: 7 pages, 5 figures, Science and Information Conference (SAI), 201
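
    A minimal, hedged sketch of the kind of modeling the abstract describes (not the authors' actual formulation): intersections become vertices, road segments become weighted directed edges, and a route is found with Dijkstra's algorithm. The network, weights, and names (dijkstra, road_network) are invented for this illustration.

        # Toy road network as a weighted directed graph; weights are travel times.
        import heapq

        def dijkstra(graph, source):
            """Return shortest travel times from source to every intersection."""
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue  # stale queue entry
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        # intersections A..D; edge weights are travel times in minutes
        road_network = {
            "A": [("B", 4.0), ("C", 2.0)],
            "C": [("B", 1.0), ("D", 7.0)],
            "B": [("D", 3.0)],
        }
        print(dijkstra(road_network, "A"))  # {'A': 0.0, 'B': 3.0, 'C': 2.0, 'D': 6.0}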

    Shortest Distances as Enumeration Problem

    We investigate the single source shortest distance (SSSD) and all pairs shortest distance (APSD) problems as enumeration problems (on unweighted and integer-weighted graphs), meaning that the elements (u, v, d(u, v)) -- where u and v are vertices with shortest distance d(u, v) -- are produced and listed one by one without repetition. The performance is measured in the RAM model of computation with respect to preprocessing time and delay, i.e., the maximum time that elapses between two consecutive outputs. This point of view reveals that specific types of output (e.g., excluding the non-reachable pairs (u, v, ∞), or excluding the self-distances (u, u, 0)) and the order of enumeration (e.g., sorted by distance, sorted row-wise with respect to the distance matrix) have a huge impact on the complexity of APSD, while they appear to have no effect on SSSD. In particular, we show for APSD that enumeration without output restrictions is possible with delay in the order of the average degree. Excluding non-reachable pairs, or requesting the output to be sorted by distance, increases this delay to the order of the maximum degree. Further, for weighted graphs, a delay in the order of the average degree is also not possible without preprocessing or considering self-distances as output. In contrast, for SSSD we find that a delay in the order of the maximum degree without preprocessing is attainable and unavoidable for any of these requirements.
    Comment: Updated version adds the study of space complexity
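
    To make the enumeration viewpoint concrete, here is a hedged sketch (not the paper's algorithm, and the names enumerate_apsd and graph are invented): a generator that lists APSD triples (u, v, d(u, v)) one by one by running a plain BFS from every source of an unweighted graph.

        from collections import deque

        def enumerate_apsd(adj):
            """Yield (u, v, d) for every reachable pair, including self-distances."""
            for source in adj:
                dist = {source: 0}
                queue = deque([source])
                yield (source, source, 0)          # self-distance (u, u, 0)
                while queue:
                    u = queue.popleft()
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            queue.append(v)
                            yield (source, v, dist[v])

        graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
        for triple in enumerate_apsd(graph):
            print(triple)   # outputs appear one by one, never repeated

    This naive generator only illustrates what "enumeration with delay" means; it does not achieve the average-degree or maximum-degree delay bounds stated in the abstract.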

    GraphBLAST: A High-Performance Linear Algebra-based Graph Framework on the GPU

    High-performance implementations of graph algorithms are challenging to realize on new parallel hardware such as GPUs for three reasons: (1) the difficulty of devising graph building blocks, (2) load imbalance on parallel hardware, and (3) the low arithmetic intensity of graph problems. To address some of these challenges, GraphBLAS is an innovative, ongoing effort by the graph analytics community to propose building blocks based on sparse linear algebra, which will allow graph algorithms to be expressed in a performant, succinct, composable, and portable manner. In this paper, we examine the performance challenges of a linear-algebra-based approach to building graph frameworks and describe new design principles for overcoming these bottlenecks. Among the new design principles is exploiting input sparsity, which allows users to write graph algorithms without specifying push and pull direction. Exploiting output sparsity allows users to tell the backend which values of the output of a single vectorized computation they do not want computed. Load balancing is an important feature for distributing work evenly among parallel workers; we describe the load-balancing features needed to handle graphs with different characteristics. The design principles described in this paper have been implemented in "GraphBLAST", the first open-source high-performance linear-algebra-based graph framework on NVIDIA GPUs. The results show that on a single GPU, GraphBLAST has on average at least an order of magnitude speedup over the previous GraphBLAS implementations SuiteSparse and GBTL, comparable performance to the fastest GPU hardwired primitives and the shared-memory graph frameworks Ligra and Gunrock, and better performance than any other GPU graph framework, while offering a simpler and more concise programming model.
    Comment: 50 pages, 14 figures, 14 tables
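
    A hedged sketch of the general linear-algebra formulation that GraphBLAS-style frameworks build on (not GraphBLAST's actual API): BFS expressed as repeated sparse matrix-vector products, approximated here with scipy.sparse and numpy, with a mask step that stands in for the output-sparsity idea. The function bfs_levels and the toy graph are assumptions of this sketch.

        import numpy as np
        from scipy.sparse import csr_matrix

        def bfs_levels(adjacency, source):
            """Return the BFS level of every vertex (-1 if unreachable)."""
            n = adjacency.shape[0]
            levels = np.full(n, -1, dtype=np.int64)
            frontier = np.zeros(n, dtype=bool)
            frontier[source] = True
            level = 0
            while frontier.any():
                levels[frontier] = level
                # "advance": neighbours of the whole frontier via a transposed mat-vec
                reached = adjacency.T.dot(frontier.astype(np.int8)) > 0
                # "mask": drop vertices that already have a level (output sparsity)
                frontier = reached & (levels == -1)
                level += 1
            return levels

        # 4-vertex directed graph: 0->1, 0->2, 2->3
        rows, cols = [0, 0, 2], [1, 2, 3]
        A = csr_matrix((np.ones(3, dtype=np.int8), (rows, cols)), shape=(4, 4))
        print(bfs_levels(A, 0))   # [0 1 1 2]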

    Dynamic Complexity of Planar 3-connected Graph Isomorphism

    Dynamic Complexity (as introduced by Patnaik and Immerman) tries to express how hard it is to update the solution to a problem when the input is changed slightly. It considers the changes required to some stored data structure (possibly a massive database) as small quantities of data (or a tuple) are inserted into or deleted from the database (or a structure over some vocabulary). The main difference from previous notions of dynamic complexity is that instead of treating the update quantitatively by finding time/space trade-offs, it treats the update qualitatively, by finding the complexity class in which the update can be expressed (or made). In this setting, DynFO, or Dynamic First-Order, is one of the smallest and most natural complexity classes (since SQL queries can be expressed in First-Order Logic), and contains those problems whose solutions (or the stored data structure from which the solution can be found) can be updated in First-Order Logic when the data structure undergoes small changes. Etessami considered the problem of isomorphism in the dynamic setting and showed that Tree Isomorphism can be decided in DynFO. In this work, we show that isomorphism of Planar 3-connected graphs can be decided in DynFO+ (which is DynFO with some polynomial precomputation). We maintain a canonical description of 3-connected Planar graphs by maintaining a database which is accessed and modified by First-Order queries when edges are added to or deleted from the graph. We specifically exploit the ideas of Breadth-First Search and Canonical Breadth-First Search to prove the results. We also introduce a novel method for canonizing a 3-connected planar graph in First-Order Logic from Canonical Breadth-First Search Trees.
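
    As a static, hedged illustration of what canonization buys (related to the Tree Isomorphism result cited above, not the paper's first-order dynamic construction): AHU-style canonical codes for rooted trees, so that two rooted trees are isomorphic exactly when their codes are equal. The function canonical_code and the example trees are invented for this sketch.

        def canonical_code(tree, root):
            """tree maps each node to the list of its children; returns a canonical string."""
            child_codes = sorted(canonical_code(tree, c) for c in tree.get(root, []))
            return "(" + "".join(child_codes) + ")"

        t1 = {0: [1, 2], 1: [3], 2: []}              # rooted at 0
        t2 = {"a": ["b", "c"], "c": ["d"], "b": []}  # rooted at "a", children swapped
        print(canonical_code(t1, 0) == canonical_code(t2, "a"))  # True: isomorphic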

    Fully Dynamic Single-Source Reachability in Practice: An Experimental Study

    Given a directed graph and a source vertex, the fully dynamic single-source reachability problem is to maintain the set of vertices that are reachable from the given vertex, subject to edge deletions and insertions. It is one of the most fundamental problems on graphs and appears directly or indirectly in many and varied applications. While there has been theoretical work on this problem, showing both linear conditional lower bounds for the fully dynamic problem and insertions-only and deletions-only upper bounds beating these conditional lower bounds, there has been no experimental study that compares the performance of fully dynamic reachability algorithms in practice. Previous experimental studies in this area concentrated only on the more general all-pairs reachability or transitive closure problem and did not use real-world dynamic graphs. In this paper, we bridge this gap by empirically studying an extensive set of algorithms for the single-source reachability problem in the fully dynamic setting. In particular, we design several fully dynamic variants of well-known approaches to obtaining and maintaining reachability information with respect to a distinguished source. Moreover, we extend the existing insertions-only or deletions-only upper bounds into fully dynamic algorithms. Even though the worst-case time per operation of all the fully dynamic algorithms we evaluate is at least linear in the number of edges in the graph (as is to be expected given the conditional lower bounds), our extensive experimental evaluation shows that their performance differs greatly, both on generated and on real-world instances.
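
    A hedged sketch of one simple fully dynamic baseline (not a specific algorithm from the study; the class DynamicReachability is invented here): keep the set of vertices reachable from the source, extend it with a BFS on insertions, and recompute it with a full BFS on deletions, which already exhibits the linear worst case per operation mentioned in the abstract.

        from collections import deque

        class DynamicReachability:
            def __init__(self, n, source):
                self.out = [set() for _ in range(n)]
                self.source = source
                self.reachable = {source}

            def _bfs_from(self, starts):
                queue = deque(starts)
                while queue:
                    u = queue.popleft()
                    for v in self.out[u]:
                        if v not in self.reachable:
                            self.reachable.add(v)
                            queue.append(v)

            def insert_edge(self, u, v):
                self.out[u].add(v)
                if u in self.reachable and v not in self.reachable:
                    self.reachable.add(v)
                    self._bfs_from([v])          # explore only the newly reached part

            def delete_edge(self, u, v):
                self.out[u].discard(v)
                self.reachable = {self.source}   # rebuild from scratch
                self._bfs_from([self.source])

        g = DynamicReachability(4, source=0)
        g.insert_edge(0, 1); g.insert_edge(1, 2)
        print(sorted(g.reachable))   # [0, 1, 2]
        g.delete_edge(0, 1)
        print(sorted(g.reachable))   # [0]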

    Theory and Techniques for Synthesizing a Family of Graph Algorithms

    Although Breadth-First Search (BFS) has several advantages over Depth-First Search (DFS), its prohibitive space requirements have meant that algorithm designers often pass it over in favor of DFS. To address this shortcoming, we introduce a theory of Efficient BFS (EBFS) along with a simple recursive program schema for carrying out the search. The theory is based on dominance relations, a long-standing technique from the field of search algorithms. We show how the theory can be used to systematically derive solutions to two graph problems, namely the Single Source Shortest Path problem and the Minimum Spanning Tree problem. The solutions are found by making small systematic changes to the derivation, revealing the connections between the two problems, which are often obscured in textbook presentations.
    Comment: In Proceedings SYNT 2012, arXiv:1207.055
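
    A hedged illustration of the shared structure the abstract alludes to (not the authors' EBFS/dominance-relation derivation; best_first and the toy graph are assumptions of this sketch): one best-first schema whose priority rule instantiates to Dijkstra for SSSP and to Prim for MST.

        import heapq

        def best_first(adj, start, priority):
            """adj[u] -> list of (v, w); priority(key_of_u, w) -> candidate key of v."""
            key = {start: 0}
            heap = [(0, start)]
            done = set()
            while heap:
                k, u = heapq.heappop(heap)
                if u in done:
                    continue
                done.add(u)
                for v, w in adj.get(u, []):
                    nk = priority(k, w)
                    if v not in done and nk < key.get(v, float("inf")):
                        key[v] = nk
                        heapq.heappush(heap, (nk, v))
            return key

        # small undirected graph, both directions listed explicitly
        graph = {
            0: [(1, 4), (2, 1)],
            1: [(0, 4), (2, 2), (3, 1)],
            2: [(0, 1), (1, 2), (3, 5)],
            3: [(1, 1), (2, 5)],
        }
        sssp = best_first(graph, 0, lambda dist_u, w: dist_u + w)  # Dijkstra: path length
        mst  = best_first(graph, 0, lambda _dist_u, w: w)          # Prim: attaching edge weight
        print(sssp)  # {0: 0, 1: 3, 2: 1, 3: 4}
        print(mst)   # {0: 0, 1: 2, 2: 1, 3: 1}

    Only the priority rule changes between the two instantiations, which is the kind of connection between SSSP and MST that the abstract says textbook presentations tend to obscure.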

    Gunrock: A High-Performance Graph Processing Library on the GPU

    For large-scale graph analytics on the GPU, the irregularity of data access and control flow and the complexity of programming GPUs have been two significant challenges for developing a programmable high-performance graph library. "Gunrock", our graph-processing system designed specifically for the GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on operations on a vertex or edge frontier. Gunrock achieves a balance between performance and expressiveness by coupling high-performance GPU computing primitives and optimization strategies with a high-level programming model that allows programmers to quickly develop new graph primitives with small code size and minimal GPU programming knowledge. We evaluate Gunrock on five key graph primitives and show that Gunrock has on average at least an order of magnitude speedup over Boost and PowerGraph, comparable performance to the fastest GPU hardwired primitives, and better performance than any other GPU high-level graph library.
    Comment: 14 pages, accepted by PPoPP'16 (removed the text repetition in the previous version v5)
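
    A hedged sketch of the frontier-centric, bulk-synchronous pattern the abstract describes, shown in plain Python rather than Gunrock's GPU API (frontier_bfs and the toy adjacency list are invented for this illustration): each iteration expands the whole vertex frontier in bulk, then filters out already-visited vertices.

        def frontier_bfs(adj, source):
            depth = {source: 0}
            frontier = [source]
            iteration = 0
            while frontier:
                iteration += 1
                # advance: gather all neighbours of the current frontier at once
                candidates = [v for u in frontier for v in adj.get(u, [])]
                # filter: keep only vertices seen for the first time
                next_frontier = []
                for v in candidates:
                    if v not in depth:
                        depth[v] = iteration
                        next_frontier.append(v)
                frontier = next_frontier
            return depth

        adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
        print(frontier_bfs(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2}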
    • …