
    A Planarity Test via Construction Sequences

    Optimal linear-time algorithms for testing the planarity of a graph have been known for over 35 years. However, these algorithms are quite involved, and recent publications still try to give simpler linear-time tests. We give a simple reduction from planarity testing to the problem of computing a certain construction of a 3-connected graph. The approach is different from previous planarity tests; as a key concept, we maintain a planar embedding that is 3-connected at each point in time. The algorithm runs in linear time and computes a planar embedding if the input graph is planar and a Kuratowski subdivision otherwise.
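    As a rough illustration (a sketch, not the paper's construction-sequence algorithm), the same input/output contract -- an embedding for planar graphs, a Kuratowski subgraph otherwise -- can be exercised with networkx's built-in linear-time planarity test:

        # Sketch using networkx; not the algorithm described in the paper.
        import networkx as nx

        def planarity_certificate(G):
            """Return ('embedding', data) or ('kuratowski', subgraph)."""
            is_planar, cert = nx.check_planarity(G, counterexample=True)
            if is_planar:
                # cert is a PlanarEmbedding: cyclic neighbor order per vertex.
                return "embedding", cert.get_data()
            # cert is a subgraph of G forming a K5- or K3,3-subdivision.
            return "kuratowski", cert

        print(planarity_certificate(nx.complete_graph(4))[0])   # embedding
        print(planarity_certificate(nx.petersen_graph())[0])    # kuratowski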

    Kernelization for Finding Lineal Topologies (Depth-First Spanning Trees) with Many or Few Leaves

    For a given graph G, a depth-first search (DFS) tree T of G is an r-rooted spanning tree such that every edge of G is either an edge of T or connects a descendant and an ancestor in T. A graph G together with a DFS tree is called a lineal topology 𝒯 = (G, r, T). Sam et al. (2023) initiated the study of the parameterized complexity of the Min-LLT and Max-LLT problems, which ask, given a graph G and an integer k ≥ 0, whether G has a DFS tree with at most k and at least k leaves, respectively. In particular, they showed that for the dual parameterization, where the tasks are to find DFS trees with at least n-k and at most n-k leaves, respectively, these problems are fixed-parameter tractable when parameterized by k. However, the proofs were based on Courcelle's theorem, thereby making the running times a tower of exponentials. We prove that both problems admit polynomial kernels with O(k^3) vertices. In particular, this implies FPT algorithms running in k^{O(k)} · n^{O(1)} time. We achieve these results by making use of an O(k)-sized vertex cover structure associated with each problem. This also allows us to demonstrate polynomial kernels for Min-LLT and Max-LLT for the structural parameterization by the vertex cover number. Comment: 16 pages, accepted for presentation at FCT 202
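    A small sketch of the objects involved (using networkx, not the kernelization itself): build one DFS tree and count its leaves. Min-LLT / Max-LLT ask whether some DFS tree with at most / at least k leaves exists, so evaluating a single tree is only the trivial part of the problem.

        # Sketch: one DFS tree and its leaf count; not the paper's kernels.
        import networkx as nx

        def dfs_tree_leaves(G, r):
            T = nx.dfs_tree(G, source=r)      # spanning tree rooted at r
            return [v for v in T if T.out_degree(v) == 0]

        G = nx.star_graph(3)                  # center 0, degree-1 vertices 1..3
        print(len(dfs_tree_leaves(G, 0)))     # 3 leaves when rooted at the center
        print(len(dfs_tree_leaves(G, 1)))     # 2 leaves when rooted at a leaf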

    Detecting Weakly Simple Polygons

    A closed curve in the plane is weakly simple if it is the limit (in the Fréchet metric) of a sequence of simple closed curves. We describe an algorithm to determine whether a closed walk of length n in a simple plane graph is weakly simple in O(n log n) time, improving an earlier O(n^3)-time algorithm of Cortese et al. [Discrete Math. 2009]. As an immediate corollary, we obtain the first efficient algorithm to determine whether an arbitrary n-vertex polygon is weakly simple; our algorithm runs in O(n^2 log n) time. We also describe algorithms that detect weak simplicity in O(n log n) time for two interesting classes of polygons. Finally, we discuss subtle errors in several previously published definitions of weak simplicity. Comment: 25 pages and 13 figures, submitted to SODA 201
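    For contrast, strict simplicity (no two non-adjacent edges of the polygon intersect) is easy to test by brute force; the sketch below does this in O(n^2) time. It does not detect weak simplicity -- curves that touch or overlap but are still limits of simple curves need the far more involved machinery of the paper.

        # Brute-force check for *strict* polygon simplicity; a baseline only.

        def orient(p, q, r):
            """Cross product (q - p) x (r - p); its sign gives the orientation."""
            return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

        def on_segment(p, q, r):
            """True if r lies on segment pq, assuming p, q, r are collinear."""
            return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
                    and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

        def segments_intersect(a, b, c, d):
            o1, o2 = orient(a, b, c), orient(a, b, d)
            o3, o4 = orient(c, d, a), orient(c, d, b)
            if o1 * o2 < 0 and o3 * o4 < 0:
                return True
            if o1 == 0 and on_segment(a, b, c): return True
            if o2 == 0 and on_segment(a, b, d): return True
            if o3 == 0 and on_segment(c, d, a): return True
            if o4 == 0 and on_segment(c, d, b): return True
            return False

        def is_simple_polygon(pts):
            n = len(pts)
            edges = [(pts[i], pts[(i + 1) % n]) for i in range(n)]
            for i in range(n):
                for j in range(i + 1, n):
                    if j == i + 1 or (i == 0 and j == n - 1):
                        continue              # adjacent edges share a vertex
                    if segments_intersect(*edges[i], *edges[j]):
                        return False
            return True

        print(is_simple_polygon([(0, 0), (2, 0), (2, 2), (0, 2)]))  # True
        print(is_simple_polygon([(0, 0), (2, 2), (2, 0), (0, 2)]))  # False (bowtie)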

    Constrained Planarity in Practice -- Engineering the Synchronized Planarity Algorithm

    In the constrained planarity setting, we ask whether a graph admits a planar drawing that additionally satisfies a given set of constraints. These constraints are often derived from very natural problems; prominent examples are Level Planarity, where vertices have to lie on given horizontal lines indicating a hierarchy, and Clustered Planarity, where we additionally draw the boundaries of clusters which recursively group the vertices in a crossing-free manner. Despite a significant amount of attention and substantial theoretical progress on these problems, only very few of the proposed solutions have been put into practice and evaluated experimentally. In this paper, we describe our implementation of the recent quadratic-time algorithm by Bläsius et al. [TALG Vol. 19, No. 4] for solving the problem Synchronized Planarity, which can be seen as a common generalization of several constrained planarity problems, including the aforementioned ones. Our experimental evaluation on an existing benchmark set shows that even our baseline implementation outperforms all competitors by at least an order of magnitude. We systematically investigate the degrees of freedom in the implementation of the Synchronized Planarity algorithm for larger instances and propose several modifications that further improve the performance. Altogether, this allows us to solve instances with up to 100 vertices in milliseconds and instances with up to 100 000 vertices within a few minutes. Comment: to appear in Proceedings of ALENEX 202
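    For intuition, the extra data a Level Planarity instance carries on top of the graph is just a level assignment lev with lev(u) < lev(v) for every directed edge (u, v). The sketch below (illustrative only, unrelated to the Synchronized Planarity implementation) validates such an assignment and checks the obvious necessary condition that the underlying graph is planar at all.

        # Illustrative only: the Level Planarity input data, not the algorithm.
        import networkx as nx

        def respects_levels(G: nx.DiGraph, lev: dict) -> bool:
            """Every edge must go from a lower level to a strictly higher one."""
            return all(lev[u] < lev[v] for u, v in G.edges)

        G = nx.DiGraph([("a", "c"), ("b", "c"), ("c", "d")])
        lev = {"a": 0, "b": 0, "c": 1, "d": 2}
        print(respects_levels(G, lev))                    # True
        print(nx.check_planarity(G.to_undirected())[0])   # True (necessary condition)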

    Die boxes, workstations, graph theory and die charts

    Planarity Variants for Directed Graphs

    Kreuzungen in Cluster-Level-Graphen

    Clustered graphs are an enhanced graph model with a recursive clustering of the vertices according to a given nesting relation. This prime technique for expressing the coherence of certain parts of a graph is used in many applications, such as biochemical pathways and UML class diagrams. For directed clustered graphs, level drawings are usually used, leading to clustered level graphs. In this thesis we analyze the interrelation of clusters and levels and their influence on edge crossings and cluster/edge crossings.
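    A minimal sketch of the data model (hypothetical names, not the thesis's formalism): a clustered level graph can be represented as an underlying digraph, a level assignment, and a rooted inclusion tree whose internal nodes are clusters and whose leaves are the graph's vertices.

        # Illustrative data model for a clustered level graph.
        import networkx as nx

        G = nx.DiGraph([("u", "w"), ("v", "w")])   # underlying digraph
        level = {"u": 0, "v": 0, "w": 1}           # level assignment

        # Inclusion tree: internal nodes are clusters, leaves are vertices of G.
        T = nx.DiGraph([("root", "C1"), ("root", "w"), ("C1", "u"), ("C1", "v")])

        def cluster_members(T, c):
            """Graph vertices contained (recursively) in cluster c."""
            return sorted(x for x in nx.descendants(T, c) if T.out_degree(x) == 0)

        print(cluster_members(T, "C1"))            # ['u', 'v']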

    The many faces of planarity : matching, augmentation, and embedding algorithms for planar graphs

    Automatic determination of information system subsystems development order

    When developing an information system, it is necessary to determine the development order of its subsystems. This problem has so far not been solved formally. We therefore propose a solution that uses the sum of the weights of the feedback arcs in a subsystem ordering as the criterion for determining the development order of the information system's subsystems. Furthermore, we prove that the problem of Information System Subsystems Development Order is NP-complete, NP-hard, and APX-hard. To solve it, we design a branch-and-bound algorithm, a Monte Carlo randomized algorithm, and a heuristic algorithm. We estimate the complexity of all three algorithms, and all three have been implemented and empirically evaluated. Finally, we show how additional constraints can be imposed on the problem in practice when needed, and where the developed algorithms could potentially be used beyond the Information System Subsystems Development Order problem.
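    The criterion itself is easy to state in code: fix an ordering of the subsystems and sum the weights of the arcs that point backwards against it. The sketch below (with hypothetical data and a toy random-restart search, rather than the branch-and-bound, Monte Carlo, or heuristic algorithms from the thesis) illustrates the objective being minimized.

        # Toy illustration of the feedback-arc-weight objective; not the thesis's algorithms.
        import random

        def feedback_weight(order, weighted_arcs):
            """Sum the weights of arcs (u, v) with u placed after v in the order."""
            pos = {s: i for i, s in enumerate(order)}
            return sum(w for (u, v), w in weighted_arcs.items() if pos[u] > pos[v])

        def random_restart(subsystems, weighted_arcs, tries=1000, seed=0):
            rng = random.Random(seed)
            best = list(subsystems)
            for _ in range(tries):
                cand = list(subsystems)
                rng.shuffle(cand)
                if feedback_weight(cand, weighted_arcs) < feedback_weight(best, weighted_arcs):
                    best = cand
            return best, feedback_weight(best, weighted_arcs)

        # Hypothetical dependency weights between four subsystems.
        arcs = {("A", "B"): 3, ("B", "C"): 2, ("C", "A"): 1, ("C", "D"): 4}
        print(random_restart(["A", "B", "C", "D"], arcs))
        # (['A', 'B', 'C', 'D'], 1) -- only the arc C->A points backwards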
