
    Steinitz Theorems for Orthogonal Polyhedra

    We define a simple orthogonal polyhedron to be a three-dimensional polyhedron with the topology of a sphere in which three mutually perpendicular edges meet at each vertex. By analogy to Steinitz's theorem characterizing the graphs of convex polyhedra, we find graph-theoretic characterizations of three classes of simple orthogonal polyhedra: corner polyhedra, which can be drawn by isometric projection in the plane with only one hidden vertex; xyz polyhedra, in which each axis-parallel line through a vertex contains exactly one other vertex; and arbitrary simple orthogonal polyhedra. In particular, the graphs of xyz polyhedra are exactly the bipartite cubic polyhedral graphs, and every bipartite cubic polyhedral graph with a 4-connected dual graph is the graph of a corner polyhedron. Based on our characterizations we find efficient algorithms for constructing orthogonal polyhedra from their graphs. Comment: 48 pages, 31 figures
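
    The xyz characterization above can be checked mechanically. Below is a minimal sketch (assuming the networkx library; the function name and the example graph are illustrative, not from the paper) that tests whether a graph is bipartite, cubic, and polyhedral, i.e., 3-connected planar by Steinitz's theorem, which per the abstract is exactly the condition for being the graph of an xyz polyhedron.

        # Sketch: test the graph-theoretic conditions from the xyz-polyhedron
        # characterization (bipartite + cubic + polyhedral). Assumes networkx.
        import networkx as nx

        def is_xyz_polyhedron_graph(G: nx.Graph) -> bool:
            cubic = all(d == 3 for _, d in G.degree())     # 3-regular
            bipartite = nx.is_bipartite(G)
            planar, _ = nx.check_planarity(G)
            three_connected = G.number_of_nodes() > 3 and nx.node_connectivity(G) >= 3
            # polyhedral graph = planar and 3-connected (Steinitz's theorem)
            return cubic and bipartite and planar and three_connected

        cube = nx.hypercube_graph(3)   # graph of the cube, an xyz polyhedron
        print(is_xyz_polyhedron_graph(cube))  # expected: True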

    Improved Bounds for Shortest Paths in Dense Distance Graphs

    We study the problem of computing shortest paths in so-called dense distance graphs, a basic building block for designing efficient planar graph algorithms. Let G be a plane graph with a distinguished set ∂G of boundary vertices lying on a constant number of faces of G. A distance clique of G is a complete graph on ∂G encoding all-pairs distances between these vertices. A dense distance graph is a union of possibly many unrelated distance cliques. Fakcharoenphol and Rao [Fakcharoenphol and Rao, 2006] proposed an efficient implementation of Dijkstra's algorithm (later called FR-Dijkstra) computing single-source shortest paths in a dense distance graph. Their algorithm spends O(b log^2 n) time per distance clique with b vertices, even though a clique has b^2 edges. Here, n is the total number of vertices of the dense distance graph. The invention of FR-Dijkstra was instrumental in obtaining such results for planar graphs as nearly-linear time algorithms for multiple-source multiple-sink maximum flow and dynamic distance oracles with sublinear update and query bounds. At the heart of FR-Dijkstra lies a data structure updating distance labels and extracting minimum-labeled vertices in O(log^2 n) amortized time per vertex. We show an improved data structure with O(log^2 n / log^2 log n) amortized bounds. This is the first improvement over the data structure of Fakcharoenphol and Rao in more than 15 years. It yields improved bounds for all problems on planar graphs for which computing shortest paths in dense distance graphs is currently a bottleneck.
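
    To make the setting concrete, the sketch below runs plain Dijkstra directly on a dense distance graph represented (for illustration only; this representation and all names are assumptions, not from the paper) as a list of distance cliques, each given by its vertex ids and a b-by-b distance matrix. It relaxes all b^2 edges of every clique it touches, which is exactly the naive cost that FR-Dijkstra, and the improved data structure of this paper, avoid.

        # Naive single-source shortest paths on a dense distance graph given as a
        # list of distance cliques (vertex_ids, dist_matrix), where
        # dist_matrix[i][j] is the distance from vertex_ids[i] to vertex_ids[j].
        # This scans all b^2 edges of a clique, unlike FR-Dijkstra's O(b log^2 n).
        import heapq
        import math

        def naive_ddg_dijkstra(cliques, source):
            dist = {source: 0.0}
            heap = [(0.0, source)]
            containing = {}                      # vertex -> cliques containing it
            for ci, (ids, _) in enumerate(cliques):
                for i, v in enumerate(ids):
                    containing.setdefault(v, []).append((ci, i))
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, math.inf):
                    continue                     # stale heap entry
                for ci, i in containing.get(u, []):
                    ids, mat = cliques[ci]
                    for j, v in enumerate(ids):  # relax every clique edge
                        nd = d + mat[i][j]
                        if nd < dist.get(v, math.inf):
                            dist[v] = nd
                            heapq.heappush(heap, (nd, v))
            return dist

        # Toy example: two cliques sharing boundary vertex 2.
        cliques = [([1, 2], [[0, 5], [5, 0]]), ([2, 3], [[0, 2], [2, 0]])]
        print(naive_ddg_dijkstra(cliques, 1))    # {1: 0.0, 2: 5.0, 3: 7.0}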

    Struktury danych i algorytmy dynamiczne dla grafów planarnych (Data Structures and Dynamic Algorithms for Planar Graphs)

    Obtaining provably efficient algorithms for the most basic graph problems, such as finding (shortest) paths or computing maximum matchings, that are fast enough to handle real-world-scale graphs (i.e., graphs with millions of vertices and edges) is a very challenging task. For example, in the very general regime of strongly-polynomial algorithms (see, e.g., [65]), we still do not know how to compute shortest paths in a real-weighted sparse directed graph significantly faster than in quadratic time, using the classical but somewhat simple-minded Bellman-Ford method. One way to circumvent this problem is to consider more restricted computation models for graph algorithms. If, for example, we restrict ourselves to graphs with integral edge weights, we can improve upon the Bellman-Ford algorithm [14, 31]. Although these results are algorithmically very deep, their running times are still very far from the trivial linear lower bound, which is the only lower bound known for the negatively-weighted shortest path problem.

Another approach is to develop algorithms specialized for graph classes that appear in practice. Planar graphs constitute one of the most important and well-studied such classes. Many real-world networks can be drawn in the plane with no or few edge crossings. Examples include not-too-complex road networks and graphs arising in VLSI design. Complex road networks, although far from planar, share some useful properties with planar graphs, such as the existence of small separators [20]. Special cases of planar graphs, such as grids, often appear in image processing (e.g., [7]). And indeed, if we restrict ourselves to planar graphs, many classical polynomial-time graph problems, in particular computing shortest paths [35, 58] and maximum flows [4, 5, 21] in real-weighted graphs, can be solved either optimally or in nearly-linear time. The very rich combinatorial structure of planar graphs often allows breaking barriers that appear in the corresponding problems for general graphs, either by using techniques from computational geometry (e.g., [27]) or by applying sophisticated data structures such as dynamic trees [4, 10, 21, 66].

In this thesis, we focus on the data-structural aspect of planar graph algorithmics. By this we mean that rather than concentrating on particular planar graph problems, we study more abstract, "low-level" problems. Efficient algorithms for these problems can be used in a black-box manner to design algorithms for multiple specific problems at once. Such an approach allows us to improve upon many known complexity upper bounds for different planar graph problems simultaneously, without going into the specifics of these problems. We also study dynamic algorithms for planar graphs, i.e., algorithms that maintain certain information about a dynamically changing graph (such as "is the graph connected?") much more efficiently than by recomputing this information from scratch after each update. We consider the edge-update model, where the input graph can be modified only by adding or removing single edges. A graph algorithm is called fully dynamic if it supports both edge insertions and edge deletions, and partially dynamic if it supports either only edge insertions (in which case we call it incremental) or only edge deletions (in which case it is called decremental).
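
As a concrete example of the incremental setting (and of the union-find connection recalled in the next paragraph), the sketch below answers connectivity queries under edge insertions with a standard disjoint-set structure. This is textbook code, not code from the thesis.

    # Incremental connectivity: edge insertions become union operations and
    # connectivity queries become find comparisons (textbook union-find sketch).
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def insert_edge(self, x, y):             # union
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.size[rx] < self.size[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx                 # union by size
            self.size[rx] += self.size[ry]

        def connected(self, x, y):               # is there a path between x and y?
            return self.find(x) == self.find(y)

    uf = UnionFind(4)
    uf.insert_edge(0, 1); uf.insert_edge(2, 3)
    print(uf.connected(0, 1), uf.connected(1, 2))  # True False
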
When designing dynamic graph algorithms, we care about the update time, i.e., the time needed by the algorithm to adapt to an elementary change of the graph, and the query time, i.e., the time needed by the algorithm to recompute the requested portion of the maintained information. Sometimes, especially in partially dynamic settings, it is more convenient to measure the total update time, i.e., the total time needed by the algorithm to process any possible sequence of updates. For some dynamic problems it is worth focusing on a more restricted explicit maintenance model, in which the entire maintained information is explicitly updated (so that the user is notified about the change) after each update. In this model the query procedure is trivial, and thus we only care about the update time. Note that there is actually no clear distinction between dynamic graph algorithms and graph data structures, since dynamic algorithms are often used as black boxes to obtain efficient static algorithms (e.g., [26]). For example, the incremental connectivity problem, where one needs to answer queries about the existence of a path between given vertices while the input undirected graph undergoes edge insertions, is equivalent to the disjoint-set data structure problem, also called the union-find problem (see, e.g., [15]). We concentrate mostly on the decremental model and obtain very efficient decremental algorithms for problems on unweighted planar graphs related to reachability and connectivity. We also apply our dynamic algorithms to static problems, thus confirming once again the data-structural character of these results. In the following, let G = (V, E) denote the input planar graph with n vertices. For clarity of this summary, assume G is a simple graph; then, by planarity, it has O(n) edges. When we talk about general graphs, we denote by m the number of edges of the graph.

Contracting a Planar Graph

The first part of the thesis is devoted to the data-structural aspect of contracting edges in planar graphs. Edge contraction is one of the fundamental graph operations: given an undirected graph and an edge e, contracting e consists of removing it from the graph and merging its endpoints. The notion of contraction has been used to describe a number of prominent graph algorithms, including Edmonds' algorithm for computing maximum matchings [19] and Karger's minimum cut algorithm [44]. Edge contractions are of particular interest in planar graphs, as a number of planar graph properties can be described using contractions. For example, it is well known that a graph is planar precisely when it cannot be transformed into K5 or K3,3 by contracting edges or removing vertices or edges (see, e.g., [17]). Moreover, contracting an edge preserves planarity. We would like to have at our disposal a data structure that performs contractions on the input planar graph and still provides access to the most basic information about the graph, such as the sizes of the neighbor sets of individual vertices and the adjacency relation. While the contraction operation is conceptually very simple, its efficient implementation is challenging, because it is not clear how to represent the individual vertices' adjacency lists so that adjacency-list merges, adjacency queries, and neighborhood-size queries are all efficient. By using standard data structures (e.g., balanced binary search trees), one can maintain the adjacency lists of a graph subject to contractions in polylogarithmic amortized time.
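
The polylogarithmic baseline just mentioned is easy to sketch. The version below (an illustrative textbook approach, not the thesis's data structure) keeps one hash set of neighbors per super-vertex and merges the smaller set into the larger on each contraction, which gives roughly O(m log n) expected total time rather than the linear total time achieved in the thesis.

    # Baseline for maintaining a graph under edge contractions: union-find tracks
    # merged vertices, adjacency sets are merged smaller-into-larger.
    class ContractibleGraph:
        def __init__(self, n, edges):
            self.parent = list(range(n))
            self.adj = [set() for _ in range(n)]
            for u, v in edges:
                self.adj[u].add(v)
                self.adj[v].add(u)

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def adjacent(self, u, v):
            u, v = self.find(u), self.find(v)
            return v in self.adj[u]

        def degree(self, u):                      # number of distinct neighbors
            return len(self.adj[self.find(u)])

        def contract(self, u, v):
            u, v = self.find(u), self.find(v)
            if u == v:
                return
            if len(self.adj[u]) < len(self.adj[v]):
                u, v = v, u                       # merge smaller set into larger
            self.parent[v] = u
            self.adj[u].discard(v)                # the contracted edge becomes a self-loop
            for w in self.adj[v]:
                if w != u:
                    self.adj[w].discard(v)        # redirect w's edge to the new super-vertex
                    self.adj[w].add(u)
                    self.adj[u].add(w)            # parallel edges collapse in the set
            self.adj[v].clear()

    g = ContractibleGraph(4, [(0, 1), (1, 2), (2, 3), (3, 0)])  # a 4-cycle
    g.contract(0, 1)
    print(g.adjacent(0, 2), g.degree(0))  # True 2
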
However, in many planar graph algorithms this becomes a bottleneck. As an example, consider the problem of computing a 5-coloring of a planar graph. There exists a very simple algorithm based on contractions [53] that relies only on the folklore fact that a planar graph has a vertex of degree at most 5. However, linear-time algorithms solving this problem use more involved planar graph properties [23, 53, 60]. For example, the algorithm by Matula et al. [53] uses the fact that every planar graph has either a vertex of degree at most 4 or a vertex of degree 5 adjacent to at least four vertices, each having degree at most 11. Similarly, although there exists a very simple algorithm for computing a minimum spanning tree of a planar graph based on edge contractions, various different methods have been used to implement it efficiently [23, 51, 52].

The problem of maintaining a planar graph under contractions has been studied before. In their book, Klein and Mozes [46] showed that there exists a (slightly more general) data structure maintaining a planar graph under edge contractions and deletions and answering adjacency queries in O(1) worst-case time; the update time is O(log n). This result is based on the work of Brodal and Fagerberg [8], who showed how to maintain a bounded-outdegree orientation of a dynamic planar graph so that edge-set updates are supported in O(log n) amortized time. Gustedt [32] gave an optimal solution to the union-find problem in the case when, at any time, the actual subsets form disjoint and connected subgraphs of a given planar graph G. In other words, in this problem the allowed unions correspond to the edges of a planar graph, and the execution of a union operation can be seen as the contraction of the respective edge.

Our Results

We show a data structure that can maintain a planar graph subject to edge contractions in linear total time, assuming the standard word-RAM model with word size Ω(log n). It can report groups of parallel edges and self-loops that emerge, supports constant-time adjacency queries, and maintains the neighbor lists and degrees explicitly. The data structure can be used as a black box to implement planar graph algorithms that use contractions. As an example, it can be used to give clean and conceptually simple linear-time implementations of algorithms for computing a 5-coloring or a minimum spanning tree. More importantly, by using our data structure, we give improved algorithms for a few problems on planar graphs. In particular, we obtain optimal algorithms for decremental 2-edge-connectivity (see, e.g., [30]), finding a unique perfect matching [26], and computing maximal 3-edge-connected subgraphs [12].

In order to obtain our result, we first partition the graph into small pieces of roughly logarithmic size (using so-called r-divisions [24]). We then solve our problem recursively for each of the pieces, and separately, using a simple-minded approach, for the subgraph induced by the o(n) vertices contained in multiple pieces (the so-called boundary vertices). Such an approach proved successful in obtaining optimal data structures for the planar union-find problem [32] and for decremental connectivity [50]. In fact, our data-structural problem can be seen as a generalization of the former problem.
However, maintaining the status of each edge e of the initial graph G (i.e., whether e has become a self-loop or a parallel edge) subject to edge contractions, and supporting constant-time adjacency queries without resorting to randomization, turn out to be serious technical challenges. Overcoming these difficulties is our main contribution in this part of the thesis.

Decremental Reachability

The second part of this thesis is devoted to dynamic reachability problems in planar graphs. In the dynamic reachability problem we are given a (directed) graph G subject to edge updates, and the goal is to design a data structure that allows answering queries about the existence of a path between a pair of query vertices u, v ∈ V. Two variants of dynamic reachability are studied most often. In the all-pairs variant, the data structure has to support queries between arbitrary pairs of vertices; this variant is also called the dynamic transitive closure problem, since a path u → v exists in G if and only if uv is an edge of the transitive closure of G. In the single-source reachability problem, a source vertex s ∈ V is fixed from the very beginning and the only allowed queries are about the existence of a path s → v, where v ∈ V. For undirected graphs, the dynamic reachability problem is called the dynamic connectivity problem; note that in the undirected case a path u → v exists in G if and only if a path v → u exists in G.

State of the Art

Dynamic reachability in general directed graphs turns out to be a very challenging problem. First of all, it is computationally much more demanding than its undirected counterpart. For undirected graphs, fully-dynamic all-pairs algorithms with polylogarithmic amortized update and query bounds are known [36, 38, 71]. For directed graphs, on the other hand, in most settings (single-source or all-pairs; incremental, decremental, or fully dynamic) the best known algorithm has either polynomial update time or polynomial query time. The only exception is the incremental single-source reachability problem, for which a trivial extension of depth-first search [68] achieves O(1) amortized update time. One possible reason behind such a big gap between the undirected and directed settings is that the connected components of an undirected graph can be computed in linear time, so there exists an O(n)-space static data structure answering connectivity queries in undirected graphs in O(1) time. On the other hand, the best known algorithm for computing the transitive closure runs in Õ(min(n^ω, nm)) = Õ(n^2) time [11, 59], where Õ(f(n)) denotes O(f(n) polylog n).

So far, the best known bounds for fully-dynamic reachability are as follows. For dynamic transitive closure, there exist a number of algorithms with O(n^2) update time and O(1) query time [16, 61, 64]; these algorithms, in fact, maintain the transitive closure explicitly. There also exist a few fully-dynamic algorithms that are better for sparse graphs, each of which has Ω(n) amortized update time and query time that is o(n) but still polynomial in n [62, 63, 64]. For the single-source variant, the only known non-trivial (i.e., other than recompute-from-scratch) algorithm has O(n^{1.53}) update time and O(1) query time [64]. Algorithms with O(nm) total update time are known for both incremental [39] and decremental [48, 62] transitive closure. Note that for sparse graphs this bound is only polylogarithmic factors away from the best known static transitive closure upper bound [11].
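
The optimal incremental single-source algorithm mentioned above (a trivial extension of graph search; the sketch below uses BFS, but the search order is irrelevant) is simple enough to state in full. Each vertex's out-edges are scanned only once, when the vertex first becomes reachable, so insertions take O(1) amortized time. This is an illustrative rendition of the folklore idea, not code from the thesis.

    # Incremental single-source reachability: maintain the set of vertices
    # reachable from s under edge insertions; O(1) amortized time per insertion.
    from collections import deque

    class IncrementalReachability:
        def __init__(self, n, s):
            self.out = [[] for _ in range(n)]
            self.reached = [False] * n
            self.reached[s] = True

        def _expand(self, start):                # resume the search from a newly reached vertex
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in self.out[u]:
                    if not self.reached[v]:
                        self.reached[v] = True
                        queue.append(v)

        def insert_edge(self, u, v):
            self.out[u].append(v)
            if self.reached[u] and not self.reached[v]:
                self.reached[v] = True
                self._expand(v)

        def reachable(self, v):                  # query: does a path s -> v exist?
            return self.reached[v]

    r = IncrementalReachability(4, s=0)
    r.insert_edge(1, 2)              # 1 is not yet reachable from 0
    r.insert_edge(0, 1)              # now 1 and, transitively, 2 become reachable
    print(r.reachable(2), r.reachable(3))  # True False
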
All the known partially-dynamic single-source reachability algorithms work in the explicit maintenance model. As mentioned before, for incremental single-source reachability an optimal (in the amortized sense) algorithm is known. Interestingly, the first algorithms with O(mn^{1-ε}) total update time (where ε > 0) have been obtained only recently [33, 34]. The best known algorithm to date has Õ(m√n) total update time and is due to Chechik et al. [13].

Dynamic reachability has also been studied for planar graphs. Diks and Sankowski [18] showed a fully-dynamic transitive closure algorithm with Õ(√n) update and query time, which works under the assumption that the graph is plane-embedded and the inserted edges can only connect vertices incident to a common face. Łącki [48] showed that one can maintain the strongly connected components of a planar graph under edge deletions in O(n√n) total time. By known reductions, it follows that there exists a decremental single-source reachability algorithm for planar graphs with O(n√n) total update time. Note that this bound matches the recent best known bound for general graphs [13] up to polylogarithmic factors.

    Max s,t-Flow Oracles and Negative Cycle Detection in Planar Digraphs

    We study the maximum s,t-flow oracle problem on planar directed graphs, where the goal is to design a data structure answering max s,t-flow value (or equivalently, min s,t-cut value) queries for arbitrary source-target pairs (s,t). For the case of polynomially bounded integer edge capacities, we describe an exact max s,t-flow oracle with truly subquadratic space and preprocessing, and sublinear query time. Moreover, if (1-ε)-approximate answers are acceptable, we obtain a static oracle with near-linear preprocessing and Õ(n^{3/4}) query time, and a dynamic oracle supporting edge capacity updates and queries in Õ(n^{6/7}) worst-case time. To the best of our knowledge, for directed planar graphs, no (approximate) max s,t-flow oracles have been described even in the unweighted case, and only trivial tradeoffs involving either no preprocessing or precomputing all the n^2 possible answers have been known. One key technical tool we develop along the way is a sublinear (in the number of edges) algorithm for finding a negative cycle in so-called dense distance graphs. By plugging it into earlier frameworks, we obtain improved bounds for other fundamental problems on planar digraphs. In particular, we show: (1) a deterministic O(n log(nC)) time algorithm for negatively-weighted SSSP in planar digraphs with integer edge weights at least -C, which improves upon the previously known bounds in the important case of weights polynomial in n, and (2) an improved O(n log n) bound on finding a perfect matching in a bipartite planar graph. Comment: Extended abstract to appear in SODA 202
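
    For contrast with the oracles in the abstract, here is a sketch of the two trivial extremes it mentions: no preprocessing (one max-flow computation per query) versus precomputing all n^2 possible answers. The sketch assumes the networkx library and illustrative class names; it is a baseline, not the paper's construction.

        # The two trivial max s,t-flow "oracles": (a) no preprocessing, one
        # max-flow computation per query; (b) precompute all ~n^2 answers.
        import itertools
        import networkx as nx

        class NoPreprocessingOracle:
            def __init__(self, G):               # G: nx.DiGraph with 'capacity' attributes
                self.G = G

            def query(self, s, t):
                return nx.maximum_flow_value(self.G, s, t, capacity="capacity")

        class AllPairsOracle:
            def __init__(self, G):               # quadratic space, heavy preprocessing
                self.table = {
                    (s, t): nx.maximum_flow_value(G, s, t, capacity="capacity")
                    for s, t in itertools.permutations(G.nodes, 2)
                }

            def query(self, s, t):               # O(1) lookup
                return self.table[(s, t)]

        G = nx.DiGraph()
        G.add_edge("a", "b", capacity=3)
        G.add_edge("b", "c", capacity=2)
        print(NoPreprocessingOracle(G).query("a", "c"))  # 2
        print(AllPairsOracle(G).query("a", "c"))         # 2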

    Fine-Grained Complexity: Exploring Reductions and their Properties

    Algorithmic design has been one of the main subjects of interest in computer science. While very effective in some areas, this approach has run into practical dead ends that have hampered the progress of the field, and classical computational complexity techniques have not been able to bypass these obstacles either. Understanding the hardness of each problem is not trivial. Fine-grained complexity provides new perspectives on classic problems, resulting in solid links between famous conjectures in complexity theory and algorithm design. It serves as a tool to prove conditional lower bounds for problems with polynomial time complexity, a field that had seen very little progress until now. Popular conjectures such as SETH, k-OV, 3SUM, and APSP imply many bounds that have yet to be proven using classic techniques, and provide a new understanding of the structure and entropy of problems in general. The aim of this thesis is to contribute towards solidifying the framework for reductions from each conjecture, and to explore the structural differences between the problems in each case.
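
    As a concrete example of the problems these conjectures address, consider Orthogonal Vectors (OV): given two sets of n Boolean vectors in d dimensions, decide whether some pair is orthogonal. The OV hypothesis, implied by SETH, asserts that for dimension d = ω(log n) no algorithm beats the quadratic brute force below by a polynomial factor. The sketch is a generic illustration, not code from this thesis.

        # Brute-force Orthogonal Vectors: O(n^2 * d) time. OV-based reductions
        # transfer this conjectured hardness to many polynomial-time problems.
        def orthogonal_pair(A, B):
            for a in A:
                for b in B:
                    if all(x * y == 0 for x, y in zip(a, b)):
                        return (a, b)            # witness pair
            return None

        A = [(1, 0, 1), (0, 1, 1)]
        B = [(1, 1, 0), (0, 1, 0)]
        print(orthogonal_pair(A, B))  # ((1, 0, 1), (0, 1, 0))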

    Fully Dynamic Effective Resistances

    In this paper we consider the fully-dynamic All-Pairs Effective Resistance problem, where the goal is to maintain effective resistances on a graph G among any pair of query vertices under an intermixed sequence of edge insertions and deletions in G. The effective resistance between a pair of vertices is a physics-motivated quantity that encapsulates both the congestion and the dilation of a flow. It is directly related to random walks, and it has been instrumental in recent work on designing fast algorithms for combinatorial optimization problems, graph sparsification, and network science. We give a data structure that maintains (1+ε)-approximations to all-pairs effective resistances of a fully-dynamic unweighted, undirected multigraph G with Õ(m^{4/5} ε^{-4}) expected amortized update and query time, against an oblivious adversary. Key to our result is the maintenance of a dynamic Schur complement (also known as a vertex resistance sparsifier) onto a set of terminal vertices of our choice. This maintenance is obtained (1) by interpreting the Schur complement as a sum of random walks and (2) by randomly picking the vertex subset into which the sparsifier is constructed. We can then show that each update in the graph affects a small number of such walks, which in turn leads to our sublinear update time. We believe that this local representation of vertex sparsifiers may be of independent interest.
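
    For reference, the static quantity being maintained has a closed form: the effective resistance between u and v equals (e_u - e_v)^T L^+ (e_u - e_v), where L is the graph Laplacian and L^+ its Moore-Penrose pseudoinverse. The sketch below (assuming numpy; a static baseline, not the paper's dynamic data structure) computes it directly.

        # Static effective resistance via the Laplacian pseudoinverse:
        # R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v).
        import numpy as np

        def effective_resistance(n, edges, u, v):
            L = np.zeros((n, n))
            for a, b in edges:                   # unweighted, undirected multigraph
                L[a, a] += 1; L[b, b] += 1
                L[a, b] -= 1; L[b, a] -= 1
            Lp = np.linalg.pinv(L)               # Moore-Penrose pseudoinverse
            chi = np.zeros(n)
            chi[u], chi[v] = 1.0, -1.0
            return chi @ Lp @ chi

        # Path 0-1-2: two unit resistors in series, so R_eff(0, 2) = 2.
        print(effective_resistance(3, [(0, 1), (1, 2)], 0, 2))  # ~2.0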

    Combinatorial Optimization

    Combinatorial Optimization is a very active field that benefits from bringing together ideas from different areas, e.g., graph theory and combinatorics, matroids and submodularity, connectivity and network flows, approximation algorithms and mathematical programming, discrete and computational geometry, discrete and continuous problems, algebraic and geometric methods, and applications. We continued the long tradition of triennial Oberwolfach workshops, bringing together the best researchers from the above areas, discovering new connections, and establishing new international collaborations while deepening existing ones