
    Reliable Hubs for Partially-Dynamic All-Pairs Shortest Paths in Directed Graphs

    We give new partially-dynamic algorithms for the all-pairs shortest paths problem in weighted directed graphs. Most importantly, we give a new deterministic incremental algorithm for the problem that handles updates in Õ(m n^{4/3} log W / ε) total time (where the edge weights are from [1, W]) and explicitly maintains a (1+ε)-approximate distance matrix. For a fixed ε > 0, this is the first deterministic partially-dynamic algorithm for all-pairs shortest paths in directed graphs whose update time is o(n^2) regardless of the number of edges. Furthermore, we also show how to improve the state-of-the-art partially-dynamic randomized algorithms for all-pairs shortest paths [Baswana et al., STOC'02; Bernstein, STOC'13] from Monte Carlo randomized to Las Vegas randomized without increasing the running time bounds (with respect to the Õ(·) notation). Our results are obtained by giving new algorithms for the problem of dynamically maintaining hubs, that is, a set of Õ(n/d) vertices which hit a shortest path between each pair of vertices, provided it has hop-length Ω(d). We give new subquadratic deterministic and Las Vegas algorithms for the maintenance of hubs under either edge insertions or deletions.
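
    The hub sets above rest on a standard hitting-set fact: a uniform sample of roughly (n/d)·ln n vertices hits every path of hop-length Ω(d) with high probability; the paper's contribution is maintaining such hubs deterministically or in a Las Vegas fashion under updates. Below is a minimal sketch of the one-shot Monte Carlo sampling step only; the function name and the constant c are illustrative, not from the paper.

```python
import math
import random

def sample_hubs(n, d, c=2.0, rng=random):
    """Sample each of the n vertices independently with probability
    ~ c*ln(n)/d.  A fixed set of d vertices (e.g., the vertices of a
    shortest path with d hops) avoids the sample with probability at
    most (1 - p)^d <= n^(-c), so a union bound over all n^2 vertex
    pairs leaves a failure probability of about n^(2 - c)."""
    p = min(1.0, c * math.log(max(n, 2)) / d)
    return {v for v in range(n) if rng.random() < p}

# Expected sample size is O~(n/d), e.g., roughly 1840 hubs here:
hubs = sample_hubs(10_000, 100)
```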

    Optimal Decremental Connectivity in Non-Sparse Graphs


    Data Structures and Dynamic Algorithms for Planar Graphs

Obtaining provably efficient algorithms for the most basic graph problems, like finding (shortest) paths or computing maximum matchings, fast enough to handle real-world-scale graphs (i.e., graphs consisting of millions of vertices and edges), is a very challenging task. For example, in the very general regime of strongly-polynomial algorithms (see, e.g., [65]), we still do not know how to compute shortest paths in a real-weighted sparse directed graph significantly faster than in quadratic time, using the classical, but somewhat simple-minded, Bellman-Ford method. One way to circumvent this problem is to consider more restricted computation models for graph algorithms. If, for example, we restrict ourselves to graphs with integral edge weights, we can improve upon the Bellman-Ford algorithm [14, 31]. Although these results are very deep algorithmically, their theoretical efficiency is still very far from the trivial linear lower bound, which remains the only known lower bound on the time complexity of the negatively-weighted shortest path problem.

Another approach is to develop algorithms specialized for certain graph classes that appear in practice. Planar graphs constitute one of the most important and well-studied such classes. Many real-world networks can be drawn on a plane with no or few edge crossings. Examples include not-very-complex road networks and graphs considered in the domain of VLSI design. Complex road networks, although far from being planar, share with planar graphs some useful properties, like the existence of small separators [20]. Special cases of planar graphs, such as grids, appear often in the area of image processing (e.g., [7]). And indeed, if we restrict ourselves to planar graphs, many of the classical polynomial-time graph problems, in particular computing shortest paths [35, 58] and maximum flows [4, 5, 21] in real-weighted graphs, can be solved either optimally or in nearly-linear time. The very rich combinatorial structure of planar graphs often allows breaking barriers that appear in the respective problems for general graphs, either by using techniques from computational geometry (e.g., [27]) or by applying sophisticated data structures, such as dynamic trees [4, 10, 21, 66].

In this thesis, we focus on the data-structural aspect of planar graph algorithmics. By this we mean that, rather than concentrating on particular planar graph problems, we study more abstract, "low-level" problems. Efficient algorithms for these problems can be used in a black-box manner to design algorithms for multiple specific problems at once. Such an approach allows us to improve upon many known complexity upper bounds for different planar graph problems simultaneously, without going into the specifics of these problems. We also study dynamic algorithms for planar graphs, i.e., algorithms that maintain certain information about a dynamically changing graph (such as "is the graph connected?") much more efficiently than by recomputing this information from scratch after each update. We consider the edge-update model, where the input graph can be modified only by adding or removing single edges. A graph algorithm is called fully dynamic if it supports both edge insertions and edge deletions, and partially dynamic if it supports either only edge insertions (we then call it incremental) or only edge deletions (we then call it decremental).
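
For context, here is the quadratic-time Bellman-Ford baseline mentioned at the start of this abstract; a minimal sketch for real edge weights, taking the graph as an edge list.

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths with real (possibly negative) edge
    weights.  edges is a list of (u, v, w) triples.  Running time is
    O(n * m), i.e., quadratic already on sparse graphs where m = O(n)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):                  # n-1 rounds of relaxation
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                     # already stable: done early
            break
    for u, v, w in edges:                   # a further improvement means a
        if dist[u] + w < dist[v]:           # negative cycle is reachable
            raise ValueError("negative cycle reachable from source")
    return dist
```
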
When designing dynamic graph algorithms, we care about the update time, i.e., the time needed by the algorithm to adapt to an elementary change of the graph, and the query time, i.e., the time needed by the algorithm to recompute the requested portion of the maintained information. Sometimes, especially in partially dynamic settings, it is more convenient to measure the total update time, i.e., the total time needed by the algorithm to process any possible sequence of updates. For some dynamic problems, it is worth focusing on a more restricted explicit maintenance model, where the entire maintained information is explicitly updated (so that the user is notified about the update) after each change. In this model the query procedure is trivial, and thus we only care about the update time. Note that there is actually no clear distinction between dynamic graph algorithms and graph data structures, since dynamic algorithms are often used as black boxes to obtain efficient static algorithms (e.g., [26]). For example, the incremental connectivity problem, where one needs to process queries about the existence of a path between given vertices while the input undirected graph undergoes edge insertions, is actually equivalent to the disjoint-set data structure problem, also called the union-find data structure problem (see, e.g., [15]).

We concentrate mostly on the decremental model and obtain very efficient decremental algorithms for problems on unweighted planar graphs related to reachability and connectivity. We also apply our dynamic algorithms to static problems, thus confirming once again the data-structural character of these results. In the following, let G = (V, E) denote the input planar graph with n vertices. For clarity of this summary, assume G is a simple graph. Then, by planarity, it has O(n) edges. When we talk about general graphs, we denote by m the number of edges of the graph.

Contracting a Planar Graph

The first part of the thesis is devoted to the data-structural aspect of contracting edges in planar graphs. Edge contraction is one of the fundamental graph operations: given an undirected graph and an edge e, contracting e consists in removing it from the graph and merging its endpoints. The notion of contraction has been used to describe a number of prominent graph algorithms, including Edmonds' algorithm for computing maximum matchings [19] and Karger's minimum cut algorithm [44]. Edge contractions are of particular interest in planar graphs, as a number of planar graph properties can be described using contractions. For example, it is well known that a graph is planar precisely when it cannot be transformed into K5 or K3,3 by contracting edges, or removing vertices or edges (see, e.g., [17]). Moreover, contracting an edge preserves planarity. We would like to have at our disposal a data structure that performs contractions on the input planar graph and still provides access to the most basic information about the graph, such as the sizes of the neighbor sets of individual vertices and the adjacency relation. While the contraction operation is conceptually very simple, its efficient implementation is challenging: it is not clear how to represent individual vertices' adjacency lists so that adjacency-list merges, adjacency queries, and neighborhood-size queries are all efficient. By using standard data structures (e.g., balanced binary search trees), one can maintain the adjacency lists of a graph subject to contractions in polylogarithmic amortized time; a sketch of this baseline is given below.
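
A sketch of that polylogarithmic baseline, using hash-based adjacency sets and the merge-smaller-into-larger rule. The class and method names are illustrative; this is the simple approach the thesis improves on, not the thesis data structure itself.

```python
class ContractibleGraph:
    """Naive contraction baseline: hash-based adjacency sets, merging the
    smaller adjacency set into the larger.  The adjacency-set work over
    all contractions totals O(m log n) expected operations; the
    relabeling of merged vertices is kept naive here for clarity."""

    def __init__(self, n, edges):
        self.adj = [set() for _ in range(n)]
        self.alias = list(range(n))             # current name of each original vertex
        self.members = [[v] for v in range(n)]  # original vertices merged into v
        for u, v in edges:
            self.adj[u].add(v)
            self.adj[v].add(u)

    def contract(self, u, v):
        u, v = self.alias[u], self.alias[v]
        if u == v:
            return                              # the edge has become a self-loop
        if len(self.adj[u]) < len(self.adj[v]):
            u, v = v, u                         # merge the smaller side (v) into u
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        for w in self.adj[v]:
            self.adj[w].discard(v)              # redirect w's edge from v to u;
            self.adj[w].add(u)                  # parallel edges collapse in the set
            self.adj[u].add(w)
        self.adj[v].clear()
        for x in self.members[v]:               # relabel the merged original vertices
            self.alias[x] = u
        self.members[u].extend(self.members[v])
        self.members[v] = []

    def adjacent(self, x, y):
        """O(1) expected adjacency query on the contracted graph."""
        return self.alias[y] in self.adj[self.alias[x]]
```
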
However, in many planar graph algorithms this becomes a bottleneck. As an example, consider the problem of computing a 5-coloring of a planar graph. There exists a very simple algorithm based on contractions [53] that relies only on the folklore fact that a planar graph has a vertex of degree at most 5. However, linear-time algorithms solving this problem use more involved planar graph properties [23, 53, 60]. For example, the algorithm by Matula et al. [53] uses the fact that every planar graph has either a vertex of degree at most 4 or a vertex of degree 5 adjacent to at least four vertices, each having degree at most 11. Similarly, although there exists a very simple algorithm for computing a minimum spanning tree of a planar graph based on edge contractions, various different methods have been used to implement it efficiently [23, 51, 52].

The problem of maintaining a planar graph under contractions has been studied before. In their book, Klein and Mozes [46] showed that there exists a (slightly more general) data structure maintaining a planar graph under edge contractions and deletions and answering adjacency queries in O(1) worst-case time; the update time is O(log n). This result is based on the work of Brodal and Fagerberg [8], who showed how to maintain a bounded-outdegree orientation of a dynamic planar graph so that edge-set updates are supported in O(log n) amortized time. Gustedt [32] gave an optimal solution to the union-find problem in the case when, at any time, the actual subsets form disjoint and connected subgraphs of a given planar graph G. In other words, in this problem the allowed unions correspond to the edges of a planar graph, and the execution of a union operation can be seen as a contraction of the respective edge.

Our Results

We show a data structure that can maintain a planar graph subject to edge contractions in linear total time, assuming the standard word-RAM model with word size Ω(log n). It can report groups of parallel edges and self-loops that emerge, supports constant-time adjacency queries, and maintains the neighbor lists and degrees explicitly. The data structure can be used as a black box to implement planar graph algorithms that use contractions. As an example, our data structure can be used to give clean and conceptually simple linear-time implementations of algorithms for computing a 5-coloring or a minimum spanning tree. More importantly, by using our data structure, we give improved algorithms for a few problems in planar graphs. In particular, we obtain optimal algorithms for decremental 2-edge-connectivity (see, e.g., [30]), finding a unique perfect matching [26], and computing maximal 3-edge-connected subgraphs [12].

In order to obtain our result, we first partition the graph into small pieces of roughly logarithmic size (using so-called r-divisions [24]). Then we solve our problem recursively for each of the pieces, and separately, using a simple-minded approach, for the subgraph induced by the o(n) vertices contained in multiple pieces (the so-called boundary vertices). Such an approach proved successful in obtaining optimal data structures for the planar union-find problem [32] and decremental connectivity [50]. In fact, our data-structural problem can be seen as a generalization of the former problem; a textbook formulation of union-find is sketched below for reference.
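
For reference, the union-find problem that our problem generalizes, in its textbook form with union by rank and path halving.

```python
class UnionFind:
    """Textbook disjoint-set structure: union by rank plus path halving,
    giving near-constant (inverse-Ackermann) amortized time per operation.
    An edge insertion corresponds to union; a connectivity query compares roots."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, u, v):                  # process an edge insertion
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru                # attach the shallower tree
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1

    def connected(self, u, v):              # incremental connectivity query
        return self.find(u) == self.find(v)
```
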
However, maintaining the status of each edge e of the initial graph G (i.e., whether e has become a self-loop or a parallel edge) subject to edge contractions, and supporting constant-time adjacency queries without resorting to randomization, turn out to be serious technical challenges. Overcoming these difficulties is our main contribution in this part of the thesis.

Decremental Reachability

The second part of this thesis is devoted to dynamic reachability problems in planar graphs. In the dynamic reachability problem, we are given a (directed) graph G subject to edge updates, and the goal is to design a data structure that allows answering queries about the existence of a path between a pair of query vertices u, v ∈ V. Two variants of dynamic reachability are studied most often. In the all-pairs variant, the data structure has to support queries between arbitrary pairs of vertices. This variant is also called the dynamic transitive closure problem, since a path u → v exists in G if and only if uv is an edge of the transitive closure of G. In the single-source reachability problem, a source vertex s ∈ V is fixed from the very beginning, and the only allowed queries are about the existence of a path s → v, where v ∈ V. If we work with undirected graphs, the dynamic reachability problem is called the dynamic connectivity problem; note that in the undirected case a path u → v exists in G if and only if a path v → u exists in G.

State of the Art

Dynamic reachability in general directed graphs turns out to be a very challenging problem. First of all, it is computationally much more demanding than its undirected counterpart. For undirected graphs, fully-dynamic all-pairs algorithms with polylogarithmic amortized update and query bounds are known [36, 38, 71]. For directed graphs, on the other hand, in most settings (either single-source or all-pairs; either incremental, decremental, or fully dynamic) the best known algorithm has either polynomial update time or polynomial query time. The only exception is the incremental single-source reachability problem, for which a trivial extension of depth-first search [68] achieves O(1) amortized update time (sketched below). One of the possible reasons behind such a big gap between the undirected and directed settings is that one needs only linear time to compute the connected components of an undirected graph, and thus there exists an O(n)-space static data structure that can answer connectivity queries in undirected graphs in O(1) time. On the other hand, the best known algorithm for computing the transitive closure runs in Õ(min(n^ω, nm)) = Õ(n^2)¹ time [11, 59].

So far, the best known bounds for fully-dynamic reachability are as follows. For dynamic transitive closure, there exist a number of algorithms with O(n^2) update time and O(1) query time [16, 61, 64]; these algorithms, in fact, maintain the transitive closure explicitly. There also exist a few fully-dynamic algorithms that are better for sparse graphs, each of which has Ω(n) amortized update time and query time that is o(n) but still polynomial in n [62, 63, 64]. For the single-source variant, the only known non-trivial (i.e., other than recompute-from-scratch) algorithm has O(n^{1.53}) update time and O(1) query time [64]. Algorithms with O(nm) total update time are known for both incremental [39] and decremental [48, 62] transitive closure. Note that for sparse graphs this bound is only polylogarithmic factors away from the best known static transitive closure upper bound [11].
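
The trivial-but-optimal incremental single-source reachability algorithm mentioned above can be sketched as follows: keep the set of vertices reachable from s, and on each edge insertion search only the newly reachable region. Class and method names are illustrative.

```python
from collections import deque

class IncrementalSSR:
    """Incremental single-source reachability in O(1) amortized update time:
    a vertex enters `reach` at most once, so over the whole update sequence
    every edge is scanned O(1) times."""

    def __init__(self, n, s):
        self.adj = [[] for _ in range(n)]
        self.reach = [False] * n
        self.reach[s] = True

    def insert_edge(self, u, v):
        self.adj[u].append(v)
        if self.reach[u] and not self.reach[v]:
            self.reach[v] = True
            q = deque([v])                  # resume the search only from the
            while q:                        # newly reachable region
                x = q.popleft()
                for y in self.adj[x]:
                    if not self.reach[y]:
                        self.reach[y] = True
                        q.append(y)

    def reachable(self, v):                 # query: does a path s -> v exist?
        return self.reach[v]
```
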
All the known partially-dynamic single-source reachability algorithms work in the explicit maintenance model. As mentioned before, for incremental single-source reachability, an optimal (in the amortized sense) algorithm is known. Interestingly, the first algorithms with O(mn^{1-ε}) total update time (for some ε > 0) have been obtained only recently [33, 34]. The best known algorithm to date has Õ(m√n) total update time and is due to Chechik et al. [13]. Dynamic reachability has also been previously studied for planar graphs. Diks and Sankowski [18] showed a fully-dynamic transitive closure algorithm with Õ(√n) update and query times, which works under the assumption that the graph is plane-embedded and the inserted edges can only connect vertices sharing an adjacent face. Łącki [48] showed that one can maintain the strongly connected components of a planar graph under edge deletions in O(n√n) total time. By known reductions, it follows that there exists a decremental single-source reachability algorithm for planar graphs with O(n√n) total update time. Note that this bound matches the recent best known bound for general graphs [13] up to polylogarithmic factors.

¹ We denote by Õ(f(n)) the order O(f(n) · polylog n).

    On-Line File Caching

    In the on-line file-caching problem, the input is a sequence of requests for files, given on-line (one at a time). Each file has a non-negative size and a non-negative retrieval cost. The problem is to decide which files to keep in a fixed-size cache so as to minimize the sum of the retrieval costs for files that are not in the cache when requested. The problem arises in web caching by browsers and by proxies. This paper describes a natural generalization of LRU called Landlord and gives an analysis showing that it has an optimal performance guarantee (among deterministic on-line algorithms). The paper also gives an analysis of the algorithm in a so-called "loosely" competitive model, showing that on a "typical" cache size, either the performance guarantee is O(1) or the total retrieval cost is insignificant. Comment: ACM-SIAM Symposium on Discrete Algorithms (1998).
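
    A minimal sketch of the Landlord scheme as described in the abstract, assuming positive file sizes: every cached file holds credit, on a fault all cached files pay rent proportionally to their sizes until enough space frees up, and the requested file enters with credit equal to its retrieval cost. (The algorithm allows resetting a hit file's credit to anything up to its cost; this sketch resets it fully.)

```python
def landlord(requests, size, cost, capacity):
    """Landlord file caching.  `size` and `cost` map each file to a positive
    size and a non-negative retrieval cost; returns the total cost paid.
    On a fault, every cached file g pays rent delta * size[g], where delta
    is the largest rate no tenant can overdraw; broke tenants are evicted."""
    cache, credit, used, total = set(), {}, 0, 0.0
    for f in requests:
        if f in cache:
            credit[f] = cost[f]                  # refresh credit on a hit
            continue
        total += cost[f]                         # fault: pay the retrieval cost
        if size[f] > capacity:
            continue                             # too big to cache; just retrieve
        while used + size[f] > capacity:
            delta = min(credit[g] / size[g] for g in cache)
            for g in list(cache):
                credit[g] -= delta * size[g]     # rent proportional to size
                if credit[g] <= 1e-9 * cost[g]:  # numerically zero credit
                    cache.remove(g)              # evict the broke tenant
                    del credit[g]
                    used -= size[g]
        cache.add(f)
        credit[f] = cost[f]                      # enter with full credit
        used += size[f]
    return total
```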

    Recent Advances in Fully Dynamic Graph Algorithms

    In recent years, significant advances have been made in the design and analysis of fully dynamic algorithms. However, these theoretical results have received very little attention from the practical perspective. Few of the algorithms have been implemented and tested on real datasets, and their practical potential is far from understood. Here, we present a quick reference guide to recent engineering and theory results in the area of fully dynamic graph algorithms.

    Near-Quadratic Lower Bounds for Two-Pass Graph Streaming Algorithms

    We prove that any two-pass graph streaming algorithm for the s-t reachability problem in n-vertex directed graphs requires near-quadratic space of n^{2-o(1)} bits. As a corollary, we also obtain near-quadratic space lower bounds for several other fundamental problems, including maximum bipartite matching and (approximate) shortest path in undirected graphs. Our results collectively imply that a wide range of graph problems admit essentially no non-trivial streaming algorithm even when two passes over the input are allowed. Prior to our work, such impossibility results were only known for single-pass streaming algorithms, and the best two-pass lower bounds only ruled out o(n^{7/6})-space algorithms, leaving open a large gap between (trivial) upper bounds and lower bounds.

    Combinatorial Auctions Do Need Modest Interaction

    We study the necessity of interaction for obtaining efficient allocations in subadditive combinatorial auctions. This problem was originally introduced by Dobzinski, Nisan, and Oren (STOC'14) as the following simple market scenario: m items are to be allocated among n bidders in a distributed setting where bidders' valuations are private, and hence communication is needed to obtain an efficient allocation. The communication happens in rounds: in each round, each bidder, simultaneously with the others, broadcasts a message to all parties involved, and the central planner computes an allocation based solely on the communicated messages. Dobzinski et al. showed that no non-interactive (1-round) protocol with polynomial communication (in the number of items and bidders) can achieve an approximation ratio better than Ω(m^{1/4}), while for any r ≥ 1, there exist r-round protocols that achieve an Õ(r · m^{1/(r+1)}) approximation with polynomial communication; in particular, O(log m) rounds of interaction suffice to obtain an (almost) efficient allocation. A natural question at this point is to identify the "right" level of interaction (i.e., number of rounds) necessary to obtain an efficient allocation. In this paper, we resolve this question by providing an almost tight round-approximation tradeoff: we show that for any r ≥ 1, any r-round protocol that uses polynomial communication can only approximate the social welfare up to a factor of Ω((1/r) · m^{1/(2r+1)}). This in particular implies that Ω(log m / log log m) rounds of interaction are necessary for obtaining any efficient allocation in these markets. Our work builds on the recent multi-party round-elimination technique of Alon, Nisan, Raz, and Weinstein (FOCS'15) and settles an open question posed by Dobzinski et al. and Alon et al.

    On-Line Paging against Adversarially Biased Random Inputs

    In evaluating an algorithm, worst-case analysis can be overly pessimistic, and average-case analysis can be overly optimistic. An intermediate approach is to show that an algorithm does well on a broad class of input distributions. Koutsoupias and Papadimitriou recently analyzed the least-recently-used (LRU) paging strategy in this manner, analyzing its performance on an input sequence generated by a so-called diffuse adversary: one that must choose each request probabilistically so that no page is chosen with probability more than some fixed epsilon > 0. They showed that LRU achieves the optimal competitive ratio (for deterministic on-line algorithms), but they did not determine the actual ratio. In this paper we estimate the optimal ratios to within roughly a factor of two, for both deterministic strategies (e.g., least-recently-used and first-in-first-out) and randomized strategies. Around the threshold epsilon ~ 1/k (where k is the cache size), the optimal ratios are both Theta(ln k). Below the threshold, the ratios tend rapidly to O(1). Above the threshold, the ratio is unchanged for randomized strategies but tends rapidly to Theta(k) for deterministic ones. We also give an alternate proof of the optimality of LRU. Comment: conference version appeared in SODA '98 as "Bounding the Diffuse Adversary".
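
    For reference, the deterministic LRU strategy analyzed in the abstract; a minimal fault-counting sketch for uniform page sizes and cache size k.

```python
from collections import OrderedDict

def lru_faults(requests, k):
    """Count the page faults LRU incurs on a request sequence with cache size k."""
    cache = OrderedDict()                   # keys in recency order, oldest first
    faults = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)            # p becomes the most recently used
        else:
            faults += 1
            if len(cache) >= k:
                cache.popitem(last=False)   # evict the least recently used page
            cache[p] = True
    return faults
```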