8,592 research outputs found

    Scalable Routing Easy as PIE: a Practical Isometric Embedding Protocol (Technical Report)

    We present PIE, a scalable routing scheme that achieves 100% packet delivery and low path stretch. It is easy to implement in a distributed fashion and works well when costs are associated with links. Scalability is achieved by using virtual coordinates in a space of concise dimensionality, which enables greedy routing based only on local knowledge. PIE is a general routing scheme, meaning that it works on any graph. We focus, however, on the Internet, where routing scalability is an urgent concern. We show analytically and by simulation that the scheme scales extremely well on Internet-like graphs. In addition, its geometric nature allows it to react efficiently to topological changes or failures by finding new paths in the network at no cost, yielding better delivery ratios than standard algorithms. The proposed routing scheme needs an amount of memory polylogarithmic in the size of the network and requires only local communication between the nodes. Although each node constructs its coordinates and routes packets locally, the path stretch remains extremely low, even lower than for centralized or less scalable state-of-the-art algorithms: PIE always finds short paths and often enough finds the shortest paths.
    Comment: This work has been previously published in IEEE ICNP'11. The present document contains an additional optional mechanism, presented in Section III-D, to further improve performance by using route asymmetry. It also contains new simulation results.
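
    The forwarding rule described in this abstract (each node knows only its neighbors' virtual coordinates and always hands the packet to the neighbor closest to the destination in the embedding) can be sketched as below. This is a hedged, generic illustration only: the coordinates are hand-picked placeholders rather than PIE's isometric tree embedding, and the function names are invented for the example.

        # Hedged sketch of generic greedy routing on virtual coordinates.
        # The coordinates below are hand-picked placeholders, NOT the isometric
        # embedding that PIE constructs; the point is only the forwarding rule:
        # always hand the packet to the neighbor closest to the destination.
        import math

        def euclidean(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        def greedy_route(adj, coords, src, dst):
            """Forward greedily from src to dst; return the path, or None on a dead end."""
            path = [src]
            current = src
            while current != dst:
                # Pick the neighbor whose coordinates are closest to the destination.
                nxt = min(adj[current], key=lambda v: euclidean(coords[v], coords[dst]))
                if euclidean(coords[nxt], coords[dst]) >= euclidean(coords[current], coords[dst]):
                    return None  # local minimum: generic greedy routing can get stuck here
                path.append(nxt)
                current = nxt
            return path

        # Toy example: a 4-cycle with hand-picked 2-D coordinates.
        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
        coords = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
        print(greedy_route(adj, coords, 0, 2))  # -> [0, 1, 2]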

    Fault-Tolerant Spanners: Better and Simpler

    A natural requirement of many distributed structures is fault-tolerance: after some failures, whatever remains from the structure should still be effective for whatever remains from the network. In this paper we examine spanners of general graphs that are tolerant to vertex failures, and significantly improve their dependence on the number of faults $r$, for all stretch bounds. For stretch $k \geq 3$ we design a simple transformation that converts every $k$-spanner construction with at most $f(n)$ edges into an $r$-fault-tolerant $k$-spanner construction with at most $O(r^3 \log n) \cdot f(2n/r)$ edges. Applying this to standard greedy spanner constructions gives $r$-fault-tolerant $k$-spanners with $\tilde O(r^{2} n^{1+\frac{2}{k+1}})$ edges. The previous construction by Chechik, Langberg, Peleg, and Roditty [STOC 2009] depends similarly on $n$ but exponentially on $r$ (approximately like $k^r$). For the case $k=2$ and unit-length edges, an $O(r \log n)$-approximation algorithm is known from recent work of Dinitz and Krauthgamer [arXiv 2010], where several spanner results are obtained using a common approach of rounding a natural flow-based linear programming relaxation. Here we use a different (stronger) LP relaxation and improve the approximation ratio to $O(\log n)$, which is, notably, independent of the number of faults $r$. We further strengthen this bound in terms of the maximum degree by using the Lovász Local Lemma. Finally, we show that most of our constructions are inherently local by designing equivalent distributed algorithms in the LOCAL model of distributed computation.
    Comment: 17 pages
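
    The transformation described above turns a plain spanner subroutine into a fault-tolerant one. A hedged sketch of the general "sample vertex subsets, build a spanner on each, take the union" recipe is given below; the sampling probability, the number of rounds, and the greedy subroutine are illustrative placeholders and are not claimed to match the paper's exact parameters or analysis.

        # Hedged sketch: build k-spanners on many random vertex subsets and union them.
        # Parameters (sampling probability, number of rounds) are placeholders, not the
        # paper's O(r^3 log n) bound; the subroutine is the classic unweighted greedy one.
        import random
        from collections import deque

        def bfs_dist(adj, src, dst, cutoff):
            """Unweighted distance from src to dst, or None if it exceeds cutoff."""
            dist = {src: 0}
            q = deque([src])
            while q:
                u = q.popleft()
                if u == dst:
                    return dist[u]
                if dist[u] >= cutoff:
                    continue
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return None

        def greedy_spanner(nodes, edges, k):
            """Classic greedy k-spanner of an unweighted graph: keep (u, v) only if
            the spanner built so far has no u-v path of length at most k."""
            adj = {v: set() for v in nodes}
            kept = []
            for u, v in edges:
                if bfs_dist(adj, u, v, k) is None:
                    adj[u].add(v)
                    adj[v].add(u)
                    kept.append((u, v))
            return kept

        def sampled_union_spanner(nodes, edges, k, r, rounds):
            """Union of k-spanners built on random vertex subsets (illustrative only)."""
            union = set()
            for _ in range(rounds):
                keep = {v for v in nodes if random.random() < 1.0 - 1.0 / r}
                sub_edges = [(u, v) for u, v in edges if u in keep and v in keep]
                union.update(greedy_spanner(keep, sub_edges, k))
            return sorted(union)

        # Toy usage: a 5-cycle plus one chord.
        nodes = [0, 1, 2, 3, 4]
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
        print(sampled_union_spanner(nodes, edges, k=3, r=2, rounds=5))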

    Low Diameter Graph Decompositions by Approximate Distance Computation

    In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low diameter graph decomposition with small edge cutting probabilities, with all existing techniques heavily building on single source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computations, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this poses a fundamental challenge, since the existing constructions of low diameter graph decompositions with small edge cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low diameter decompositions with small edge cutting probabilities which replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is "used" only logarithmically many times, which is of interest for capacitated problems and for simulating CONGEST algorithms on the tree into which the graph is embedded.
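
    The "clever algorithmic idea of Miller et al. (SPAA 2013)" that the abstract builds on is exponential-shift clustering. A hedged toy version is sketched below; it uses exact BFS distances, whereas the point of the paper's blurry ball growing technique is precisely to get comparable guarantees from approximate distances, which this sketch does not attempt.

        # Hedged sketch of exponential-shift clustering (in the spirit of Miller et al.,
        # SPAA 2013): every vertex u draws a shift delta_u ~ Exp(beta), and each vertex v
        # joins the cluster of the center u minimizing dist(u, v) - delta_u.  Larger beta
        # gives smaller clusters; edges are cut with probability roughly proportional to beta.
        # This toy version computes exact SSSP from every vertex, which is exactly the
        # bottleneck the paper's technique avoids.
        import random
        from collections import deque

        def bfs(adj, src):
            dist = {src: 0}
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return dist

        def exponential_shift_clustering(adj, beta, seed=None):
            rng = random.Random(seed)
            shifts = {u: rng.expovariate(beta) for u in adj}
            dists = {u: bfs(adj, u) for u in adj}  # exact SSSP from every vertex
            cluster = {}
            for v in adj:
                # Assign v to the center with the smallest shifted distance.
                cluster[v] = min(adj, key=lambda u: dists[u].get(v, float("inf")) - shifts[u])
            return cluster

        # Toy usage: a path on six vertices.
        adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
        print(exponential_shift_clustering(adj, beta=0.5, seed=1))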

    Traveling in randomly embedded random graphs

    We consider the problem of traveling among random points in Euclidean space, when only a random fraction of the pairs are joined by traversable connections. In particular, we show a threshold for a pair of points to be connected by a geodesic of length arbitrarily close to their Euclidean distance, and analyze the minimum-length Traveling Salesperson Tour, extending the Beardwood-Halton-Hammersley theorem to this setting.
    Comment: 25 pages, 2 figures
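
    A hedged simulation sketch of the model studied here is given below: n uniform random points in the unit square, each pair joined independently with probability p, and each present edge weighted by its Euclidean length. It compares the graph geodesic from one point to a far-away point with their straight-line distance; the paper's threshold results concern how this ratio behaves as n and p vary, which the toy experiment only illustrates. All names and parameter values are invented for the example.

        # Hedged simulation of "random points, random fraction of pairs joined".
        import heapq
        import math
        import random

        def simulate(n=200, p=0.1, seed=0):
            rng = random.Random(seed)
            pts = [(rng.random(), rng.random()) for _ in range(n)]
            dist = lambda i, j: math.dist(pts[i], pts[j])
            adj = [[] for _ in range(n)]
            for i in range(n):
                for j in range(i + 1, n):
                    if rng.random() < p:  # keep a random fraction of the pairs
                        adj[i].append((j, dist(i, j)))
                        adj[j].append((i, dist(i, j)))
            # Dijkstra from vertex 0 with Euclidean edge lengths.
            d = [math.inf] * n
            d[0] = 0.0
            heap = [(0.0, 0)]
            while heap:
                du, u = heapq.heappop(heap)
                if du > d[u]:
                    continue
                for v, w in adj[u]:
                    if du + w < d[v]:
                        d[v] = du + w
                        heapq.heappush(heap, (d[v], v))
            # Compare geodesic vs. Euclidean distance for the point farthest from vertex 0.
            target = max(range(1, n), key=lambda i: dist(0, i))
            return d[target], dist(0, target)

        print(simulate())  # (geodesic length, Euclidean distance)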

    Vertex Sparsifiers: New Results from Old Techniques

    Given a capacitated graph $G = (V,E)$ and a set of terminals $K \subseteq V$, how should we produce a graph $H$ only on the terminals $K$ so that every (multicommodity) flow between the terminals in $G$ could be supported in $H$ with low congestion, and vice versa? (Such a graph $H$ is called a flow-sparsifier for $G$.) What if we want $H$ to be a "simple" graph? What if we allow $H$ to be a convex combination of simple graphs? Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC 2010], we give efficient algorithms for constructing: (a) a flow-sparsifier $H$ that maintains congestion up to a factor of $O(\log k/\log \log k)$, where $k = |K|$, (b) a convex combination of trees over the terminals $K$ that maintains congestion up to a factor of $O(\log k)$, and (c) for a planar graph $G$, a convex combination of planar graphs that maintains congestion up to a constant factor. This requires us to give a new algorithm for the 0-extension problem, the first one in which the preimages of each terminal are connected in $G$. Moreover, this result extends to minor-closed families of graphs. Our improved bounds immediately imply improved approximation guarantees for several terminal-based cut and ordering problems.
    Comment: An extended abstract appears in the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2010. Final version to appear in SIAM J. Computing.
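
    To make the object in question concrete, the sketch below only illustrates the interface of a vertex sparsifier: take a capacitated graph G and a terminal set K, and return a graph H on K alone. The naive pairwise-min-cut construction used here carries no congestion guarantee and is not the paper's $O(\log k/\log\log k)$ flow-sparsifier; the helper name is invented, and networkx is assumed to be available.

        # Purely illustrative: a graph on the terminals only, with each terminal pair
        # connected by an edge whose capacity is their minimum cut in G.  This is NOT
        # the paper's flow-sparsifier construction and has no congestion guarantee.
        from itertools import combinations
        import networkx as nx

        def naive_terminal_graph(G, terminals):
            D = G.to_directed() if not G.is_directed() else G
            H = nx.Graph()
            H.add_nodes_from(terminals)
            for s, t in combinations(terminals, 2):
                cut = nx.minimum_cut_value(D, s, t, capacity="capacity")
                H.add_edge(s, t, capacity=cut)
            return H

        # Toy usage: a unit-capacity path a - b - c - d with terminals {a, d}.
        G = nx.Graph()
        G.add_edge("a", "b", capacity=1.0)
        G.add_edge("b", "c", capacity=1.0)
        G.add_edge("c", "d", capacity=1.0)
        H = naive_terminal_graph(G, ["a", "d"])
        print(list(H.edges(data=True)))  # one terminal edge of capacity 1.0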

    On Strong Diameter Padded Decompositions

    Given a weighted graph G=(V,E,w), a partition of V is Delta-bounded if the diameter of each cluster is bounded by Delta. A distribution over Delta-bounded partitions is a beta-padded decomposition if every ball of radius gamma*Delta is contained in a single cluster with probability at least e^{-beta*gamma}. The weak diameter of a cluster C is measured w.r.t. distances in G, while the strong diameter is measured w.r.t. distances in the induced graph G[C]. The decomposition is weak/strong according to the diameter guarantee. Formerly, it was proven that K_r-free graphs admit weak decompositions with padding parameter O(r), while for strong decompositions only an O(r^2) padding parameter was known. Furthermore, for the case of a graph G for which the induced shortest-path metric d_G has doubling dimension ddim, a weak O(ddim)-padded decomposition was constructed, which is also known to be tight. For the case of strong diameter, nothing was known. We construct strong O(r)-padded decompositions for K_r-free graphs, matching the state of the art for weak decompositions. Similarly, for graphs with doubling dimension ddim we construct a strong O(ddim)-padded decomposition, which is also tight. We use this decomposition to construct an (O(ddim), O~(ddim))-sparse cover scheme for such graphs. Our new decompositions and cover have implications for approximating unique games, for the construction of light and sparse spanners, and for path-reporting distance oracles.
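
    The padding definition above can be illustrated on a toy instance: fix a Delta-bounded partition of a cycle and check which balls of radius gamma*Delta stay inside a single cluster. The sketch below does only that check; the partition is fixed by hand, whereas the paper is about how to sample such partitions so that the containment probability is at least e^{-beta*gamma}. The helper name is invented for the example.

        # Hedged illustration of the padded-decomposition definition on a 12-cycle
        # partitioned into three arcs of diameter 3 (so Delta = 3).  For each vertex we
        # test whether its ball of radius 1 (= gamma * Delta with gamma = 1/3) is
        # contained in a single cluster.
        from collections import deque

        def ball(adj, center, radius):
            """All vertices within hop-distance `radius` of `center`."""
            dist = {center: 0}
            q = deque([center])
            while q:
                u = q.popleft()
                if dist[u] == radius:
                    continue
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return set(dist)

        n = 12
        adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
        clusters = [set(range(0, 4)), set(range(4, 8)), set(range(8, 12))]
        which = {v: c for c in clusters for v in c}   # vertex -> its cluster

        radius = 1  # gamma * Delta
        padded = [v for v in adj if ball(adj, v, radius) <= which[v]]
        print(f"{len(padded)}/{n} balls of radius {radius} lie inside one cluster")  # 6/12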