
    Small Cuts and Connectivity Certificates: A Fault Tolerant Approach

    We revisit classical connectivity problems in the CONGEST model of distributed computing. By using techniques from fault tolerant network design, we show improved constructions, some of which are even "local" (i.e., with O~(1) rounds) for problems that are closely related to hard global problems (i.e., with a lower bound of Omega(Diam+sqrt{n}) rounds).
    Distributed Minimum Cut: Nanongkai and Su presented a randomized algorithm for computing a (1+epsilon)-approximation of the minimum cut using O~(D+sqrt{n}) rounds, where D is the diameter of the graph. For a sufficiently large minimum cut lambda=Omega(sqrt{n}), this is tight due to Das Sarma et al. [FOCS '11] and Ghaffari and Kuhn [DISC '13].
    - Small Cuts: A special setting that remains open is the one where the graph connectivity lambda is small (i.e., constant). The only lower bound for this case is Omega(D), with a matching upper bound known only for lambda <= 2 due to Pritchard and Thurimella [TALG '11]. Recently, Daga, Henzinger, Nanongkai and Saranurak [STOC '19] raised the open problem of computing the minimum cut in poly(D) rounds for any lambda=O(1). In this paper, we resolve this problem by presenting a surprisingly simple algorithm that takes a completely different approach than the existing algorithms. Our algorithm also has the benefit that it computes all minimum cuts in the graph, and it naturally extends to vertex cuts as well. At the heart of the algorithm is a graph sampling approach usually used in the context of fault tolerant (FT) design.
    - Deterministic Algorithms: While the existing distributed minimum cut algorithms are randomized, our algorithm can be made deterministic within the same round complexity. To obtain this, we introduce a novel definition of universal sets along with their efficient computation. This allows us to derandomize the FT graph sampling technique, which might be of independent interest.
    - Computation of All Edge Connectivities: We also consider the more general task of computing the edge connectivity of all edges in the graph. In the output format, it is required that the endpoints u,v of every edge (u,v) learn the cardinality of the minimum u-v cut in the graph. We provide the first sublinear algorithm for this problem for the case of constant connectivity values. Specifically, by combining the recent notion of low-congestion cycle cover with the sampling technique, we compute all edge connectivities in poly(D) * 2^{O(sqrt{log n log log n})} rounds.
    Sparse Certificates: For an n-vertex graph G and an integer lambda, a lambda-sparse certificate H is a subgraph H subseteq G with O(lambda n) edges which is lambda-connected iff G is lambda-connected. For D-diameter graphs, constructions of sparse certificates for lambda in {2,3} were given by Thurimella [J. Alg. '97] and Dori [PODC '18], respectively, using O~(D) rounds. The problem of devising such certificates in o(D+sqrt{n}) rounds was left open by Dori [PODC '18] for any lambda >= 4. Using connections to fault tolerant spanners, we considerably improve the round complexity for any lambda in [1,n] and epsilon in (0,1), by showing a construction of (1-epsilon)lambda-sparse certificates with O(lambda n) edges using only O(1/epsilon^2 * log^{2+o(1)} n) rounds.
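
    To make the sampling flavor behind the minimum-cut result concrete, here is a rough, centralized Python toy (not the paper's distributed algorithm): it repeatedly drops each edge independently, reads off the components of the surviving subgraph, and counts how many original edges cross each component boundary. The function name, number of trials, and keep probability are illustrative choices only.

```python
import random
from collections import defaultdict

def sampled_small_cut(n, edges, trials=200, keep_prob=0.5):
    """Toy sketch of the edge-sampling idea: a cut of constant size lambda has all
    of its edges dropped in some trial with good probability, at which point the
    sampled subgraph splits into components whose boundaries expose the cut."""
    best = len(edges)                       # trivial upper bound on the cut size
    for _ in range(trials):
        parent = list(range(n))             # union-find over the sampled subgraph

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for u, v in edges:                  # keep each edge independently
            if random.random() < keep_prob:
                parent[find(u)] = find(v)

        comps = defaultdict(set)            # components of the sampled subgraph
        for v in range(n):
            comps[find(v)].add(v)
        if len(comps) < 2:
            continue                        # sample stayed connected; nothing exposed

        for side in comps.values():         # candidate cut (side, V \ side) in G itself
            crossing = sum(1 for u, v in edges if (u in side) != (v in side))
            if 0 < crossing < best:
                best = crossing
    return best
```

    For instance, sampled_small_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]) would typically return 2, the minimum cut of that small graph.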

    Reachability and Shortest Paths in the Broadcast CONGEST Model

    In this paper we study the time complexity of the single-source reachability problem and the single-source shortest path problem for directed unweighted graphs in the Broadcast CONGEST model. We focus on the case where the diameter D of the underlying network is constant. We show that for the case where D = 1 there is, quite surprisingly, a very simple algorithm that solves the reachability problem in 1(!) round. In contrast, for networks with D = 2, we show that any distributed algorithm (possibly randomized) for this problem requires Omega(sqrt{n/log n}) rounds. Our results therefore completely resolve (up to a small polylog factor) the complexity of the single-source reachability problem for a wide range of diameters. Furthermore, we show that when D = 1, it is even possible to get an almost 3-approximation for the all-pairs shortest path problem (for directed unweighted graphs) in just 2 rounds. We also prove a stronger lower bound of Omega(sqrt{n}) for the single-source shortest path problem for unweighted directed graphs that holds even when the diameter of the underlying network is 2. As far as we know, this is the first lower bound that achieves Omega(sqrt{n}) for this problem.

    Low-Congestion Shortcut and Graph Parameters

    Distributed graph algorithms in the standard CONGEST model often exhibit a time-complexity lower bound of Omega~(sqrt{n} + D) rounds for many global problems, where n is the number of nodes and D is the diameter of the input graph. Since such a lower bound is derived from special "hard-core" instances, it does not necessarily apply to specific popular graph classes such as planar graphs. The concept of low-congestion shortcuts was initiated by Ghaffari and Haeupler [SODA 2016] to address the design of CONGEST algorithms that run fast on restricted network topologies. Specifically, given a graph class X, an f-round algorithm constructing shortcuts of quality q for any instance in X yields O~(q + f)-round algorithms for several fundamental graph problems, such as minimum spanning tree and minimum cut, on X. The main interest in this line of work is to identify graph classes that admit shortcuts efficient enough to break the O~(sqrt{n}+D)-round general lower bound. In this paper, we consider the relationship between the quality of low-congestion shortcuts and three major graph parameters: chordality, diameter, and clique-width. The main contribution of the paper is threefold: (1) We show an O(1)-round algorithm which constructs a low-congestion shortcut with quality O(kD) for any k-chordal graph, and prove that the quality and running time of this construction are nearly optimal up to polylogarithmic factors. (2) We present two algorithms, one constructing a low-congestion shortcut with quality O~(n^{1/4}) in O~(n^{1/4}) rounds for graphs of D=3, and the other constructing one with quality O~(n^{1/3}) in O~(n^{1/3}) rounds for graphs of D=4. These results immediately yield two MST algorithms running in O~(n^{1/4}) and O~(n^{1/3}) rounds for D=3 and D=4 respectively, which almost close the long-standing complexity gap for MST construction in small-diameter graphs originally posed by Lotker et al. [Distributed Computing 2006]. (3) We show that bounding clique-width does not help the construction of good shortcuts, by presenting a network topology of clique-width six in which MST construction is as expensive as in the general case.

    Optimal Algorithms for Non-Smooth Distributed Optimization in Networks

    In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in O(1/sqrt{t}), the structure of the communication network only impacts a second-order term in O(1/t), where t is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a d^{1/4} multiplicative factor of the optimal convergence rate, where d is the underlying dimension.
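
    The local-smoothing idea behind DRS can be illustrated with a short, single-machine Python sketch (not the paper's distributed algorithm): average subgradients of the non-smooth objective at Gaussian-perturbed points to obtain a gradient estimate of the smoothed surrogate f_gamma(x) = E[f(x + gamma*Z)], Z ~ N(0, I). The helper names, step sizes, and smoothing schedule below are illustrative assumptions, not the constants from the paper.

```python
import numpy as np

def smoothed_subgradient(subgrad, x, gamma, num_samples=20, rng=None):
    """Monte-Carlo (sub)gradient estimate of the Gaussian-smoothed surrogate of f:
    averaging subgradients of f at points x + gamma*Z approximates the gradient
    of the smooth function f_gamma."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    grads = [subgrad(x + gamma * rng.standard_normal(d)) for _ in range(num_samples)]
    return np.mean(grads, axis=0)

if __name__ == "__main__":
    # Toy usage: subgradient descent on the non-smooth function f(x) = ||x||_1.
    subgrad_l1 = lambda x: np.sign(x)              # a valid subgradient of the l1 norm
    x = np.ones(5)
    for t in range(1, 201):
        g = smoothed_subgradient(subgrad_l1, x, gamma=0.5 / np.sqrt(t))
        x -= g / np.sqrt(t)                        # standard O(1/sqrt(t)) step size
    print(np.linalg.norm(x, ord=1))                # should end up close to 0
```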

    Fast Distributed Computation of Distances in Networks

    This paper presents a distributed algorithm to simultaneously compute the diameter, radius and node eccentricity at all nodes of a synchronous network. Such topological information may be useful as input to configure other algorithms. Previous approaches have been modular, progressing in sequential phases using building blocks such as BFS tree construction, thus incurring longer executions than strictly required. We present an algorithm that, by timely propagation of available estimations, achieves a faster convergence to the correct values. We show local criteria for detecting convergence in each node. The algorithm avoids the creation of BFS trees and simply manipulates sets of node ids and hop counts. For the worst scenario of variable start times, each node i with eccentricity ecc(i) can compute: the node eccentricity in diam(G)+ecc(i)+2 rounds; the diameter in 2*diam(G)+ecc(i)+2 rounds; and the radius in diam(G)+ecc(i)+2*radius(G) rounds.
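
    A minimal, centralized Python simulation of the propagate-and-merge idea (sets of node ids with hop counts) is sketched below; it assumes a connected undirected graph with synchronous start and omits the paper's local convergence detection, so the names and structure are illustrative only.

```python
def eccentricities_by_flooding(adj):
    """Each node keeps a map {node id -> best known hop count}; every round it
    merges its neighbours' maps with the hop counts increased by one.  When the
    maps stop changing, the largest hop count at a node is its eccentricity."""
    n = len(adj)
    known = [{v: 0} for v in range(n)]       # initially every node knows only itself
    changed, rounds = True, 0
    while changed:
        changed, rounds = False, rounds + 1
        snapshot = [dict(k) for k in known]  # "messages" sent at the start of the round
        for v in range(n):
            for u in adj[v]:                 # receive neighbour u's map
                for w, h in snapshot[u].items():
                    if known[v].get(w, n) > h + 1:
                        known[v][w] = h + 1
                        changed = True
    ecc = [max(k.values()) for k in known]
    return ecc, max(ecc), min(ecc), rounds   # eccentricities, diameter, radius, rounds

if __name__ == "__main__":
    # Toy usage on the path 0-1-2-3: eccentricities [3, 2, 2, 3], diameter 3, radius 2.
    adj = [[1], [0, 2], [1, 3], [2]]
    print(eccentricities_by_flooding(adj))
```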