
    Faster graph algorithms via switching classes

    The runtime of an algorithm is intimately related to how an instance is represented. Recall that the runtimes of the first generation of graph algorithms were expressed as functions of n := |V|. This analysis was natural, since at that time graphs were represented in n^2 space via their adjacency matrix. It was soon noticed that if m := |E| = o(n^2), then a variety of graph algorithms could be sped up by computing the adjacency list from the adjacency matrix and then running the algorithm on the more efficient adjacency-list representation. This motivated the introduction of m into the runtime of graph algorithms, and it is now customary in algorithm design to assume that a graph instance is given in the form of its adjacency list. For instance, a graph algorithm is not considered to run in linear time unless it runs in O(n + m) time; an O(n^2) bound is not considered linear, even though the two bounds coincide in the worst case. Let m̃ be the size of the minimum representative of a graph G's switching class (with respect to some switching operation). It is shown that better bounds for several classical graph algorithms can be obtained by modifying them so that their running time is a function of n + m̃ rather than of n + m. This is significant because m̃ is O(m) but m is not O(m̃). This is accomplished by first computing the so-called partially complemented adjacency list (pc-list) from an adjacency list, and then designing an algorithm that is amenable to the more efficient pc-list representation. The pc-list data structure is a generalization of the adjacency list that has a natural correspondence to switching classes. Using this approach, better bounds are obtained for bipartite maximum matching, graph diameter, and vertex-weighted all-pairs shortest paths.
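    To make the pc-list idea concrete, here is a minimal Python sketch of one plausible variant, assuming switching means complementing a vertex's neighborhood: each vertex stores either its neighbor list or that list's complement, whichever is smaller, together with a flag. The class and method names are illustrative, not taken from the thesis.

```python
# Minimal sketch of a partially complemented adjacency list (pc-list),
# assuming the simplest variant: each vertex stores either its neighbor
# set or the complement of it, whichever is smaller. Names (PCList,
# rows, ...) are illustrative, not from the paper.

class PCList:
    def __init__(self, n, adj):
        """Build from an ordinary adjacency list adj: dict vertex -> set."""
        self.n = n
        self.rows = {}
        for v in range(n):
            nbrs = set(adj.get(v, ()))
            if len(nbrs) <= n // 2:
                self.rows[v] = (False, nbrs)       # stored as-is
            else:
                comp = set(range(n)) - nbrs - {v}  # store the complement
                self.rows[v] = (True, comp)

    def size(self):
        """Total stored list length: this sketch's analogue of m-tilde."""
        return sum(len(row) for _, row in self.rows.values())

    def is_edge(self, u, v):
        complemented, row = self.rows[u]
        return (v in row) != complemented  # membership flips if complemented

    def neighbors(self, v):
        """Iterate true neighbors. Expanding a complemented row costs O(n),
        so real pc-list algorithms avoid doing this explicitly."""
        complemented, row = self.rows[v]
        if not complemented:
            yield from row
        else:
            for u in range(self.n):
                if u != v and u not in row:
                    yield u
```

    On dense neighborhoods the stored rows shrink, so size() can be far below m; algorithms that work directly on the flagged rows, rather than expanding them, are the ones that can beat O(n + m).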

    A 1.75 LP approximation for the Tree Augmentation Problem

    In the Tree Augmentation Problem (TAP) the goal is to augment a tree T by a minimum-size edge set F from a given edge set E such that T ∪ F is 2-edge-connected. The best approximation ratio known for TAP is 1.5. In the more general Weighted TAP problem, F should be of minimum weight. Weighted TAP admits several 2-approximation algorithms with respect to the standard cut LP relaxation, but for all of them the performance ratio of 2 is tight even for TAP. The problem is equivalent to the problem of covering a laminar set family. Laminar set families play an important role in the design of approximation algorithms for connectivity network design problems. In fact, Weighted TAP is the simplest connectivity network design problem for which a ratio better than 2 is not known. Improving this "natural" ratio is a major open problem, which may have implications for many other network design problems. It seems that achieving this goal requires finding an LP relaxation with integrality gap better than 2, which is a long-standing open problem even for TAP. In this paper we introduce such an LP relaxation and give an algorithm that computes a feasible solution for TAP of size at most 1.75 times the optimal LP value. This gives some hope of breaking the ratio of 2 for the weighted case. Our algorithm computes an initial edge set by solving a partial system of constraints that forms the integral edge-cover polytope, and then applies local search on 3-leaf subtrees to exchange some of the edges and to add additional edges. Thus we do not need to solve the LP, and the algorithm runs roughly in the time required to find a minimum-weight edge cover in a general graph. (Comment: arXiv admin note: substantial text overlap with arXiv:1507.0279.)
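    For reference, the standard cut LP mentioned above requires that every tree edge be covered by at least one chosen link whose tree path contains it. The sketch below (not the paper's new relaxation or its local-search algorithm) builds those covering sets for a toy instance and brute-forces the smallest integral cover; all names and the example instance are illustrative.

```python
# Illustration of the cut-covering view of TAP: each tree edge must lie
# on the tree path of some chosen link. For brevity this solves the
# *integral* problem by brute force on a tiny instance; the cut LP
# relaxes the choice to x_e in [0, 1] and hands it to a solver.
from itertools import combinations

def tree_path_edges(parent, depth, u, v):
    """Edges (as frozensets) on the tree path between u and v."""
    path = set()
    while u != v:
        if depth[u] < depth[v]:
            u, v = v, u
        path.add(frozenset((u, parent[u])))
        u = parent[u]
    return path

def min_augmentation(parent, depth, links):
    """Smallest link set F such that every tree edge is covered."""
    tree_edges = {frozenset((v, p)) for v, p in parent.items() if p is not None}
    cover = {l: tree_path_edges(parent, depth, *l) for l in links}
    for k in range(len(links) + 1):  # try smaller F first
        for F in combinations(links, k):
            if set().union(set(), *(cover[l] for l in F)) >= tree_edges:
                return F

# Path 0-1-2-3 rooted at 0; both links are needed to cover all tree edges.
parent = {0: None, 1: 0, 2: 1, 3: 2}
depth = {0: 0, 1: 1, 2: 2, 3: 3}
print(min_augmentation(parent, depth, [(0, 2), (1, 3)]))  # ((0, 2), (1, 3))
```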

    Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication

    This paper proposes a novel class of distributed continuous-time coordination algorithms to solve network optimization problems whose cost function is a sum of local cost functions associated with the individual agents. We establish the exponential convergence of the proposed algorithm under (i) strongly connected and weight-balanced digraph topologies when the local costs are strongly convex with globally Lipschitz gradients, and (ii) connected graph topologies when the local costs are strongly convex with locally Lipschitz gradients. When the local cost functions are convex and the global cost function is strictly convex, we establish asymptotic convergence under connected graph topologies. We also characterize the algorithm's correctness under time-varying interaction topologies and study its privacy-preservation properties. Motivated by practical considerations, we analyze the algorithm's implementation with discrete-time communication. We provide an upper bound on the stepsize that guarantees exponential convergence over connected graphs for implementations with periodic communication. Building on this result, we design a provably correct centralized event-triggered communication scheme that is free of Zeno behavior. Finally, we develop a distributed, asynchronous event-triggered communication scheme that is also free of Zeno behavior and comes with asymptotic convergence guarantees. Several simulations illustrate our results. (Comment: 12 pages.)
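    As a toy illustration of the continuous-time-plus-discretization viewpoint, the following numpy sketch Euler-discretizes a standard PI-style consensus optimization dynamics over a connected graph with quadratic local costs. It is a generic stand-in under the stated assumptions, not the paper's algorithm or its event-triggered schemes.

```python
# Minimal sketch (not the paper's exact algorithm) of a common
# continuous-time form behind such schemes,
#   x_i' = -grad f_i(x_i) - sum_j a_ij (x_i - x_j) - z_i,
#   z_i' =  sum_j a_ij (x_i - x_j),
# Euler-discretized with stepsize h; agents exchange only x with
# neighbors. The toy quadratic costs f_i(x) = (x - b_i)^2 / 2 and all
# constants are illustrative assumptions.
import numpy as np

A = np.array([[0, 1, 0, 1],   # adjacency of a connected 4-cycle
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
b = np.array([1.0, 2.0, 3.0, 4.0])      # minimizer of sum_i f_i is mean(b)

h = 0.05                                # stepsize, small enough for stability
x = np.zeros(4)
z = np.zeros(4)                         # sum(z) = 0 is preserved by L
for _ in range(2000):
    grad = x - b                        # gradient of f_i(x) = (x - b_i)^2 / 2
    x, z = x + h * (-grad - L @ x - z), z + h * (L @ x)
    # tuple assignment: the z update uses the pre-update x

print(x)  # each entry approaches mean(b) = 2.5
```

    The z variables integrate disagreement and drive the agents to agree on the common minimizer; the abstract's stepsize bound plays the role of the hand-picked h here.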

    Path-Contractions, Edge Deletions and Connectivity Preservation

    We study several problems related to graph modification under connectivity constraints from the perspective of parameterized complexity: (Weighted) Biconnectivity Deletion, where we are tasked with deleting k edges while preserving biconnectivity in an undirected graph; Vertex-Deletion Preserving Strong Connectivity, where we want to maintain strong connectivity of a digraph while deleting exactly k vertices; and Path-Contraction Preserving Strong Connectivity, in which the operation of path contraction on arcs is used instead. The parameterized tractability of this last problem was posed by Bang-Jensen and Yeo [DAM 2008] as an open question, and we answer it here in the negative: both variants of preserving strong connectivity are W[1]-hard. Preserving biconnectivity, on the other hand, turns out to be fixed-parameter tractable, and we provide a 2^{O(k log k)} · n^{O(1)}-time algorithm that solves Weighted Biconnectivity Deletion. Further, we show that the unweighted case even admits a randomized polynomial kernel. All our results provide further interesting data points for the systematic study of connectivity-preservation constraints in the parameterized setting.
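    To pin down the first problem, the sketch below brute-forces unweighted Biconnectivity Deletion with networkx: try every set of k edges and test whether the residual graph stays biconnected. This (m choose k)-time search is the naive baseline that a 2^{O(k log k)} · n^{O(1)} FPT algorithm improves upon; it is not the paper's algorithm.

```python
# Brute force for (unweighted) Biconnectivity Deletion: is there a set of
# exactly k edges whose removal leaves the graph biconnected? Assumes the
# networkx library; the instance below is illustrative.
from itertools import combinations
import networkx as nx

def biconnectivity_deletion(G, k):
    """Return k edges whose removal leaves G biconnected, or None."""
    for F in combinations(G.edges(), k):
        H = G.copy()
        H.remove_edges_from(F)
        if H.number_of_nodes() >= 3 and nx.is_biconnected(H):
            return F
    return None

G = nx.complete_graph(5)               # K5 tolerates some edge deletions
print(biconnectivity_deletion(G, 2))   # e.g. ((0, 1), (0, 2))
```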