    On Strong Diameter Padded Decompositions

    Given a weighted graph G=(V,E,w), a partition of V is Delta-bounded if the diameter of each cluster is bounded by Delta. A distribution over Delta-bounded partitions is a beta-padded decomposition if every ball of radius gamma * Delta is contained in a single cluster with probability at least e^{-beta * gamma}. The weak diameter of a cluster C is measured w.r.t. distances in G, while the strong diameter is measured w.r.t. distances in the induced graph G[C]. The decomposition is weak/strong according to the diameter guarantee. It was previously proven that K_r-free graphs admit weak decompositions with padding parameter O(r), while for strong decompositions only an O(r^2) padding parameter was known. Furthermore, for a graph G whose induced shortest-path metric d_G has doubling dimension ddim, a weak O(ddim)-padded decomposition was constructed, which is also known to be tight. For the case of strong diameter, nothing was known. We construct strong O(r)-padded decompositions for K_r-free graphs, matching the state of the art for weak decompositions. Similarly, for graphs with doubling dimension ddim we construct a strong O(ddim)-padded decomposition, which is also tight. We use this decomposition to construct an (O(ddim), O~(ddim))-sparse cover scheme for such graphs. Our new decompositions and cover have implications for approximating unique games, for the construction of light and sparse spanners, and for path-reporting distance oracles.
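    The definitions above are concrete enough to sanity-check on small examples. Below is a minimal Python sketch, not taken from the paper, that evaluates the weak and strong diameter of a cluster and tests whether a ball of a given radius lies inside a single cluster of a partition; the graph representation (a dict of weighted adjacency dicts) and all helper names are illustrative assumptions.

```python
# Minimal illustrative sketch (not from the paper): Delta-bounded clusters,
# weak vs. strong diameter, and the "padded ball" event from the definition.
# A graph is a dict {u: {v: w(u, v), ...}}; all names here are assumptions.
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def weak_diameter(graph, cluster):
    """Max pairwise distance between cluster vertices, measured in G."""
    return max(dijkstra(graph, u).get(v, float("inf"))
               for u in cluster for v in cluster)

def strong_diameter(graph, cluster):
    """Same, but distances are measured in the induced subgraph G[cluster]."""
    induced = {u: {v: w for v, w in graph[u].items() if v in cluster}
               for u in cluster}
    return weak_diameter(induced, cluster)

def is_delta_bounded(graph, partition, delta, strong=True):
    """True iff every cluster of the partition has diameter at most delta."""
    diam = strong_diameter if strong else weak_diameter
    return all(diam(graph, cluster) <= delta for cluster in partition)

def ball_is_padded(graph, partition, center, radius):
    """True iff the ball B(center, radius) is contained in a single cluster."""
    dist = dijkstra(graph, center)
    ball = {v for v, d in dist.items() if d <= radius}
    return any(ball <= set(cluster) for cluster in partition)
```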

    The Complexity of Partial Function Extension for Coverage Functions

    Coverage functions are an important subclass of submodular functions, finding applications in machine learning, game theory, social networks, and facility location. We study the complexity of partial function extension to coverage functions: given a partial function consisting of a family of subsets of [m] and a value at each point, does there exist a coverage function defined on all subsets of [m] that extends this partial function? Partial function extension has previously been studied for other function classes, including Boolean functions and convex functions, and is useful in many fields, such as obtaining bounds on learning these function classes. We show that determining extendibility of a partial function to a coverage function is NP-complete, establishing in the process that there is a polynomial-sized certificate of extendibility. The hardness also gives us a lower bound for learning coverage functions. We then study two natural notions of approximate extension, to account for errors in the data set. The two notions correspond roughly to multiplicative point-wise approximation and additive L_1 approximation. We show upper and lower bounds for both notions of approximation; in the second case we obtain nearly tight bounds.
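    For intuition, here is a small illustrative sketch (not from the paper) of what a coverage function is and what it means for one to extend a partial function: a coverage function over [m] is induced by sets A_1, ..., A_m over a weighted universe, with f(S) equal to the total weight of the union of the chosen sets. The representation and helper names below are assumptions made only for illustration.

```python
# Minimal illustrative sketch (not from the paper): a coverage function over
# [m] given by sets A[1..m] over a weighted universe, and a check that it
# agrees with a partial function.  All names are assumptions.

def coverage_value(S, A, weight):
    """f(S) = total weight of the union of A[i] over i in S."""
    covered = set().union(*(A[i] for i in S)) if S else set()
    return sum(weight[u] for u in covered)

def is_extension(partial, A, weight):
    """True iff the coverage function defined by (A, weight) agrees with
    every (subset, value) pair of the partial function."""
    return all(coverage_value(S, A, weight) == value for S, value in partial)

# Tiny example over [2] = {1, 2}: this coverage function extends the
# partial function below.
A = {1: {"a", "b"}, 2: {"b", "c"}}
weight = {"a": 1.0, "b": 1.0, "c": 1.0}
partial = [({1}, 2.0), ({2}, 2.0), ({1, 2}, 3.0)]
print(is_extension(partial, A, weight))  # True
```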

    Towards Distributed Two-Stage Stochastic Optimization

    The weighted vertex cover problem is concerned with selecting a subset of the vertices that covers a target set of edges with the objective of minimizing the total cost of the selected vertices. We consider a variant of this classic combinatorial optimization problem where the target edge set is not fully known; rather, it is characterized by a probability distribution. Adhering to the model of two-stage stochastic optimization, the execution is divided into two stages so that in the first stage, the decision maker selects some of the vertices based on the probabilistic forecast of the target edge set. Then, in the second stage, the edges in the target set are revealed and, in order to cover them, the decision maker can augment the vertex subset selected in the first stage with additional vertices. However, in the second stage, the vertex cost increases by some inflation factor, so the second-stage selection becomes more expensive. The current paper studies the two-stage stochastic vertex cover problem in the realm of distributed graph algorithms, where the decision making process (in both stages) is distributed among the vertices of the graph. By combining the stochastic optimization toolbox with recent advances in distributed algorithms for weighted vertex cover, we develop an algorithm that runs in time O(log Delta / epsilon), sends O(m) messages in total, and guarantees to approximate the optimal solution within a (3 + epsilon)-ratio, where m is the number of edges in the graph, Delta is its maximum degree, and 0 < epsilon < 1 is a performance parameter.
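    As a point of reference, the two-stage objective described above can be written down directly. The sketch below is not from the paper (and is certainly not the distributed algorithm): it evaluates the cost of a first-stage selection as its own cost plus the expected cost of an optimal second-stage augmentation at inflated prices, using brute force that is only viable for tiny instances; all names are assumptions.

```python
# Minimal illustrative sketch (not from the paper): the two-stage stochastic
# vertex cover objective.  A scenario is (probability, list of target edges);
# first-stage vertices cost cost[v], second-stage vertices cost
# inflation * cost[v].  Brute force, tiny instances only; names are assumptions.
from itertools import combinations

def min_cover_cost(edges, cost, vertices):
    """Cheapest vertex subset covering all given edges (exhaustive search)."""
    best = float("inf")
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            chosen = set(subset)
            if all(u in chosen or v in chosen for u, v in edges):
                best = min(best, sum(cost[v] for v in chosen))
    return best

def expected_two_stage_cost(first_stage, scenarios, cost, inflation):
    """First-stage cost plus the expected cost of an optimal second-stage
    augmentation under the inflated vertex prices."""
    vertices = list(cost)
    inflated = {v: inflation * cost[v] for v in vertices}
    total = sum(cost[v] for v in first_stage)
    for prob, target_edges in scenarios:
        uncovered = [(u, v) for u, v in target_edges
                     if u not in first_stage and v not in first_stage]
        total += prob * min_cover_cost(uncovered, inflated, vertices)
    return total
```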

    Towards a better approximation for sparsest cut?

    We give a new (1+epsilon)-approximation for the sparsest cut problem on graphs where small sets expand significantly more than the sparsest cut (sets of size n/r expand by a factor sqrt{log n * log r} bigger, for some small r; this condition holds for many natural graph families). We give two different algorithms. One involves Guruswami-Sinop rounding on the level-r Lasserre relaxation. The other is combinatorial and involves a new notion called Small Set Expander Flows (inspired by the expander flows of ARV), which we show exists in the input graph. Both algorithms run in time 2^{O(r)} * poly(n). We also show similar approximation algorithms for graphs of genus g with an analogous local expansion condition. This is the first algorithm we know of that achieves a (1+epsilon)-approximation on such a general family of graphs.
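    For reference, here is the quantity being approximated in its uniform, unweighted form: the sparsity of a cut S is the number of crossing edges divided by min(|S|, |V \ S|). The brute-force sketch below is purely illustrative (exponential time, tiny graphs only) and is not part of either algorithm from the paper; all names are assumptions.

```python
# Minimal illustrative sketch (not from the paper): uniform cut sparsity and
# a brute-force sparsest cut, exponential time, for tiny graphs only.
from itertools import combinations

def sparsity(edges, vertices, cut):
    """|E(S, V \\ S)| / min(|S|, |V \\ S|) for S = cut."""
    S = set(cut)
    crossing = sum(1 for u, v in edges if (u in S) != (v in S))
    return crossing / min(len(S), len(vertices) - len(S))

def sparsest_cut(edges, vertices):
    """Exhaustively find the cut of minimum sparsity."""
    best_value, best_cut = float("inf"), None
    for k in range(1, len(vertices) // 2 + 1):
        for S in combinations(vertices, k):
            value = sparsity(edges, vertices, S)
            if value < best_value:
                best_value, best_cut = value, set(S)
    return best_value, best_cut
```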

    Structural Rounding: Approximation Algorithms for Graphs Near an Algorithmically Tractable Class

    We develop a framework for generalizing approximation algorithms from the structural graph algorithm literature so that they apply to graphs somewhat close to an algorithmically tractable class (a scenario we expect is common when working with real-world networks) while still guaranteeing approximation ratios. The idea is to edit a given graph via vertex- or edge-deletions to put the graph into an algorithmically tractable class, apply known approximation algorithms for that class, and then lift the solution to apply to the original graph. We give a general characterization of when an optimization problem is amenable to this approach, and show that it includes many well-studied graph problems, such as Independent Set, Vertex Cover, Feedback Vertex Set, Minimum Maximal Matching, Chromatic Number, (l-)Dominating Set, Edge (l-)Dominating Set, and Connected Dominating Set. To enable this framework, we develop new editing algorithms that find the approximately-fewest edits required to bring a given graph into one of a few important graph classes (in some cases these are bicriteria algorithms which simultaneously approximate both the number of editing operations and the target parameter of the family). For bounded degeneracy, we obtain an O(r log n)-approximation and a bicriteria (4,4)-approximation which also extends to a smoother bicriteria trade-off. For bounded treewidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w}))-approximation, and for bounded pathwidth, we obtain a bicriteria (O(log^{1.5} n), O(sqrt{log w} * log n))-approximation. For treedepth 2 (related to bounded expansion), we obtain a 4-approximation. We also prove complementary hardness-of-approximation results assuming P != NP: in particular, these problems are all log-factor inapproximable, except the last, which cannot be approximated below some constant factor of 2 (assuming UGC).
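    The edit/solve/lift pattern can be illustrated on Vertex Cover under vertex deletions: any cover of the edited graph, together with the deleted vertices, covers every edge of the original graph. The sketch below illustrates only that lifting step, under stated assumptions; the editing routine and the class-specific solver are left abstract, and none of the names come from the paper.

```python
# Minimal illustrative sketch (not from the paper): the edit / solve / lift
# pattern for Vertex Cover under vertex deletions.  `edit` and
# `solve_in_class` are abstract placeholders; all names are assumptions.

def structural_rounding_vertex_cover(vertices, edges, edit, solve_in_class):
    """edit(vertices, edges) -> set D of vertices whose deletion puts the
    graph into the tractable class; solve_in_class(vertices, edges) -> an
    (approximate) vertex cover of the edited graph."""
    D = set(edit(vertices, edges))
    kept_vertices = [v for v in vertices if v not in D]
    kept_edges = [(u, v) for u, v in edges if u not in D and v not in D]
    cover = set(solve_in_class(kept_vertices, kept_edges))
    lifted = cover | D  # the lift: deleted vertices cover every edge they touch
    assert all(u in lifted or v in lifted for u, v in edges)
    return lifted
```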

    The Minimum Wiener Connector

    The Wiener index of a graph is the sum of all pairwise shortest-path distances between its vertices. In this paper we study the novel problem of finding a minimum Wiener connector: given a connected graph G=(V,E) and a set Q ⊆ V of query vertices, find a subgraph of G that connects all query vertices and has minimum Wiener index. We show that the Minimum Wiener Connector admits a polynomial-time (albeit impractical) exact algorithm for the special case where the number of query vertices is bounded. We show that in general the problem is NP-hard, and has no PTAS unless P = NP. Our main contribution is a constant-factor approximation algorithm running in time O~(|Q||E|). A thorough experimentation on a large variety of real-world graphs confirms that our method returns smaller and denser solutions than other methods, and does so by adding to the query set Q a small number of important vertices (i.e., vertices with high centrality). Published in Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data.
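    Since the objective is just the Wiener index of the chosen subgraph, it is easy to state in code. The sketch below (not from the paper) computes the Wiener index of a connected, unweighted graph by running a BFS from every vertex; the adjacency-dict representation and function name are assumptions.

```python
# Minimal illustrative sketch (not from the paper): the Wiener index of a
# connected, unweighted graph via BFS from every vertex.  `adj` maps each
# vertex to a set of neighbours; names are assumptions.
from collections import deque

def wiener_index(adj):
    total = 0
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # every unordered pair was counted once from each side
```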

    Optimal Distributed Covering Algorithms

    We present a time-optimal deterministic distributed algorithm for approximating a minimum weight vertex cover in hypergraphs of rank f. This problem is equivalent to the Minimum Weight Set Cover problem in which the frequency of every element is bounded by f. The approximation factor of our algorithm is (f+epsilon). Let Delta denote the maximum degree in the hypergraph. Our algorithm runs in the congest model and requires O(log Delta / log log Delta) rounds, for constants epsilon in (0,1] and f in N^+. This is the first distributed algorithm for this problem whose running time does not depend on the vertex weights or on the number of vertices, thus adding another member to the exclusive family of provably optimal distributed algorithms. For constant values of f and epsilon, our algorithm improves over the (f+epsilon)-approximation algorithm of [Fabian Kuhn et al., 2006], whose running time is O(log Delta + log W), where W is the ratio between the largest and smallest vertex weights in the graph. Our algorithm also achieves an f-approximation for the problem in O(f log n) rounds, improving over the classical result of [Samir Khuller et al., 1994] that achieves a running time of O(f log^2 n). Finally, for weighted vertex cover (f=2), our algorithm achieves a deterministic running time of O(log n), matching the previously best (randomized) result of [Koufogiannakis and Young, 2011]. We also show that integer covering programs can be reduced to the Minimum Weight Set Cover problem in the distributed setting. This allows us to achieve an (f+epsilon)-approximate integral solution in O((1 + f/log n) * ((log Delta)/(log log Delta) + (f * log M)^{1.01} * log epsilon^{-1} * (log Delta)^{0.01})) rounds, where f bounds the number of variables in a constraint, Delta bounds the number of constraints a variable appears in, and M = max{1, ceil(1/a_min)}, where a_min is the smallest normalized constraint coefficient. This improves over the results of [Fabian Kuhn et al., 2006] for the integral case, which combined with rounding achieve the same guarantees in O(epsilon^{-4} * f^4 * log f * log(M * Delta)) rounds.
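    For context only, the (f+epsilon) and f guarantees discussed above match, up to epsilon, the classical sequential primal-dual f-approximation for set cover with element frequency at most f. The sketch below shows that sequential baseline, not the paper's distributed algorithm; the data layout and names are assumptions.

```python
# Minimal illustrative sketch (not from the paper, and sequential rather than
# distributed): the classical primal-dual f-approximation for weighted set
# cover when every element appears in at most f sets.  `sets` maps a set name
# to its elements, `weight` maps it to its cost; names are assumptions.

def primal_dual_set_cover(sets, weight, universe):
    residual = dict(weight)   # remaining slack of each set's dual constraint
    cover, covered = set(), set()
    for element in universe:
        if element in covered:
            continue
        containing = [s for s in sets if element in sets[s]]
        raise_by = min(residual[s] for s in containing)  # assumes feasibility
        for s in containing:
            residual[s] -= raise_by
            if residual[s] == 0 and s not in cover:
                cover.add(s)            # set became tight: take it
                covered |= sets[s]
    return cover
```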