
    The cavity approach for Steiner trees packing problems

    The Belief Propagation approximation, or cavity method, has recently been applied to several combinatorial optimization problems in its zero-temperature implementation, the max-sum algorithm. In particular, recent developments to solve the edge-disjoint paths problem and the prize-collecting Steiner tree problem on graphs have shown remarkable results for several classes of graphs and for benchmark instances. Here we propose a generalization of these techniques for two variants of the Steiner trees packing problem, where multiple "interacting" trees have to be sought within a given graph. Depending on the interaction among trees, we distinguish the vertex-disjoint Steiner trees problem (V-DStP), where trees cannot share nodes, from the edge-disjoint Steiner trees problem (E-DStP), where edges cannot be shared by trees but nodes can be members of multiple trees. Several practical problems of great interest in network design can be mapped onto these two variants, for instance the physical design of Very Large Scale Integration (VLSI) chips. The formalism described here relies on two-component edge variables that allow us to formulate a message-passing algorithm for the V-DStP and two algorithms for the E-DStP, differing in how the computational time scales with respect to some relevant parameters. We will show that one of the two formalisms used for the edge-disjoint variant allows us to map the max-sum update equations onto a weighted maximum matching problem over suitable bipartite graphs. We developed a heuristic procedure based on the max-sum equations that shows excellent performance on synthetic networks (in particular, outperforming standard multi-step greedy procedures by large margins) and on large VLSI benchmark instances for which the optimal solution is known: the algorithm found the optimum in two cases, and the gap to optimality was never larger than 4%.
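    To make this matching reduction concrete: the sketch below solves the inner weighted maximum matching subproblem with an off-the-shelf assignment solver. It is a minimal illustration under assumed semantics (the weight matrix and function name are made up, not the authors' implementation).

```python
# Illustration: one E-DStP formalism maps each max-sum update onto a
# weighted maximum matching over a bipartite graph; here we solve only
# that inner subproblem. The weights are a made-up example.
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_weight_matching(weights):
    """Maximum-weight matching of a bipartite weight matrix.

    weights[i, j] stands in for the max-sum message weight of pairing
    left node i with right node j (assumed semantics).
    """
    rows, cols = linear_sum_assignment(weights, maximize=True)
    return list(zip(rows.tolist(), cols.tolist())), float(weights[rows, cols].sum())

w = np.array([[2.0, -1.0, 0.5],
              [0.0,  3.0, 1.0],
              [1.5,  0.0, 2.5]])
matching, value = max_weight_matching(w)
print(matching, value)  # [(0, 0), (1, 1), (2, 2)] 7.5
```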

    Parameterized Complexity Dichotomy for Steiner Multicut

    The Steiner Multicut problem asks, given an undirected graph G, terminal sets T1,...,Tt ⊆ V(G) of size at most p, and an integer k, whether there is a set S of at most k edges or nodes such that, for each set Ti, at least one pair of terminals lies in different connected components of G \ S. This problem generalizes several graph cut problems, in particular the Multicut problem (the case p = 2), which is fixed-parameter tractable for the parameter k [Marx and Razgon, Bousquet et al., STOC 2011]. We provide a dichotomy of the parameterized complexity of Steiner Multicut. That is, for any combination of k, t, p, and the treewidth tw(G) as constant, parameter, or unbounded, and for all versions of the problem (edge deletion and node deletion, with and without deletable terminals), we prove either that the problem is fixed-parameter tractable or that it is hard (W[1]-hard or even (para-)NP-complete). We highlight that:
    - The edge deletion version of Steiner Multicut is fixed-parameter tractable for the parameter k+t on general graphs (but has no polynomial kernel, even on trees). We present two proofs: one using the randomized contractions technique of Chitnis et al., and one relying on new structural lemmas that decompose the Steiner cut into important separators and minimal s-t cuts.
    - In contrast, both node deletion versions of Steiner Multicut are W[1]-hard for the parameter k+t on general graphs.
    - All versions of Steiner Multicut are W[1]-hard for the parameter k, even when p = 3 and the graph is a tree plus one node. Hence, the results of Marx and Razgon, and Bousquet et al. do not generalize to Steiner Multicut.
    Since we allow k, t, p, and tw(G) to be any constants, our characterization includes a dichotomy for Steiner Multicut on trees (for tw(G) = 1), and a polynomial time versus NP-hardness dichotomy (by restricting k, t, p, tw(G) to constant or unbounded).
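    As a concrete reading of the definition, the sketch below is a naive verifier for the edge-deletion version: it checks that, after removing a candidate set S, every terminal set has at least one pair of terminals in different components. It is illustrative only (it assumes the networkx library and is not one of the paper's algorithms).

```python
# Naive feasibility check for Steiner Multicut (edge-deletion version):
# S is a valid cut iff no terminal set T_i lies entirely inside one
# connected component of G - S.
import networkx as nx

def is_steiner_multicut(G, terminal_sets, S):
    H = G.copy()
    H.remove_edges_from(S)
    comp = {v: i for i, c in enumerate(nx.connected_components(H)) for v in c}
    return all(len({comp[t] for t in T}) > 1 for T in terminal_sets)

G = nx.path_graph(5)  # 0-1-2-3-4
print(is_steiner_multicut(G, [{0, 4}, {1, 3}], [(2, 3)]))  # True
```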

    Hardness results and approximation algorithms for some problems on graphs

    This thesis has two parts. In the first part, we study some graph covering problems with a non-local covering rule that allows a "remote" node to be covered by repeatedly applying the covering rule. In the second part, we provide some results on the packing of Steiner trees. In the Propagation problem we are given a graph G and the goal is to find a minimum-sized set of nodes S that covers all of the nodes, where a node v is covered if (1) v is in S, or (2) v has a neighbor u such that u and all of its neighbors except v are covered. Rule (2) is called the propagation rule, and it is applied iteratively. Throughout, we use n to denote the number of nodes in the input graph. We prove that the path-width parameter is a lower bound for the optimal value. We show that the Propagation problem is NP-hard in planar weighted graphs. We prove that it is NP-hard to approximate the optimal value to within a factor of 2^{log^{1-ε} n} in weighted (general) graphs. The second problem that we study is the Power Dominating Set problem. This problem has two covering rules. The first rule is the same as the domination rule in the Dominating Set problem, and the second rule is the same propagation rule as in the Propagation problem. We show that it is hard to approximate the optimal value to within a factor of 2^{log^{1-ε} n} in general graphs. We design and analyze an approximation algorithm with a performance guarantee of O(√n) on planar graphs. We formulate a common generalization of the above two problems called the General Propagation problem. We reformulate this general problem as an orientation problem, and based on this reformulation we design a dynamic programming algorithm. The algorithm runs in linear time when the graph has tree-width O(1). Motivated by applications, we introduce a restricted version of the problem that we call the ℓ-round General Propagation problem. We give a PTAS for the ℓ-round General Propagation problem on planar graphs, for small values of ℓ. Our dynamic programming algorithms and the PTAS can be extended to other problems in networks with similar propagation rules. As an example, we discuss the extension of our results to the Target Set Selection problem in the threshold model of diffusion processes.

    In the second part of the thesis, we focus on the Steiner Tree Packing problem. In this problem, we are given a graph G and a subset of terminal nodes R ⊆ V(G). The goal is to find a maximum-cardinality set of disjoint trees that each spans R, that is, each of the trees should contain all terminal nodes. In the edge-disjoint version of this problem, the trees have to be edge-disjoint. In the element-disjoint version, the trees have to be node-disjoint on non-terminal nodes and edge-disjoint on edges adjacent to terminals. We show that both problems are NP-hard when there are only 3 terminals. Our main focus is on planar instances of these problems. We show that the edge-disjoint version of the problem is NP-hard even in planar graphs with 3 terminals on the same face of the embedding. Next, we design an algorithm that achieves an approximation guarantee of 1/2 - 1/k, given a planar graph that is k element-connected on the terminals; in fact, given such a graph the algorithm returns k/2 - 1 element-disjoint Steiner trees. Using this algorithm we get an approximation algorithm with a guarantee of (almost) 4 for the edge-disjoint version of the problem in planar graphs. We also show that the natural LP relaxation of the edge-disjoint Steiner Tree Packing problem has an integrality ratio of 2 - ε in planar graphs.
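    The propagation rule (2) above can be simulated directly: starting from a seed set S, a covered node with exactly one uncovered neighbor covers that neighbor, and the rule is applied until a fixed point. The sketch below is only meant to make the covering process concrete (the helper name and networkx dependency are assumptions; it is not the thesis's dynamic program).

```python
# Direct simulation of the Propagation covering process: rule (1) puts
# the seed set S in the covered set; rule (2) lets a covered node u
# whose neighbors are all covered except one node v cover v as well.
import networkx as nx

def propagate(G, S):
    covered = set(S)
    changed = True
    while changed:
        changed = False
        for u in list(covered):
            uncovered = [v for v in G[u] if v not in covered]
            if len(uncovered) == 1:       # rule (2) fires at u
                covered.add(uncovered[0])
                changed = True
    return covered

G = nx.path_graph(4)          # 0-1-2-3
print(propagate(G, {0}))      # {0, 1, 2, 3}: coverage propagates along the path
```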

    Thresholded Covering Algorithms for Robust and Max-Min Optimization

    The general problem of robust optimization is this: one of several possible scenarios will appear tomorrow, but things are more expensive tomorrow than they are today. What should you anticipatorily buy today, so that the worst-case cost (summed over both days) is minimized? Feige et al. and Khandekar et al. considered the k-robust model, where the possible outcomes tomorrow are given by all demand-subsets of size k, and gave algorithms for the set cover problem, and for the Steiner tree and facility location problems in this model, respectively. In this paper, we give the following simple and intuitive template for k-robust problems: "having built some anticipatory solution, if there exists a single demand whose augmentation cost is larger than some threshold, augment the anticipatory solution to cover this demand as well, and repeat". We show that this template gives improved approximation algorithms for k-robust Steiner tree and set cover, and the first approximation algorithms for k-robust Steiner forest, minimum-cut and multicut. All our approximation ratios (except for multicut) are almost best possible. As a by-product of our techniques, we also get algorithms for max-min problems of the form: "given a covering problem instance, which k of the elements are costliest to cover?".
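    The quoted template can be written down almost verbatim. In the schematic sketch below, `augmentation_cost`, `augment`, and the threshold are problem-specific stand-ins (hypothetical names; the paper instantiates them differently for each covering problem and chooses the threshold as a function of k).

```python
# Schematic rendering of the thresholded template for k-robust covering:
# while some single demand is expensive to add on top of the current
# anticipatory solution, augment the solution to cover it, then repeat.
def thresholded_cover(demands, augmentation_cost, augment, threshold):
    solution = set()                      # anticipatory (first-stage) solution
    while True:
        expensive = [d for d in demands
                     if augmentation_cost(solution, d) > threshold]
        if not expensive:
            return solution               # every single demand is now cheap
        solution = augment(solution, expensive[0])

# Toy instantiation with a set-cover flavour: a demand is a set of unit-cost
# elements, and augmenting simply buys the missing elements of one demand.
cost = lambda sol, d: len(set(d) - sol)
aug = lambda sol, d: sol | set(d)
print(thresholded_cover([{1, 2}, {2, 3}, {4}], cost, aug, threshold=0))
# {1, 2, 3, 4}
```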

    On Approximability of Steiner Tree in ℓ_p-metrics

    In the Continuous Steiner Tree problem (CST), we are given as input a set of points (called terminals) in a metric space and ask for the minimum-cost tree connecting them. Additional points (called Steiner points) from the metric space can be introduced as nodes in the solution. In the Discrete Steiner Tree problem (DST), we are given, in addition to the terminals, a set of facilities, and any solution tree connecting the terminals can only contain Steiner points from this set of facilities. Trevisan [SICOMP'00] showed that CST and DST are APX-hard when the input lies in the ℓ_1-metric (and Hamming metric). Chlebík and Chlebíková [TCS'08] showed that DST is NP-hard to approximate to a factor of 96/95 ≈ 1.01 in the graph metric (and consequently the ℓ_∞-metric). Prior to this work, it was unclear if CST and DST are APX-hard in essentially every other popular metric! In this work, we prove that DST is APX-hard in every ℓ_p-metric. We also prove that CST is APX-hard in the ℓ_∞-metric. Finally, we relate CST and DST, showing a general reduction from CST to DST in ℓ_p-metrics. As an immediate consequence, this yields a 1.39-approximation polynomial-time algorithm for CST in ℓ_p-metrics.
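    To keep the CST/DST distinction concrete, here is a purely definitional brute force for tiny DST instances: for every subset of facilities, the cheapest tree on the terminals plus that subset in a metric space is the minimum spanning tree of the complete graph on those points, so the DST optimum is the best such MST. This is exponential and illustrative only (function names and the networkx dependency are assumptions).

```python
# Brute-force DST on tiny instances: enumerate facility subsets and take
# the cheapest MST over terminals plus the chosen Steiner points.
from itertools import combinations
import networkx as nx

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def dst_brute_force(terminals, facilities, dist=l1):
    best_cost, best_F = float("inf"), ()
    for r in range(len(facilities) + 1):
        for F in combinations(facilities, r):
            nodes = list(terminals) + list(F)
            K = nx.Graph()
            K.add_weighted_edges_from(
                (a, b, dist(a, b)) for a, b in combinations(nodes, 2))
            T = nx.minimum_spanning_tree(K)
            cost = T.size(weight="weight")
            if cost < best_cost:
                best_cost, best_F = cost, F
    return best_cost, best_F

# Three terminals in the l1-metric; allowing the facility (1, 0) as a
# Steiner point drops the optimum from 5 to 4.
print(dst_brute_force([(0, 0), (2, 0), (1, 2)], [(1, 0)]))  # (4, ((1, 0),))
```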

    Isolating Cuts, (Bi-)Submodularity, and Faster Algorithms for Connectivity

    Li and Panigrahi [Jason Li and Debmalya Panigrahi, 2020], in recent work, obtained the first deterministic algorithm for the global minimum cut of a weighted undirected graph that runs in time o(mn). They introduced an elegant and powerful technique to find isolating cuts for a terminal set in a graph via a small number of s-t minimum cut computations. In this paper we generalize their isolating cut approach to the abstract setting of symmetric bisubmodular functions (which also capture symmetric submodular functions). Our generalization to bisubmodularity is motivated by applications to element connectivity and vertex connectivity. Utilizing the general framework and other ideas, we obtain significantly faster randomized algorithms for computing global (and subset) connectivity in a number of settings, including hypergraphs, element connectivity and vertex connectivity in graphs, and symmetric submodular functions.
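    For intuition on the isolating-cuts primitive: with k terminals, about log k s-t minimum cut computations, one per bit position of the terminal indices, already produce cuts separating every pair of terminals. The sketch below implements only this bit-partition phase (recovering each terminal's isolating cut, the second phase of Li and Panigrahi, is omitted); the super-terminal construction and names are assumptions, not the paper's code.

```python
# Bit-partition phase of the isolating-cuts technique: for each bit j,
# split the terminals by their j-th index bit and compute one s-t
# minimum cut between the two groups via auxiliary super-terminals.
import networkx as nx

def bit_partition_cuts(G, terminals):
    cuts = []
    for j in range(max(1, (len(terminals) - 1).bit_length())):
        A = [t for i, t in enumerate(terminals) if (i >> j) & 1]
        B = [t for i, t in enumerate(terminals) if not (i >> j) & 1]
        if not A or not B:
            continue
        H = G.copy()
        for a in A:
            H.add_edge("s", a)            # no capacity attribute = infinite
        for b in B:
            H.add_edge(b, "t")
        value, (S, T) = nx.minimum_cut(H, "s", "t")
        cuts.append((value, S - {"s"}, T - {"t"}))
    return cuts

G = nx.cycle_graph(6)
nx.set_edge_attributes(G, 1, "capacity")
for value, S, T in bit_partition_cuts(G, [0, 2, 4]):
    print(value, sorted(S), sorted(T))    # each cut has value 2 on the 6-cycle
```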

    Statistical mechanics approaches to optimization and inference

    Nowadays, methodologies typical of statistical physics are successfully applied to a huge set of problems arising from different research fields. In this thesis I will propose several statistical mechanics based models able to deal with two types of problems: optimization and inference problems. The intrinsic difficulty that characterizes both is that, due to the hard combinatorial nature of optimization and inference, finding exact solutions would require impractical computations: in almost all cases, the time needed to perform these calculations scales exponentially with the relevant parameters of the system. While combinatorial optimization addresses the problem of finding a configuration of variables that minimizes/maximizes an objective function, inference seeks a posteriori the most plausible assignment of a set of variables given partial knowledge of the system. Both problems can be re-phrased in a statistical mechanics framework where elementary components of a physical system interact according to the constraints of the original problem. The information at our disposal is encoded in the Boltzmann distribution of the new variables which, if properly investigated, provides the solutions to the original problems. As a consequence, the methodologies originally adopted in statistical mechanics to study and, eventually, approximate the Boltzmann distribution can be fruitfully applied to inference and optimization problems. The structure of the thesis follows the path covered during the three years of my Ph.D. First, I will propose a set of combinatorial optimization problems on graphs, the Prize-Collecting and the Packing of Steiner Trees problems. The tools used to face these hard problems rely on the zero-temperature implementation of the Belief Propagation algorithm, called the Max-Sum algorithm. The second set of problems proposed in this thesis falls under the name of linear estimation problems. One of them, the compressed sensing problem, will guide us in the modelling of these problems within a Bayesian framework, along with the introduction of a powerful algorithm known as Expectation Propagation (or Expectation Consistent in statistical physics). I will propose a similar approach to other challenging problems: the inference of metabolic fluxes, the inverse problem of electro-encephalography, and the reconstruction of tomographic images.
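    The Boltzmann re-phrasing sketched above has a standard schematic form, reproduced here as generic background (standard notation, not necessarily the thesis's): the objective becomes an energy function, and the zero-temperature limit underlying Max-Sum concentrates the distribution on optimal configurations.

```latex
P_\beta(\mathbf{x}) = \frac{e^{-\beta E(\mathbf{x})}}{Z(\beta)},
\qquad
Z(\beta) = \sum_{\mathbf{x}} e^{-\beta E(\mathbf{x})},
\qquad
\lim_{\beta \to \infty} P_\beta \ \text{concentrates on} \ \operatorname*{arg\,min}_{\mathbf{x}} E(\mathbf{x}).
```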

    Greedy Algorithms for Online Survivable Network Design

    In an instance of the network design problem, we are given a graph G=(V,E), an edge-cost function c: E -> R^{>=0}, and a connectivity criterion. The goal is to find a minimum-cost subgraph H of G that meets the connectivity requirements. An important problem in this family is the survivable network design problem (SNDP): given non-negative integers r_{uv} for each pair u,v in V, the solution subgraph H should contain r_{uv} edge-disjoint paths for each pair u and v. While this problem is known to admit good approximation algorithms in the offline case, it is much harder in the online setting. Gupta, Krishnaswamy, and Ravi [Gupta et al., 2012] (STOC '09) were the first to consider the online survivable network design problem. They give an algorithm with competitive ratio O(k log^3 n), where k = max_{u,v} r_{uv}. Note that the competitive ratio of the algorithm by Gupta et al. grows linearly in k. Since then, an important open problem in the online community [Naor et al., 2011; Gupta et al., 2012] has been whether the linear dependence on k can be reduced to a logarithmic one. Consider an online greedy algorithm that connects every demand by adding a minimum-cost set of edges to H. Surprisingly, we show that this greedy algorithm significantly improves the competitive ratio when a congestion of 2 is allowed on the edges or when the model is stochastic. While our algorithm is fairly simple, our analysis requires a deep understanding of k-connected graphs. In particular, we prove that the greedy algorithm is O(log^2 n log k)-competitive if one satisfies every demand between u and v by r_{uv}/2 edge-disjoint paths. The spirit of our result is similar to the work of Chuzhoy and Li [Chuzhoy and Li, 2012] (FOCS '12), in which the authors give a polylogarithmic approximation algorithm for edge-disjoint paths with congestion 2. Moreover, we study the greedy algorithm in the online stochastic setting. We consider the i.i.d. model, where each online demand is drawn from a single known probability distribution; the unknown i.i.d. model, where every demand is drawn from a single but unknown probability distribution; and the prophet model, in which online demands are drawn from (possibly) different probability distributions. Through a different analysis, we prove that a similar greedy algorithm is constant-competitive for the i.i.d. and prophet models. Also, the greedy algorithm is O(log n)-competitive for the unknown i.i.d. model, which is almost tight due to the lower bound of [Garg et al., 2008] for single connectivity.
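    The sketch below shows the online greedy in its simplest, single-connectivity form: when a demand (u, v) arrives, buy a cheapest set of edges connecting u and v, where already-bought edges are free. It is a minimal illustration under assumed names and data structures; the paper's actual algorithm handles r_{uv} > 1 with congestion 2, which is not reproduced here.

```python
# Online greedy for connectivity-1 demands: route each arriving demand
# along a shortest path in which previously bought edges cost nothing,
# then buy the new edges on that path.
import networkx as nx

def online_greedy(G, demands, cost="cost"):
    bought, total = set(), 0.0
    for u, v in demands:
        # Residual weight: edges already in the solution are free.
        w = lambda a, b, d: 0.0 if frozenset((a, b)) in bought else d[cost]
        path = nx.shortest_path(G, u, v, weight=w)
        for a, b in zip(path, path[1:]):
            e = frozenset((a, b))
            if e not in bought:
                bought.add(e)
                total += G[a][b][cost]
    return bought, total

G = nx.cycle_graph(4)                     # 0-1-2-3-0, unit costs
nx.set_edge_attributes(G, 1.0, "cost")
edges, total = online_greedy(G, [(0, 2), (1, 3)])
print(total)                              # 3.0: the second demand reuses a bought edge
```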