
    Improved Hardness for Cut, Interdiction, and Firefighter Problems

    We study variants of the classic s-t cut problem and prove the following improved hardness results assuming the Unique Games Conjecture (UGC).
    * For Length-Bounded Cut and Shortest Path Interdiction, we show that both problems are hard to approximate within any constant factor, even if we allow bicriteria approximation. If we want to cut vertices or the graph is directed, our hardness ratio for Length-Bounded Cut matches the best approximation ratio up to a constant. Previously, the best hardness ratio was 1.1377 for Length-Bounded Cut and 2 for Shortest Path Interdiction.
    * For any constant k >= 2 and epsilon > 0, we show that Directed Multicut with k source-sink pairs is hard to approximate within a factor k - epsilon. This matches the trivial k-approximation algorithm. By a simple reduction, our result for k = 2 implies that Directed Multiway Cut with two terminals (also known as s-t Bicut) is hard to approximate within a factor 2 - epsilon, matching the trivial 2-approximation algorithm.
    * Assuming a variant of the UGC (implied by another variant of Bansal and Khot), we prove that it is hard to approximate Resource Minimization Fire Containment within any constant factor. Previously, the best hardness ratio was 2. For directed layered graphs with b layers, our hardness ratio Omega(log b) matches the best approximation algorithm.
    Our results are based on a general method of converting an integrality gap instance to a length-control dictatorship test for variants of the s-t cut problem, which may be useful for other problems.
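
    As a side note, here is a minimal Python/networkx sketch (ours, not from the paper; the helper name trivial_k_approx_multicut is an assumption) of the trivial k-approximation for Directed Multicut mentioned above: take a minimum s_i-t_i edge cut for each pair independently and output the union. Each per-pair cut costs at most the optimal multicut, so the union is within a factor k; edges are treated as having unit capacity.

    import networkx as nx

    def trivial_k_approx_multicut(G, pairs):
        # Union of per-pair minimum s-t edge cuts. Each individual cut is no
        # larger than the optimal multicut (which also separates that pair),
        # so the union is a k-approximation for k source-sink pairs.
        cut = set()
        for s, t in pairs:
            if nx.has_path(G, s, t):
                cut |= nx.minimum_edge_cut(G, s, t)
        return cut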

    Directed Multicut with linearly ordered terminals

    Motivated by an application in network security, we investigate the following "linear" case of Directed Multicut. Let $G$ be a directed graph which includes some distinguished vertices $t_1, \ldots, t_k$. What is the size of the smallest edge cut which eliminates all paths from $t_i$ to $t_j$ for all $i < j$? We show that this problem is fixed-parameter tractable when parameterized by the cutset size $p$, via an algorithm running in $O(4^p p n^4)$ time. Comment: 12 pages, 1 figure.
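
    To make the "linear" cut condition concrete, here is a small verification sketch in Python/networkx (our illustration; the helper name is_linear_multicut is ours, and this is not the paper's FPT algorithm): it checks that a candidate edge set destroys every t_i -> t_j path with i < j.

    import networkx as nx
    from itertools import combinations

    def is_linear_multicut(G, terminals, cut_edges):
        # terminals is the ordered list t_1, ..., t_k; the cut is feasible if,
        # after removing cut_edges, no t_i can still reach a later t_j.
        H = G.copy()
        H.remove_edges_from(cut_edges)
        return all(not nx.has_path(H, ti, tj)
                   for ti, tj in combinations(terminals, 2))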

    Pseudo-Separation for Assessment of Structural Vulnerability of a Network

    Based upon the idea that network functionality is impaired if two nodes in a network are sufficiently separated in terms of a given metric, we introduce two combinatorial pseudocut problems generalizing the classical min-cut and multi-cut problems. We expect the pseudocut problems will find broad relevance to the study of network reliability. We comprehensively analyze the computational complexity of the pseudocut problems and provide three approximation algorithms for these problems. Motivated by applications in communication networks with strict Quality-of-Service (QoS) requirements, we demonstrate the utility of the pseudocut problems by proposing a targeted vulnerability assessment for the structure of communication networks using QoS metrics; we perform experimental evaluations of our proposed approximation algorithms in this context.
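
    One plausible reading of the separation criterion (not the paper's formal definition; the function name pseudo_separates and the threshold semantics are our assumptions) is that a node pair counts as pseudo-separated once its distance under the given QoS metric exceeds a threshold after the cut edges are removed, as in this Python/networkx sketch.

    import networkx as nx

    def pseudo_separates(G, removed_edges, s, t, threshold, metric="delay"):
        # Assumed reading: after deleting removed_edges, the pair (s, t) is
        # pseudo-separated if its distance under the given metric exceeds the
        # threshold (or t becomes unreachable altogether).
        H = G.copy()
        H.remove_edges_from(removed_edges)
        try:
            return nx.shortest_path_length(H, s, t, weight=metric) > threshold
        except nx.NetworkXNoPath:
            return True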

    Network Design Problems with Bounded Distances via Shallow-Light Steiner Trees

    In a directed graph $G$ with non-correlated edge lengths and costs, the network design problem with bounded distances asks for a cost-minimal spanning subgraph subject to a length bound for all node pairs. We give a bi-criteria $(2+\varepsilon, O(n^{0.5+\varepsilon}))$-approximation for this problem. This improves on the currently best known linear approximation bound, at the cost of violating the distance bound by a factor of at most $2+\varepsilon$. In the course of proving this result, the related problem of directed shallow-light Steiner trees arises as a subproblem. In the context of directed graphs, approximations to this problem have been elusive. We present the first non-trivial result by proposing a $(1+\varepsilon, O(|R|^{\varepsilon}))$-approximation, where $R$ is the set of terminals. Finally, we show how to apply our results to obtain an $(\alpha+\varepsilon, O(n^{0.5+\varepsilon}))$-approximation for light-weight directed $\alpha$-spanners. For this, no non-trivial approximation algorithm has been known before. All running times depend on $n$ and $\varepsilon$ and are polynomial in $n$ for any fixed $\varepsilon > 0$.
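
    For intuition about the bi-criteria notation, the sketch below (ours; the attribute names "length" and "cost", the uniform bound L, and the helper bicriteria_quality are assumptions) evaluates a candidate spanning subgraph H against a common length bound L. The first value is the worst factor by which a pair's distance in H exceeds L, the second is the total edge cost of H, matching the (distance violation, cost) order used above.

    import networkx as nx

    def bicriteria_quality(G, H, L, length="length", cost="cost"):
        # Worst distance-bound violation over all node pairs, plus total cost.
        dist = dict(nx.all_pairs_dijkstra_path_length(H, weight=length))
        stretch = max(dist.get(u, {}).get(v, float("inf")) / L
                      for u in G for v in G if u != v)
        total_cost = sum(d[cost] for _, _, d in H.edges(data=True))
        return stretch, total_cost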

    Approximability of Connected Factors

    Finding a d-regular spanning subgraph (or d-factor) of a graph is easy by Tutte's reduction to the matching problem. By the same reduction, it is easy to find a minimal or maximal d-factor of a graph. However, if we require that the d-factor is connected, these problems become NP-hard: finding a minimal connected 2-factor is just the traveling salesman problem (TSP). Given a complete graph with edge weights that satisfy the triangle inequality, we consider the problem of finding a minimal connected d-factor. We give a 3-approximation for all d and improve this to an (r+1)-approximation for even d, where r is the approximation ratio of the TSP. This yields a 2.5-approximation for even d. The same algorithm yields an (r+1)-approximation for the directed version of the problem, where r is the approximation ratio of the asymmetric TSP. We also show that none of these minimization problems can be approximated better than the corresponding TSP. Finally, for the decision problem of deciding whether a given graph contains a connected d-factor, we extend known hardness results. Comment: To appear in the proceedings of WAOA 201
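
    As a quick illustration of the object being optimized, this Python/networkx sketch (ours; undirected case only, and the helper name is_connected_d_factor is an assumption) checks whether a subgraph H of G is a connected d-factor, i.e. spanning, d-regular, and connected.

    import networkx as nx

    def is_connected_d_factor(G, H, d):
        # H must span all of G's vertices, use only edges of G,
        # be d-regular, and be connected.
        if set(H.nodes) != set(G.nodes):
            return False
        if not all(G.has_edge(u, v) for u, v in H.edges):
            return False
        if any(deg != d for _, deg in H.degree()):
            return False
        return nx.is_connected(H)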

    The Densest k-Subhypergraph Problem

    The Densest $k$-Subgraph (D$k$S) problem, and its corresponding minimization problem Smallest $p$-Edge Subgraph (S$p$ES), have come to play a central role in approximation algorithms. This is due both to their practical importance, and their usefulness as a tool for solving and establishing approximation bounds for other problems. These two problems are not well understood, and it is widely believed that they do not admit a subpolynomial approximation ratio (although the best known hardness results do not rule this out). In this paper we generalize both D$k$S and S$p$ES from graphs to hypergraphs. We consider the Densest $k$-Subhypergraph problem (given a hypergraph $(V, E)$, find a subset $W \subseteq V$ of $k$ vertices so as to maximize the number of hyperedges contained in $W$) and define the Minimum $p$-Union problem (given a hypergraph, choose $p$ of the hyperedges so as to minimize the number of vertices in their union). We focus in particular on the case where all hyperedges have size 3, as this is the simplest non-graph setting. For this case we provide an $O(n^{4(4-\sqrt{3})/13 + \epsilon}) \leq O(n^{0.697831+\epsilon})$-approximation (for arbitrary constant $\epsilon > 0$) for Densest $k$-Subhypergraph and an $\tilde{O}(n^{2/5})$-approximation for Minimum $p$-Union. We also give an $O(\sqrt{m})$-approximation for Minimum $p$-Union in general hypergraphs. Finally, we examine the interesting special case of interval hypergraphs (instances where the vertices are a subset of the natural numbers and the hyperedges are intervals of the line) and prove that both problems admit an exact polynomial time solution on these instances. Comment: 21 pages.
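
    To pin down the objective, here is a brute-force sketch (ours, exponential in k, for tiny instances only; the helper name densest_k_subhypergraph_bruteforce is an assumption) of the Densest k-Subhypergraph value: the number of hyperedges fully contained in a chosen k-vertex set W.

    from itertools import combinations

    def densest_k_subhypergraph_bruteforce(vertices, hyperedges, k):
        # Try every k-vertex set W and count hyperedges e with e a subset of W.
        best_W, best_val = None, -1
        for W in combinations(vertices, k):
            Wset = set(W)
            val = sum(1 for e in hyperedges if set(e) <= Wset)
            if val > best_val:
                best_W, best_val = Wset, val
        return best_W, best_val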

    Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal

    The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most $k$ of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an $O(4^k kmn)$-time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed $k$. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most $O(4^k)$, a so-called kernelization. Since then the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in $k$, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, where we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in $k$. For OCT, the matroid is built to allow us to simulate the computation of the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for only one round) to an approximate odd cycle transversal which it is aiming to shrink to size $k$. The process is randomized with one-sided error exponentially small in $k$, where the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an $O(\sqrt{\log n})$-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size $O(k^{4.5})$, implying a randomized polynomial kernelization. Comment: Minor changes to agree with the SODA 2012 version of the paper.
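
    For reference, the feasibility check behind OCT is a one-liner in Python/networkx (our sketch; the helper name is_odd_cycle_transversal is an assumption): a vertex set S is an odd cycle transversal exactly when deleting it leaves a bipartite graph.

    import networkx as nx

    def is_odd_cycle_transversal(G, S):
        # S hits every odd cycle iff G - S has no odd cycle, i.e. is bipartite.
        H = G.copy()
        H.remove_nodes_from(S)
        return nx.is_bipartite(H)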