
    Parameterized (in)approximability of subset problems

    We discuss approximability and inapproximability in FPT time for a large class of subset problems where a feasible solution $S$ is a subset of the input data and the value of $S$ is $|S|$. The class handled encompasses many well-known graph, set, or satisfiability problems such as Dominating Set, Vertex Cover, Set Cover, Independent Set, Feedback Vertex Set, etc. We first introduce the notion of intersective approximability, which generalizes that of safe approximability, and show strong parameterized inapproximability results for many of the subset problems handled. Then, we study the approximability of these problems with respect to the dual parameter $n-k$, where $n$ is the size of the instance and $k$ the standard parameter. More precisely, we show that under such a parameterization, many of these problems, while W[$\cdot$]-hard, admit parameterized approximation schemata. Comment: 7 pages.
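    To make the setting concrete, here is a minimal Python sketch (not taken from the paper) of Vertex Cover viewed as a subset problem: the solution $S$ is a set of vertices, its value is $|S|$, the standard parameter is $k = |S|$, and the dual parameter is $n - k$. The exhaustive search is purely illustrative and all names are hypothetical.

```python
# Illustrative sketch (not from the paper): Vertex Cover as a subset problem.
# The solution S is a subset of the vertices, its value is |S|, the standard
# parameter is k = |S|, and the dual parameter is n - k.

import itertools

def is_vertex_cover(edges, S):
    """Check that every edge has at least one endpoint in S."""
    return all(u in S or v in S for u, v in edges)

def min_vertex_cover_size(vertices, edges):
    """Naive exact search, for illustration only (exponential time)."""
    n = len(vertices)
    for k in range(n + 1):                        # standard parameter k
        for S in itertools.combinations(vertices, k):
            if is_vertex_cover(edges, set(S)):
                return k, n - k                   # (k, dual parameter n - k)
    return n, 0

# Example: a triangle needs 2 vertices to cover all edges, so k = 2 and n - k = 1.
print(min_vertex_cover_size([1, 2, 3], [(1, 2), (2, 3), (1, 3)]))
```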

    More applications of the d-neighbor equivalence: acyclicity and connectivity constraints

    In this paper, we design a framework to obtain efficient algorithms for several problems with a global constraint (acyclicity or connectivity) such as Connected Dominating Set, Node Weighted Steiner Tree, Maximum Induced Tree, Longest Induced Path, and Feedback Vertex Set. We design a meta-algorithm that solves all these problems and whose running time is upper bounded by $2^{O(k)}\cdot n^{O(1)}$, $2^{O(k \log(k))}\cdot n^{O(1)}$, $2^{O(k^2)}\cdot n^{O(1)}$ and $n^{O(k)}$, where $k$ is respectively the clique-width, $\mathbb{Q}$-rank-width, rank-width and maximum induced matching width of a given decomposition. Our meta-algorithm simplifies and unifies the known algorithms for each of the parameters, and its running time asymptotically matches the running times of the best known algorithms for basic NP-hard problems such as Vertex Cover and Dominating Set. Our framework is based on the $d$-neighbor equivalence defined in [Bui-Xuan, Telle and Vatshelle, TCS 2013]. The results we obtain highlight the importance of this equivalence relation for the algorithmic applications of width measures. We also prove that our framework can be useful for W[1]-hard problems parameterized by clique-width such as Max Cut and Maximum Minimal Cut. For these latter problems, we obtain $n^{O(k)}$, $n^{O(k)}$ and $n^{2^{O(k)}}$ time algorithms, where $k$ is respectively the clique-width, the $\mathbb{Q}$-rank-width and the rank-width of the input graph.

    More Applications of the d-Neighbor Equivalence: Connectivity and Acyclicity Constraints

    In this paper, we design a framework to obtain efficient algorithms for several problems with a global constraint (acyclicity or connectivity) such as Connected Dominating Set, Node Weighted Steiner Tree, Maximum Induced Tree, Longest Induced Path, and Feedback Vertex Set. For all these problems, we obtain 2^O(k) * n^O(1), 2^O(k log(k)) * n^O(1), 2^O(k^2) * n^O(1) and n^O(k) time algorithms parameterized respectively by clique-width, Q-rank-width, rank-width and maximum induced matching width. Our approach simplifies and unifies the known algorithms for each of the parameters and asymptotically matches the running times of the best algorithms for basic NP-hard problems such as Vertex Cover and Dominating Set. Our framework is based on the d-neighbor equivalence defined in [Bui-Xuan, Telle and Vatshelle, TCS 2013]. The results we obtain highlight the importance and the generalizing power of this equivalence relation on width measures. We also prove that this equivalence relation can be useful for Max Cut, a W[1]-hard problem parameterized by clique-width. For this latter problem, we obtain n^O(k), n^O(k) and n^(2^O(k)) time algorithms parameterized by clique-width, Q-rank-width and rank-width, respectively.
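    Both entries above rest on the d-neighbor equivalence. The Python sketch below encodes that relation as I read the definition in [Bui-Xuan, Telle and Vatshelle, TCS 2013]: two subsets X and Y of one side A of a cut are d-neighbor equivalent when every vertex outside A has the same number of neighbors in X as in Y, counted only up to the threshold d. The function name and graph encoding are my own, not the papers'.

```python
# Hedged sketch of the d-neighbor equivalence as I understand it from
# [Bui-Xuan, Telle and Vatshelle, TCS 2013]: X, Y (subsets of a cut side A) are
# d-neighbor equivalent if every vertex outside A sees the same number of
# neighbors in X as in Y, where counts are capped at d.

def d_neighbor_equivalent(adj, A, X, Y, d):
    """adj: dict vertex -> set of neighbors; A: one side of the cut;
    X, Y: subsets of A; d: counting threshold."""
    outside = set(adj) - set(A)
    return all(
        min(d, len(adj[v] & set(X))) == min(d, len(adj[v] & set(Y)))
        for v in outside
    )

# Tiny example: a path 1-2-3-4 with the cut side A = {1, 2}.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(d_neighbor_equivalent(adj, {1, 2}, {2}, {1, 2}, d=1))  # True: vertex 3 sees >= 1 neighbor in both
print(d_neighbor_equivalent(adj, {1, 2}, {1}, {2}, d=1))     # False: vertex 3 sees 0 vs 1
```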

    The Price of Information in Combinatorial Optimization

    Consider a network design application where we wish to lay down a minimum-cost spanning tree in a given graph; however, we only have stochastic information about the edge costs. To learn the precise cost of any edge, we have to conduct a study that incurs a price. Our goal is to find a spanning tree while minimizing the disutility, which is the sum of the tree cost and the total price that we spend on the studies. In a different application, each edge gives a stochastic reward value. Our goal is to find a spanning tree while maximizing the utility, which is the tree reward minus the prices that we pay. Situations such as the above two often arise in practice where we wish to find a good solution to an optimization problem, but we start with only some partial knowledge about the parameters of the problem. The missing information can be found only after paying a probing price, which we call the price of information. What strategy should we adopt to optimize our expected utility/disutility? A classical example of the above setting is Weitzman's "Pandora's box" problem, where we are given probability distributions on the values of $n$ independent random variables. The goal is to choose a single variable with a large value, but we can find the actual outcomes only after paying a price. Our work is a generalization of this model to other combinatorial optimization problems such as matching, set cover, facility location, and prize-collecting Steiner tree. We give a technique that reduces such problems to their non-price counterparts, and use it to design exact/approximation algorithms to optimize our utility/disutility. Our techniques extend to situations where there are additional constraints on what parameters can be probed or when we can simultaneously probe a subset of the parameters. Comment: SODA 201
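    The abstract builds on Weitzman's Pandora's box problem, whose classical solution is an index rule: each box gets a reservation value sigma satisfying E[(X - sigma)^+] = opening cost, boxes are opened in decreasing order of sigma, and the search stops once the best value found beats every remaining index. Below is a hedged Python sketch of that rule for discrete distributions; it illustrates the referenced model, not the paper's algorithm, and all identifiers are mine.

```python
# Hedged sketch of Weitzman's index rule for the Pandora's box problem that the
# abstract takes as its starting point. Discrete value distributions only; the
# names and interfaces are illustrative, not the paper's.

def reservation_value(values_probs, cost):
    """Solve E[(X - sigma)^+] = cost for sigma by bisection (assumes cost <= E[X])."""
    def expected_surplus(sigma):
        return sum(p * max(v - sigma, 0.0) for v, p in values_probs)
    lo, hi = 0.0, max(v for v, _ in values_probs)
    for _ in range(100):  # expected_surplus is non-increasing in sigma
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if expected_surplus(mid) > cost else (lo, mid)
    return (lo + hi) / 2

def pandora(boxes, sample):
    """boxes: list of (values_probs, cost); sample(i) reveals box i's value.
    Open boxes in decreasing reservation value, stop as soon as the best value
    seen beats every remaining index, and return the realized utility."""
    sigma = [reservation_value(vp, c) for vp, c in boxes]
    order = sorted(range(len(boxes)), key=lambda i: sigma[i], reverse=True)
    best, paid = 0.0, 0.0
    for i in order:
        if best >= sigma[i]:
            break  # stopping rule: nothing left is worth opening
        paid += boxes[i][1]
        best = max(best, sample(i))
    return best - paid

# Example: two boxes worth 10 with prob. 1/2 (else 0), opening cost 1 each.
# The reservation value is 8; the first box reveals 10, so we stop and get 10 - 1 = 9.
boxes = [([(10.0, 0.5), (0.0, 0.5)], 1.0)] * 2
print(pandora(boxes, sample=lambda i: 10.0))  # prints 9.0
```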

    On Structural Parameterizations of Hitting Set: Hitting Paths in Graphs Using 2-SAT

    Hitting Set is a classic problem in combinatorial optimization. Its input consists of a set system F over a finite universe U and an integer t; the question is whether there is a set of t elements that intersects every set in F. The Hitting Set problem parameterized by the size of the solution is a well-known W[2]-complete problem in parameterized complexity theory. In this paper we investigate the complexity of Hitting Set under various structural parameterizations of the input. Our starting point is the folklore result that Hitting Set is polynomial-time solvable if there is a tree T on vertex set U such that the sets in F induce connected subtrees of T. We consider the case that there is a treelike graph with vertex set U such that the sets in F induce connected subgraphs; the parameter of the problem is a measure of how treelike the graph is. Our main positive result is an algorithm that, given a graph G with cyclomatic number k, a collection P of simple paths in G, and an integer t, determines in time 2^{5k} (|G| + |P|)^{O(1)} whether there is a vertex set of size t that hits all paths in P. It is based on a connection to the 2-SAT problem in multiple-valued logic. For other parameterizations we derive W[1]-hardness and para-NP-completeness results. Comment: Presented at the 41st International Workshop on Graph-Theoretic Concepts in Computer Science, WG 2015. (The statement of Lemma 4 was corrected in this update.)
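    For readers who want the problem statement in executable form, the following naive Python sketch checks whether some vertex set of size t hits every path in P. It is a brute force for illustration only; the paper's 2^{5k}-time algorithm via 2-SAT in multiple-valued logic is not reproduced here, and the helper names are hypothetical.

```python
# Illustrative brute force for the question the paper answers (is there a vertex
# set of size t that hits every path in P?). This naive search is exponential
# and is NOT the paper's 2-SAT-based algorithm.

import itertools

def hits_all_paths(paths, S):
    """Each path is a sequence of vertices; S hits it if they share a vertex."""
    return all(S & set(path) for path in paths)

def has_hitting_set(vertices, paths, t):
    return any(hits_all_paths(paths, set(S))
               for S in itertools.combinations(vertices, t))

# Example: two paths that both pass through vertex 2 are hit by {2} alone.
print(has_hitting_set([1, 2, 3, 4], [[1, 2, 3], [4, 2]], t=1))  # True
```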

    Hardness of Vertex Deletion and Project Scheduling

    Assuming the Unique Games Conjecture, we show strong inapproximability results for two natural vertex deletion problems on directed graphs: for any integer $k \geq 2$ and arbitrarily small $\epsilon > 0$, the Feedback Vertex Set problem and the DAG Vertex Deletion problem are inapproximable within a factor $k - \epsilon$ even on graphs where the vertices can be almost partitioned into $k$ solutions. This gives a more structured and therefore stronger UGC-based hardness result for the Feedback Vertex Set problem that is also simpler (albeit using the "It Ain't Over Till It's Over" theorem) than the previous hardness result. In comparison to the classical Feedback Vertex Set problem, the DAG Vertex Deletion problem has received little attention and, although we think it is a natural and interesting problem, the main motivation for our inapproximability result stems from its relationship with the classical Discrete Time-Cost Tradeoff Problem. More specifically, our results imply that the deadline version is NP-hard to approximate within any constant assuming the Unique Games Conjecture. This explains the difficulty in obtaining good approximation algorithms for that problem and further motivates previous alternative approaches such as bicriteria approximations. Comment: 18 pages, 1 figure
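    As a small aid to the problem definition, here is an illustrative Python feasibility check (not from the paper): a vertex set S is feasible for directed Feedback Vertex Set exactly when deleting S leaves an acyclic digraph, which Kahn's algorithm can verify. The encoding and names are mine; this is only a verifier, not an approximation algorithm.

```python
# Hedged illustration: check whether deleting a vertex set S from a digraph
# leaves a DAG, i.e. whether S is a feasible directed Feedback Vertex Set.
# Uses Kahn's algorithm: the remaining digraph is acyclic iff every vertex drains.

def is_acyclic_after_deletion(arcs, S):
    """arcs: list of directed edges (u, v); S: set of vertices to delete."""
    nodes = {x for arc in arcs for x in arc} - set(S)
    indeg = {v: 0 for v in nodes}
    out = {v: [] for v in nodes}
    for u, v in arcs:
        if u in nodes and v in nodes:
            indeg[v] += 1
            out[u].append(v)
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for w in out[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(nodes)

# A 3-cycle 1 -> 2 -> 3 -> 1 becomes acyclic once vertex 3 is deleted.
print(is_acyclic_after_deletion([(1, 2), (2, 3), (3, 1)], {3}))  # True
```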