
    An Approximate Kernel for Connected Feedback Vertex Set

    The Feedback Vertex Set problem is a fundamental computational problem which has been the subject of intensive study in various domains of algorithmics. In this problem, one is given an undirected graph G and an integer k as input. The objective is to determine whether at most k vertices can be deleted from G such that the resulting graph is acyclic. The study of preprocessing algorithms for this problem has a long and rich history, culminating in the quadratic kernelization of Thomassé [SODA 2010]. However, it is known that when the solution is required to induce a connected subgraph (such a set is called a connected feedback vertex set), a polynomial kernelization is unlikely to exist and the problem is NP-hard to approximate below a factor of 2 (assuming the Unique Games Conjecture). In this paper, we show that if one is interested in only preserving approximate solutions (even of quality arbitrarily close to the optimum), then there is a drastic improvement in our ability to preprocess this problem. Specifically, we prove that for every fixed 0 < ε < 1, graph G, and k ∈ N, the following holds: there is a polynomial time computable graph G' of size k^{O(1)} such that for every c ≥ 1, any c-approximate connected feedback vertex set of G' of size at most k is a c·(1+ε)-approximate connected feedback vertex set of G. Our result adds to the set of approximate kernelization algorithms introduced by Lokshtanov et al. [STOC 2017]. As a consequence of our main result, we show that Connected Feedback Vertex Set can be approximated within a factor min{OPT^{O(1)}, n^{1-δ}} in polynomial time for some δ > 0.
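
    As a concrete aid to the definition used above, here is a minimal verification sketch (our own illustration, not taken from the paper; the function names and the adjacency-list representation are ours): a set S is a connected feedback vertex set of G when the induced subgraph G[S] is connected and G - S is acyclic.

```python
# Minimal sketch (our illustration, not from the paper): verify that S is a
# connected feedback vertex set of a simple undirected graph G, i.e. G[S] is
# connected and G - S contains no cycle. Graphs are dicts mapping a vertex to
# the set of its neighbours.

def induces_connected_subgraph(graph, S):
    """Check that the subgraph induced by the non-empty set S is connected."""
    S = set(S)
    if not S:
        return False
    start = next(iter(S))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in graph[v]:
            if w in S and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == S

def acyclic_after_deletion(graph, S):
    """Check that G - S is a forest, via DFS with parent tracking."""
    S = set(S)
    seen = set()
    for root in graph:
        if root in S or root in seen:
            continue
        seen.add(root)
        stack = [(root, None)]
        while stack:
            v, parent = stack.pop()
            for w in graph[v]:
                if w in S or w == parent:
                    continue          # deleted vertex, or the tree edge back up
                if w in seen:
                    return False      # an extra edge inside one component: cycle
                seen.add(w)
                stack.append((w, v))
    return True

def is_connected_fvs(graph, S):
    return induces_connected_subgraph(graph, S) and acyclic_after_deletion(graph, S)

# Example: a 4-cycle 0-1-2-3 with the chord 0-2.
G = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(is_connected_fvs(G, {0, 2}))   # True: G[{0,2}] is an edge, G - {0,2} is acyclic
print(is_connected_fvs(G, {3}))      # False: the triangle 0-1-2 survives
```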

    Approximate Turing Kernelization for Problems Parameterized by Treewidth

    We extend the notion of lossy kernelization, introduced by Lokshtanov et al. [STOC 2017], to approximate Turing kernelization. An α-approximate Turing kernel for a parameterized optimization problem is a polynomial-time algorithm that, when given access to an oracle that outputs c-approximate solutions in O(1) time, obtains an (α·c)-approximate solution to the considered problem, using calls to the oracle of size at most f(k) for some function f that only depends on the parameter. Using this definition, we show that Independent Set parameterized by treewidth ℓ has a (1+ε)-approximate Turing kernel with O(ℓ^2/ε) vertices, answering an open question posed by Lokshtanov et al. [STOC 2017]. Furthermore, we give (1+ε)-approximate Turing kernels for the following graph problems parameterized by treewidth: Vertex Cover, Edge Clique Cover, Edge-Disjoint Triangle Packing and Connected Vertex Cover. We generalize the result for Independent Set and Vertex Cover, by showing that all graph problems that we will call "friendly" admit (1+ε)-approximate Turing kernels of polynomial size when parameterized by treewidth. We use this to obtain approximate Turing kernels for Vertex-Disjoint H-packing for connected graphs H, Clique Cover, Feedback Vertex Set and Edge Dominating Set.
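
    The treewidth results above rest on dynamic programming over tree decompositions. As a hedged illustration of that style of computation only, and not of the paper's Turing kernelization, the sketch below is the textbook exact dynamic program for Maximum Independent Set on a tree (treewidth 1); the function names and graph representation are ours.

```python
# Textbook dynamic program for Maximum Independent Set on a tree (treewidth 1).
# Shown only to illustrate the bottom-up style of treewidth DPs that the
# results above generalize; this is not the paper's Turing kernelization.
# `tree` is a dict mapping each vertex to the set of its neighbours and is
# assumed to really be a tree.

def max_independent_set_on_tree(tree, root):
    # in_size[v]  = largest independent set of v's subtree that contains v
    # out_size[v] = largest independent set of v's subtree that avoids v
    in_size, out_size = {}, {}
    parent, order, stack = {root: None}, [], [root]
    while stack:                      # iterative DFS to fix a processing order
        v = stack.pop()
        order.append(v)
        for w in tree[v]:
            if w != parent[v]:
                parent[w] = v
                stack.append(w)
    for v in reversed(order):         # children are processed before parents
        children = [w for w in tree[v] if w != parent[v]]
        in_size[v] = 1 + sum(out_size[c] for c in children)
        out_size[v] = sum(max(in_size[c], out_size[c]) for c in children)
    return max(in_size[root], out_size[root])

# Example: the path 0-1-2-3-4 has a maximum independent set of size 3 ({0, 2, 4}).
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(max_independent_set_on_tree(path, 0))   # 3
```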

    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I,k) to a parameterized problem, and outputs another instance (I',k') to the same problem, such that |I'| + k' ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s' to the pre-processed instance (I',k') can be turned in polynomial time into a (c·α)-approximate solution s to the original instance (I,k). Our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.
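
    A classical exact kernel, i.e. the α = 1 case of the definition above, already exhibits the reduce-then-lift pattern: the Buss rules for Vertex Cover. The sketch below is this standard textbook construction, not a result of the paper; the function names and the edge-set representation are ours.

```python
# Classical Buss kernelization for Vertex Cover, i.e. the exact (alpha = 1)
# case of the definition above; a standard textbook construction, not a result
# of this paper. Instances are given as a collection of edges plus a budget k.

def buss_reduce(edges, k):
    """Return (kernel_edges, k', forced) or None if (edges, k) is a NO-instance."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if not high:
            break
        v = high[0]                   # v lies in every vertex cover of size <= k
        forced.add(v)
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return None
    if len(edges) > k * k:            # each of <= k cover vertices covers <= k edges
        return None
    return edges, k, forced

def lift(kernel_cover, forced):
    """Turn a vertex cover of the kernel into one of the original instance."""
    return set(kernel_cover) | forced

# Example: a star with centre 0, five leaves, and budget k = 2.
star = [(0, i) for i in range(1, 6)]
kernel_edges, k_prime, forced = buss_reduce(star, 2)
print(kernel_edges, k_prime, forced)   # set() 1 {0}
print(lift(set(), forced))             # {0}, a vertex cover of the original star
```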

    Tree Deletion Set has a Polynomial Kernel (but no OPT^O(1) approximation)

    In the Tree Deletion Set problem the input is a graph G together with an integer k. The objective is to determine whether there exists a set S of at most k vertices such that G - S is a tree. The problem is NP-complete and even NP-hard to approximate within a factor of OPT^c for any constant c. In this paper we give an O(k^4)-size kernel for the Tree Deletion Set problem. To the best of our knowledge, our result is the first counterexample to the "conventional wisdom" that kernelization algorithms automatically provide approximation algorithms with approximation ratio close to the size of the kernel. An appealing feature of our kernelization algorithm is a new algebraic reduction rule that we use to handle the instances on which Tree Deletion Set is hard to approximate.

    Feedback Vertex Set Inspired Kernel for Chordal Vertex Deletion

    Given a graph G and a parameter k, the Chordal Vertex Deletion (CVD) problem asks whether there exists a subset U ⊆ V(G) of size at most k that hits all induced cycles of size at least 4. The existence of a polynomial kernel for CVD was a well-known open problem in the field of Parameterized Complexity. Recently, Jansen and Pilipczuk resolved this question affirmatively by designing a polynomial kernel for CVD of size O(k^{161} log^{58} k), and asked whether one can design a kernel of size O(k^{10}). While we do not completely resolve this question, we design a significantly smaller kernel of size O(k^{12} log^{10} k), inspired by the O(k^2)-size kernel for Feedback Vertex Set. Furthermore, we introduce the notion of the independence degree of a vertex, which is our main conceptual contribution.
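
    As a sanity check on the definition, U hits all induced cycles of length at least 4 exactly when G - U is chordal. The short sketch below verifies a candidate solution this way using networkx (nx.is_chordal is an existing networkx routine; the helper name and the example are our illustration, not part of the paper).

```python
# Sketch of a solution verifier for Chordal Vertex Deletion using networkx:
# U hits all induced cycles of length >= 4 exactly when G - U is chordal.
import networkx as nx

def is_cvd_solution(G, U, k):
    """Check that |U| <= k and that deleting U from G leaves a chordal graph."""
    if len(U) > k:
        return False
    H = G.copy()
    H.remove_nodes_from(U)
    return nx.is_chordal(H)

# Example: the 4-cycle is not chordal, but deleting any single vertex fixes it.
C4 = nx.cycle_graph(4)
print(nx.is_chordal(C4))             # False
print(is_cvd_solution(C4, {0}, 1))   # True: C4 - {0} is a path, hence chordal
```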

    Meta-Kernelization using Well-Structured Modulators

    Kernelization investigates exact preprocessing algorithms with performance guarantees. The most prevalent type of parameters used in kernelization is the solution size for optimization problems; however, structural parameters have also been successfully used to obtain polynomial kernels for a wide range of problems. Many of these parameters can be defined as the size of a smallest modulator of the given graph into a fixed graph class (i.e., a set of vertices whose deletion puts the graph into the graph class). Such parameters admit the construction of polynomial kernels even when the solution size is large or not applicable. This work follows up on the research on meta-kernelization frameworks in terms of structural parameters. We develop a class of parameters which are based on a more general view on modulators: instead of size, the parameters employ a combination of rank-width and split decompositions to measure structure inside the modulator. This allows us to lift kernelization results from modulator-size to more general parameters, hence providing smaller kernels. We show (i) how such large but well-structured modulators can be efficiently approximated, (ii) how they can be used to obtain polynomial kernels for any graph problem expressible in Monadic Second Order logic, and (iii) how they allow the extension of previous results in the area of structural meta-kernelization.

    Hitting and Harvesting Pumpkins

    The "c-pumpkin" is the graph with two vertices linked by c>0 parallel edges. A c-pumpkin-model in a graph G is a pair A,B of disjoint subsets of vertices of G, each inducing a connected subgraph of G, such that there are at least c edges in G between A and B. We focus on covering and packing c-pumpkin-models in a given graph: On the one hand, we provide an FPT algorithm running in time 2^O(k) n^O(1) deciding, for any fixed c>0, whether all c-pumpkin-models can be covered by at most k vertices. This generalizes known single-exponential FPT algorithms for Vertex Cover and Feedback Vertex Set, which correspond to the cases c=1,2 respectively. On the other hand, we present a O(log n)-approximation algorithm for both the problems of covering all c-pumpkin-models with a smallest number of vertices, and packing a maximum number of vertex-disjoint c-pumpkin-models.Comment: v2: several minor change

    Parameterized Approximation Schemes for Steiner Trees with Small Number of Steiner Vertices

    We study the Steiner Tree problem, in which a set of terminal vertices needs to be connected in the cheapest possible way in an edge-weighted graph. This problem has been extensively studied from the viewpoints of approximation and parameterization. In particular, Steiner Tree is known to be APX-hard on the one hand, and W[2]-hard on the other when parameterized by the number of non-terminals (Steiner vertices) in the optimum solution. In contrast, we give an efficient parameterized approximation scheme (EPAS), which circumvents both hardness results. Moreover, our methods imply the existence of a polynomial size approximate kernelization scheme (PSAKS) for the considered parameter. We further study the parameterized approximability of other variants of Steiner Tree, such as Directed Steiner Tree and Steiner Forest. For neither of these is an EPAS likely to exist for the studied parameter: for Steiner Forest, an easy observation shows that the problem is APX-hard, even if the input graph contains no Steiner vertices. For Directed Steiner Tree, we prove that approximating within any function of the studied parameter is W[1]-hard. Nevertheless, we show that an EPAS exists for Unweighted Directed Steiner Tree, but a PSAKS does not. We also prove that there is an EPAS and a PSAKS for Steiner Forest if, in addition to the number of Steiner vertices, the number of connected components of an optimal solution is considered to be a parameter. Comment: 23 pages, 6 figures. An extended abstract appeared in proceedings of STACS 201