
    Subset feedback vertex set is fixed parameter tractable

    The classical Feedback Vertex Set problem asks, for a given undirected graph G and an integer k, to find a set of at most k vertices that hits all the cycles in the graph G. Feedback Vertex Set has attracted a large amount of research in the parameterized setting, and subsequent kernelization and fixed-parameter algorithms have been a rich source of ideas in the field. In this paper we consider a more general and difficult version of the problem, named Subset Feedback Vertex Set (SUBSET-FVS for short), where an instance comes additionally with a set S ⊆ V of vertices, and we ask for a set of at most k vertices that hits all simple cycles passing through S. Because of its applications in circuit testing and genetic linkage analysis, SUBSET-FVS was studied from the approximation algorithms perspective by Even et al. [SICOMP'00, SIDMA'00]. The question whether the SUBSET-FVS problem is fixed-parameter tractable was posed independently by Kawarabayashi and Saurabh in 2009. We answer this question affirmatively. We begin by showing that this problem is fixed-parameter tractable when parameterized by |S|. Next we present an algorithm which reduces the given instance to 2^k n^O(1) instances with the size of S bounded by O(k^3), using kernelization techniques such as the 2-Expansion Lemma, Menger's theorem and Gallai's theorem. These two facts allow us to give a 2^O(k log k) n^O(1) time algorithm solving the Subset Feedback Vertex Set problem, proving that it is indeed fixed-parameter tractable. Comment: full version of a paper presented at ICALP'11.
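
    To illustrate the problem definition (not the paper's algorithm), the sketch below checks whether a candidate set X hits every simple cycle passing through S in a simple undirected graph. The function name hits_all_s_cycles and the use of networkx are assumptions made for this example; it relies on the fact that, in a simple graph, a vertex lies on a simple cycle exactly when it belongs to a biconnected block with at least three vertices.

```python
# Illustrative verifier for SUBSET-FVS feasibility (not the paper's algorithm).
# Assumes a simple undirected graph; uses networkx biconnected components.
import networkx as nx

def hits_all_s_cycles(G: nx.Graph, S, X) -> bool:
    """Return True iff X intersects every simple cycle passing through S."""
    H = G.copy()
    H.remove_nodes_from(X)
    remaining_s = set(S) - set(X)
    # A vertex of a simple graph lies on a simple cycle exactly when it is
    # contained in a biconnected block with at least three vertices.
    for block in nx.biconnected_components(H):
        if len(block) >= 3 and block & remaining_s:
            return False  # some vertex of S still lies on a cycle avoiding X
    return True

# Example: a triangle through vertex 0 plus a pendant edge; removing one
# triangle vertex hits every cycle through S = {0}.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])
assert not hits_all_s_cycles(G, S={0}, X=set())
assert hits_all_s_cycles(G, S={0}, X={1})
```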

    Detecting Feedback Vertex Sets of Size k in O*(2.7^k) Time

    In the Feedback Vertex Set problem, one is given an undirected graph G and an integer k, and one needs to determine whether there exists a set of k vertices that intersects all cycles of G (a so-called feedback vertex set). Feedback Vertex Set is one of the most central problems in parameterized complexity: It served as an excellent test bed for many important algorithmic techniques in the field, such as Iterative Compression [Guo et al. (JCSS'06)], Randomized Branching [Becker et al. (J. Artif. Intell. Res'00)] and Cut&Count [Cygan et al. (FOCS'11)]. In particular, there has been a long race for the smallest dependence f(k) in run times of the type O*(f(k)), where the O* notation omits factors polynomial in n. This race seemed to be run in 2011, when a randomized O*(3^k) time algorithm based on Cut&Count was introduced. In this work, we show the contrary and give an O*(2.7^k) time randomized algorithm. Our algorithm combines all mentioned techniques with substantial new ideas: First, we show that, given a feedback vertex set of size k of bounded average degree, a tree decomposition of width (1 - Ω(1))k can be found in polynomial time. Second, we give a randomized branching strategy inspired by the one from [Becker et al. (J. Artif. Intell. Res'00)] to reduce to the aforementioned bounded average degree setting. Third, we obtain significant run time improvements by employing fast matrix multiplication. Comment: SODA 2020, 22 pages.
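
    For reference, here is a minimal, self-contained sketch (not taken from the paper) of what the problem asks for: X is a feedback vertex set exactly when removing it leaves an acyclic graph, which can be checked with a single union-find pass over the edges. The function name is hypothetical.

```python
# Illustrative check that X is a feedback vertex set: G - X must be acyclic.
# Self-contained union-find; handles parallel edges and self-loops.
def is_feedback_vertex_set(n: int, edges, X) -> bool:
    """Vertices are 0..n-1; edges is an iterable of (u, v) pairs."""
    parent = list(range(n))

    def find(v: int) -> int:
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    removed = set(X)
    for u, v in edges:
        if u in removed or v in removed:
            continue
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge closes a cycle that avoids X
        parent[ru] = rv
    return True

# Example: a 4-cycle 0-1-2-3-0; any single vertex is a feedback vertex set.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert not is_feedback_vertex_set(4, cycle, X=set())
assert is_feedback_vertex_set(4, cycle, X={2})
```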

    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial-size α-approximate kernel. Loosely speaking, a polynomial-size α-approximate kernel is a polynomial-time preprocessing algorithm that takes as input an instance (I, k) of a parameterized problem, and outputs another instance (I', k') of the same problem, such that |I'| + k' ≤ k^O(1). Additionally, for every c ≥ 1, a c-approximate solution s' to the preprocessed instance (I', k') can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial-size kernels unless NP ⊆ coNP/poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness of approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.
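
    To make the definition concrete, the following is a minimal sketch (types and names are hypothetical, not the paper's notation) of the two-part contract an α-approximate kernel provides: a polynomial-time reduce step that shrinks (I, k) to an instance of size polynomial in k, and a polynomial-time lift step that converts a c-approximate solution of the reduced instance into a (c · α)-approximate solution of the original instance.

```python
# Illustrative interface for an alpha-approximate kernel (hypothetical names).
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class ApproximateKernel:
    alpha: float
    # (I, k) -> (I', k') with |I'| + k' bounded by a polynomial in k
    reduce: Callable[[Any, int], Tuple[Any, int]]
    # maps a solution of (I', k') back to a solution of (I, k)
    lift: Callable[[Any, int, Any, int, Any], Any]

def solve_via_kernel(kernel: ApproximateKernel, instance: Any, k: int,
                     approx_solver: Callable[[Any, int], Any], c: float) -> Any:
    """If approx_solver is c-approximate on the reduced instance, the lifted
    solution is (c * kernel.alpha)-approximate on the original instance."""
    reduced_instance, reduced_k = kernel.reduce(instance, k)
    reduced_solution = approx_solver(reduced_instance, reduced_k)
    return kernel.lift(instance, k, reduced_instance, reduced_k, reduced_solution)
```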

    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France
