    Linear-Time Kernelization for Feedback Vertex Set

    In this paper, we give an algorithm that, given an undirected graph G of m edges and an integer k, computes a graph G' and an integer k' in O(k^4 m) time such that (1) the size of the graph G' is O(k^2), (2) k' ≤ k, and (3) G has a feedback vertex set of size at most k if and only if G' has a feedback vertex set of size at most k'. This is the first linear-time polynomial-size kernel for Feedback Vertex Set. The size of our kernel is 2k^2+k vertices and 4k^2 edges, which is smaller than the previous best of 4k^2 vertices and 8k^2 edges. Thus, we improve the size and the running time simultaneously. We note that under the assumption NP ⊄ coNP/poly, Feedback Vertex Set does not admit an O(k^{2-ε})-size kernel for any ε > 0. Our kernel exploits k-submodular relaxation, a recently developed technique for obtaining efficient FPT algorithms for various problems. The dual of the k-submodular relaxation of Feedback Vertex Set can be seen as a half-integral variant of A-path packing, and to obtain the linear-time complexity we give an efficient augmenting-path algorithm for this problem. We believe that this combinatorial algorithm is of independent interest. A solver based on the proposed method won first place in the 1st Parameterized Algorithms and Computational Experiments (PACE) challenge.
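
    For flavor, here is a minimal Python sketch of two folklore reduction rules for Feedback Vertex Set (vertices on no cycle can be deleted; degree-2 vertices can be bypassed). This is an illustration we supply, not the paper's kernelization: the 2k^2+k-vertex kernel and its augmenting-path subroutine are substantially more involved.

        from collections import Counter, defaultdict

        def reduce_fvs(edges, k):
            # Exhaustively apply two folklore FVS reduction rules on a
            # multigraph with integer vertex labels. Returns (edges', k',
            # forced), where 'forced' lists vertices every solution must take.
            adj = defaultdict(Counter)
            for u, v in edges:
                adj[u][v] += 1
                adj[v][u] += 1          # a self-loop (u == v) gets count 2
            forced, changed = [], True
            while changed and k >= 0:
                changed = False
                for v in list(adj):
                    if v not in adj:
                        continue
                    deg = sum(adj[v].values())
                    if adj[v][v] > 0:   # self-loop: v is a cycle by itself
                        forced.append(v); k -= 1; _delete(adj, v); changed = True
                    elif deg <= 1:      # degree <= 1: v lies on no cycle
                        _delete(adj, v); changed = True
                    elif deg == 2:      # degree 2: bypass v, join its neighbours
                        (u, w) = [x for x, c in adj[v].items() for _ in range(c)]
                        _delete(adj, v)
                        adj[u][w] += 1; adj[w][u] += 1
                        changed = True
            out = [(u, v) for u in adj for v in adj[u] if u <= v
                   for _ in range(adj[u][v] if u < v else adj[u][v] // 2)]
            return out, k, forced

        def _delete(adj, v):
            for u in list(adj[v]):
                if u != v:
                    del adj[u][v]
            del adj[v]

        # A path 1-2-3 hangs off a triangle 3-4-5: the rules shrink everything
        # to a self-loop, forcing one triangle vertex into the solution.
        print(reduce_fvs([(1, 2), (2, 3), (3, 4), (4, 5), (5, 3)], 1))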

    Fast Algorithms for Parameterized Problems with Relaxed Disjointness Constraints

    In parameterized complexity, it is a natural idea to consider different generalizations of classic problems. Usually, such generalizations are obtained by introducing a "relaxation" variable, where the original problem corresponds to setting this variable to a constant value. For instance, the problem of packing sets of size at most p into a given universe generalizes the Maximum Matching problem, which is recovered by taking p = 2. Most often, the complexity of the problem increases with the relaxation variable, but very recently Abasi et al. have given a surprising example of a problem, r-Simple k-Path, that can be solved by a randomized algorithm with running time O*(2^{O(k log r / r)}). That is, the complexity of the problem decreases with r. In this paper we pursue further the direction sketched by Abasi et al. Our main contribution is a derandomization tool that provides a deterministic counterpart of their main technical result: the O*(2^{O(k log r / r)}) algorithm for (r,k)-Monomial Detection, which is the problem of finding a monomial of total degree k and individual degrees at most r in a polynomial given as an arithmetic circuit. Our technique works for a large class of circuits, and in particular it can be used to derandomize the result of Abasi et al. for r-Simple k-Path. On our way to this result we introduce the notion of representative sets for multisets, which may be of independent interest. Finally, we give two more examples of problems, already studied in the literature, where the same relaxation phenomenon occurs. The first is a natural relaxation of the Set Packing problem, where we allow the packed sets to overlap at each element at most r times. The second is Degree Bounded Spanning Tree, where we seek a spanning tree of the graph with small maximum degree.
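
    As a quick sanity check on this phenomenon (our illustration, not part of the paper), the exponent k log r / r in the bound above indeed shrinks as the relaxation r grows:

        import math

        # The exponent governing O*(2^{O(k log r / r)}): larger r gives a
        # smaller exponent, so the relaxed problem gets *easier* even though
        # it looks more general.
        k = 40
        for r in (2, 4, 8, 16, 32):
            print(f"r = {r:2d}   k*log2(r)/r = {k * math.log2(r) / r:6.2f}")
        # r = 2 and r = 4 both give 20.00, then 15.00, 10.00, 6.25.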

    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I,k) of a parameterized problem and outputs another instance (I',k') of the same problem such that |I'| + k' ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s' to the pre-processed instance (I',k') can be turned in polynomial time into a (c·α)-approximate solution s to the original instance (I,k). Our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness-of-approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.
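
    The definition is easy to phrase as an interface. The Python sketch below (names and types are ours, purely for illustration) records the two obligations an α-approximate kernel must meet: shrink the instance to poly(k) size, and lift any c-approximate solution of the small instance back to a (c·α)-approximate one.

        from dataclasses import dataclass
        from typing import Callable, Tuple, TypeVar

        I = TypeVar("I")  # instance type
        S = TypeVar("S")  # solution type

        @dataclass
        class ApproximateKernel:
            alpha: float
            # (I, k) -> (I', k') with |I'| + k' <= poly(k), in polynomial time
            reduce: Callable[[I, int], Tuple[I, int]]
            # (I, k, I', k', s') -> s: turns a c-approximate solution s' of
            # (I', k') into a (c * alpha)-approximate solution s of (I, k)
            lift: Callable[[I, int, I, int, S], S]

        def solve_with_kernel(kernel, instance, k, approx_solver):
            # Preprocess, run any off-the-shelf c-approximation on the small
            # instance, then lift the answer back to the original instance.
            small_inst, small_k = kernel.reduce(instance, k)
            s_prime = approx_solver(small_inst, small_k)
            return kernel.lift(instance, k, small_inst, small_k, s_prime)

    The point of the definition is exactly this pipeline: any off-the-shelf c-approximation run on the small instance yields a (c·α)-approximation to the original.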

    Polynomial Kernels for Weighted Problems

    Kernelization is a formalization of efficient preprocessing for NP-hard problems using the framework of parameterized complexity. Among open problems in kernelization, it has been asked many times whether there are deterministic polynomial kernelizations for Subset Sum and Knapsack when parameterized by the number n of items. We answer both questions affirmatively by using an algorithm for compressing numbers due to Frank and Tardos (Combinatorica 1987). This result was first used by Marx and Végh (ICALP 2013) in the context of kernelization. We further illustrate its applicability by giving polynomial kernels also for weighted versions of several well-studied parameterized problems. Furthermore, when parameterized by the number of different item sizes, we obtain a polynomial kernelization for Subset Sum and an exponential kernelization for Knapsack. Finally, we also obtain kernelization results for polynomial integer programs.
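
    The property that such a compression must preserve is easy to state: the yes/no behaviour of every subset. The brute-force checker below (exponential in n, so only for tiny instances; the example vectors are ours) makes this equivalence concrete.

        from itertools import product

        def same_subset_sum_behaviour(w, t, w2, t2):
            # Two Subset Sum instances are equivalent iff exactly the same
            # subsets hit their respective targets. Compressed weights must
            # preserve this while using only poly(n) bits per number.
            assert len(w) == len(w2)
            for b in product((0, 1), repeat=len(w)):
                hit1 = sum(x * wi for x, wi in zip(b, w)) == t
                hit2 = sum(x * wi for x, wi in zip(b, w2)) == t2
                if hit1 != hit2:
                    return False
            return True

        # Scaling all weights and the target by a common factor changes
        # nothing, so huge weights can have small equivalent replacements.
        print(same_subset_sum_behaviour(
            [10**12, 2 * 10**12, 3 * 10**12], 3 * 10**12, [1, 2, 3], 3))  # True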

    Dynamic Programming for Graphs on Surfaces

    We provide a framework for the design and analysis of dynamic programming algorithms for surface-embedded graphs on n vertices and branchwidth at most k. Our technique applies to general families of problems where standard dynamic programming runs in 2^{O(k log k)} n steps. Our approach combines tools from topological graph theory and analytic combinatorics. In particular, we introduce a new type of branch decomposition called "surface cut decomposition", generalizing the sphere cut decompositions of planar graphs introduced by Seymour and Thomas, which has nice combinatorial properties. Namely, the number of partial solutions that can be arranged on a surface cut decomposition can be upper-bounded by the number of non-crossing partitions on surfaces with boundary. It follows that partial solutions can be represented by a single-exponential (in the branchwidth k) number of configurations. This proves that, when applied on surface cut decompositions, dynamic programming runs in 2^{O(k)} n steps. That way, we considerably extend the class of problems that can be solved in running times with a single-exponential dependence on branchwidth and unify/improve most previous results in this direction. Comment: 28 pages, 3 figures.
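
    The single-exponential count in the planar (sphere cut) case rests on a classical fact: non-crossing partitions of k points on a circle are counted by the Catalan number C_k ≤ 4^k, whereas unrestricted partitions grow super-exponentially. A small Python illustration (ours):

        from math import comb

        def catalan(n):
            # Number of non-crossing partitions of n points on a circle.
            return comb(2 * n, n) // (n + 1)

        # Partial solutions at a cut with k boundary vertices correspond to
        # non-crossing partitions in the sphere cut case, so at most
        # C_k <= 4^k configurations survive the dynamic programming.
        for k in (4, 8, 12):
            print(k, catalan(k), 4 ** k)   # 14/256, 1430/65536, 208012/16777216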

    Parameterization Above a Multiplicative Guarantee

    Parameterization above a guarantee is a successful paradigm in Parameterized Complexity. To the best of our knowledge, all fixed-parameter tractable problems in this paradigm share an additive form defined as follows. Given an instance (I,k) of some (parameterized) problem Π with a guarantee g(I), decide whether I admits a solution of size at least (resp. at most) k + g(I). Here, g(I) is usually a lower bound (resp. upper bound) on the maximum (resp. minimum) size of a solution. Since its introduction in 1999 for Max SAT and Max Cut (with g(I) being half the number of clauses and half the number of edges, respectively, in the input), analysis of parameterization above a guarantee has become a very active and fruitful topic of research. We highlight a multiplicative form of parameterization above a guarantee: Given an instance (I,k) of some (parameterized) problem Π with a guarantee g(I), decide whether I admits a solution of size at least (resp. at most) k · g(I). In particular, we study the Long Cycle problem with a multiplicative parameterization above the girth g(I) of the input graph, and provide a parameterized algorithm for this problem. Apart from being of independent interest, this exemplifies how parameterization above a multiplicative guarantee can arise naturally. We also show that, for any fixed constant ε > 0, multiplicative parameterization above g(I)^{1+ε} of Long Cycle yields para-NP-hardness; thus our parameterization is tight in this sense. We complement our main result with the design (or refutation of the existence) of algorithms for other problems parameterized multiplicatively above girth.
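
    For intuition about the guarantee g(I), here is a standard O(nm) BFS routine computing the girth of an undirected graph (our illustration; deciding whether a cycle of length at least k · g(I) exists is the hard part the paper addresses):

        from collections import deque

        def girth(adj):
            # Girth of an undirected simple graph {v: set_of_neighbours};
            # returns inf for forests. BFS from every vertex; a non-tree
            # edge seen at levels d and d' closes a cycle of length d+d'+1.
            best = float("inf")
            for s in adj:
                dist = {s: 0}
                q = deque([s])
                while q:
                    u = q.popleft()
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            q.append(v)
                        elif dist[v] >= dist[u]:   # not the BFS tree parent
                            best = min(best, dist[u] + dist[v] + 1)
            return best

        cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
        print(girth(cycle5))   # 5, so k = 2 asks for a cycle of length >= 10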

    Towards Work-Efficient Parallel Parameterized Algorithms

    Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms. Comment: Prior full version of the paper that will appear in Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-10564-8_2
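
    As a concrete instance of interleaving (a textbook-style sketch of ours for Vertex Cover, not an algorithm from the paper): re-running a cheap kernelization at every search-tree node keeps instances small without blowing up the tree.

        def vertex_cover_branch(edges, k):
            # Decide whether the graph given by 'edges' has a vertex cover of
            # size <= k, interleaving Buss kernelization with 2-way branching.
            changed = True
            while changed and k >= 0:
                changed = False
                deg = {}
                for u, v in edges:
                    deg[u] = deg.get(u, 0) + 1
                    deg[v] = deg.get(v, 0) + 1
                for w, d in deg.items():
                    if d > k:   # Buss rule: w is in every cover of size <= k
                        edges = [e for e in edges if w not in e]
                        k -= 1
                        changed = True
                        break
            if not edges:
                return k >= 0
            if k <= 0 or len(edges) > k * k:   # kernel bound exceeded: 'no'
                return False
            u, v = edges[0]                    # branch: u or v is in the cover
            return (vertex_cover_branch([e for e in edges if u not in e], k - 1)
                    or vertex_cover_branch([e for e in edges if v not in e], k - 1))

        print(vertex_cover_branch([(0, 1), (1, 2), (2, 3)], 2))   # True: {1, 3}
        print(vertex_cover_branch([(0, 1), (1, 2), (0, 2)], 1))   # False: triangle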