
    Towards Work-Efficient Parallel Parameterized Algorithms

    Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms. Comment: Prior full version of the paper that will appear in Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), February 27 - March 02, 2019, Guwahati, India. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-10564-8_2
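    The interleaving method the abstract lists can be made concrete with a minimal sequential sketch (in Python; the paper itself is about parallel algorithms, and all names here are ours): re-kernelize a Vertex Cover instance with the classic Buss rule at every node of the bounded search tree, and prune whenever the kernel's size bound is violated.

    ```python
    def buss_kernelize(edges, k):
        """Buss rule: a vertex of degree > k must be in every vertex cover
        of size <= k, so take it and decrement the budget. Once the rule
        no longer applies, a yes-instance has at most k^2 edges."""
        while k >= 0:
            deg = {}
            for u, v in edges:
                deg[u] = deg.get(u, 0) + 1
                deg[v] = deg.get(v, 0) + 1
            high = next((v for v, d in deg.items() if d > k), None)
            if high is None:
                break
            edges = [e for e in edges if high not in e]
            k -= 1
        return edges, k

    def vc_interleaved(edges, k):
        """Bounded search tree for Vertex Cover that interleaves
        kernelization with branching: kernelize at every node, prune via
        the kernel size bound, then branch on an endpoint of some edge."""
        edges, k = buss_kernelize(edges, k)
        if k < 0 or len(edges) > k * k:   # kernel too large: reject
            return False
        if not edges:
            return True
        u, v = edges[0]
        return (vc_interleaved([e for e in edges if u not in e], k - 1) or
                vc_interleaved([e for e in edges if v not in e], k - 1))
    ```

    For example, vc_interleaved([(1, 2), (2, 3), (3, 4)], 2) returns True. The trade-off the paper studies is roughly visible even here: kernelizing at every node keeps subproblems small, but the repeated kernelization is extra work.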

    Point Line Cover: The Easy Kernel is Essentially Tight

    The input to the NP-hard Point Line Cover problem (PLC) consists of a set P of n points in the plane and a positive integer k, and the question is whether there exists a set of at most k lines which pass through all points in P. A simple polynomial-time reduction reduces any input to one with at most k^2 points. We show that this is essentially tight under standard assumptions. More precisely, unless the polynomial hierarchy collapses to its third level, there is no polynomial-time algorithm that reduces every instance (P, k) of PLC to an equivalent instance with O(k^{2-ε}) points, for any ε > 0. This answers, in the negative, an open problem posed by Lokshtanov (PhD Thesis, 2009). Our proof uses the machinery for deriving lower bounds on the size of kernels developed by Dell and van Melkebeek (STOC 2010). It has two main ingredients: We first show, by reduction from Vertex Cover, that PLC (conditionally) has no kernel of total size O(k^{2-ε}) bits. This does not directly imply the claimed lower bound on the number of points, since the best known polynomial-time encoding of a PLC instance with n points requires ω(n^2) bits. To get around this we build on work of Goodman et al. (STOC 1989) and devise an oracle communication protocol of cost O(n log n) for PLC; its main building block is a bound of O(n^{O(n)}) on the number of order types of n points that are not necessarily in general position, and an explicit algorithm that enumerates all possible order types of n points. This protocol and the lower bound on total size together yield the stated lower bound on the number of points. While a number of essentially tight polynomial lower bounds on total sizes of kernels are known, our result is, to the best of our knowledge, the first to show a nontrivial lower bound for structural/secondary parameters.
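    The "simple polynomial-time reduction" in the abstract rests on one observation: a line through at least k+1 of the points must be in every solution of size at most k, because any other line meets it in at most one point. Below is a hedged sketch of that kernelization (assuming rational coordinates; the function names are ours):

    ```python
    from collections import defaultdict
    from fractions import Fraction

    def line_through(p, q):
        """Canonical key (a, b, c) for the line ax + by = c through the
        distinct points p and q, normalized by its first nonzero entry
        so that collinear pairs map to the same key."""
        (x1, y1), (x2, y2) = p, q
        a, b = Fraction(y2 - y1), Fraction(x1 - x2)
        c = a * x1 + b * y1
        for t in (a, b, c):
            if t != 0:
                return (a / t, b / t, c / t)

    def plc_kernel(points, k):
        """The 'easy' kernel for Point Line Cover: while some line covers
        >= k+1 points, put it in the solution, delete its points, and
        decrement k. Returns the reduced instance, or None for a trivial
        no-instance."""
        points = list(set(points))
        while k >= 0:
            on_line = defaultdict(set)
            for i, p in enumerate(points):
                for q in points[i + 1:]:
                    on_line[line_through(p, q)].update((p, q))
            heavy = next((l for l, pts in on_line.items()
                          if len(pts) >= k + 1), None)
            if heavy is None:
                break
            points = [p for p in points if p not in on_line[heavy]]
            k -= 1
        if k < 0 or len(points) > k * k:
            return None
        return points, k
    ```

    After the loop every line covers at most k of the remaining points, so at most k lines cover at most k^2 points; this is exactly the bound the paper shows cannot be improved to O(k^{2-ε}) points.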

    Paradigms for Parameterized Enumeration

    The aim of the paper is to examine the computational complexity and algorithmics of enumeration, the task of outputting all solutions of a given problem, from the point of view of parameterized complexity. First we formally define different notions of efficient enumeration in the context of parameterized complexity. Second we show how different algorithmic paradigms can be used to obtain parameter-efficient enumeration algorithms in a number of examples. These paradigms use well-known principles from the design of parameterized decision algorithms as well as enumeration techniques, for instance kernelization and self-reducibility. The concept of kernelization, in particular, leads to a characterization of fixed-parameter tractable enumeration problems. Comment: Accepted for MFCS 2013; long version of the paper.
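    Of the paradigms mentioned, self-reducibility is the simplest to illustrate: the classic Vertex Cover search tree does not just decide the problem, it can be made to emit solutions. A toy sketch (the names and the final minimality filter are ours, not the paper's formalism):

    ```python
    def minimal_vertex_covers(edges, k):
        """Enumerate all inclusion-minimal vertex covers of size <= k via
        the self-reducible search tree: pick an uncovered edge (u, v) and
        branch on which endpoint joins the cover. The tree has at most
        2^k leaves; duplicates and non-minimal covers are filtered."""
        sols = set()

        def is_cover(c):
            return all(u in c or v in c for u, v in edges)

        def branch(cover, rest):
            rest = [e for e in rest if e[0] not in cover and e[1] not in cover]
            if not rest:
                sols.add(frozenset(cover))
                return
            if len(cover) == k:
                return
            u, v = rest[0]
            branch(cover | {u}, rest)
            branch(cover | {v}, rest)

        branch(set(), edges)
        return [c for c in sols if all(not is_cover(c - {x}) for x in c)]
    ```

    On the path with edges (1,2), (2,3), (3,4) and k = 2 this yields {1,3}, {2,3}, and {2,4}. Every minimal cover of size at most k survives some branch, since at each picked edge it must contain one of the two endpoints.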

    Vertex Cover Kernelization Revisited: Upper and Lower Bounds for a Refined Parameter

    An important result in the study of polynomial-time preprocessing shows that there is an algorithm which, given an instance (G,k) of Vertex Cover, outputs an equivalent instance (G',k') in polynomial time with the guarantee that G' has at most 2k' vertices (and thus O((k')^2) edges) with k' <= k. Using the terminology of parameterized complexity we say that k-Vertex Cover has a kernel with 2k vertices. There is complexity-theoretic evidence that both 2k vertices and Theta(k^2) edges are optimal for the kernel size. In this paper we consider the Vertex Cover problem with a different parameter, the size fvs(G) of a minimum feedback vertex set for G. This refined parameter is structurally smaller than the parameter k associated with the vertex covering number vc(G), since fvs(G) <= vc(G) and the difference can be arbitrarily large. We give a kernel for Vertex Cover with a number of vertices that is cubic in fvs(G): an instance (G,X,k) of Vertex Cover, where X is a feedback vertex set for G, can be transformed in polynomial time into an equivalent instance (G',X',k') such that |V(G')| <= 2k and |V(G')| <= O(|X'|^3). A similar result holds when the feedback vertex set X is not given along with the input. In sharp contrast, we show that the Weighted Vertex Cover problem does not have a polynomial kernel when parameterized by the cardinality of a given vertex cover of the graph unless NP is in coNP/poly and the polynomial hierarchy collapses to the third level. Comment: Published in "Theory of Computing Systems" as an Open Access publication.
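    The 2k-vertex kernel cited at the start of the abstract is the Nemhauser-Trotter kernel: an optimal basic solution of the vertex cover LP is half-integral; 1-valued vertices are forced into the cover, 0-valued vertices are discarded, and the half-valued vertices form the kernel. A sketch using SciPy (the helper name is ours, and we assume the HiGHS backend returns a basic, hence half-integral, optimum):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def nt_kernel(n, edges, k):
        """Nemhauser-Trotter sketch for Vertex Cover on vertices 0..n-1.
        Solve min sum x_v subject to x_u + x_v >= 1 and 0 <= x_v <= 1;
        assuming a basic (half-integral) optimum, split vertices by
        value. Returns (kernel vertices, kernel edges, new budget) or
        None if the LP optimum already exceeds k."""
        A = np.zeros((len(edges), n))
        for i, (u, v) in enumerate(edges):
            A[i, u] = A[i, v] = -1.0        # encodes -(x_u + x_v) <= -1
        res = linprog(np.ones(n), A_ub=A, b_ub=-np.ones(len(edges)),
                      bounds=[(0, 1)] * n, method="highs")
        if res.fun > k + 1e-9:
            return None
        ones = [v for v in range(n) if res.x[v] > 0.75]      # forced in
        halves = {v for v in range(n) if 0.25 < res.x[v] <= 0.75}
        kernel_edges = [(u, v) for u, v in edges
                        if u in halves and v in halves]
        return halves, kernel_edges, k - len(ones)
    ```

    In a yes-instance the half-valued vertices have LP value summing to at most the new budget k', so there are at most 2k' of them, which is the 2k bound; the paper's point is that the structurally smaller parameter fvs(G) still admits a polynomial (cubic) kernel.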

    Tight Kernel Bounds for Problems on Graphs with Small Degeneracy

    In this paper we consider kernelization for problems on d-degenerate graphs, i.e., graphs in which every subgraph contains a vertex of degree at most d. This graph class generalizes many classes of graphs for which effective kernelization is known to exist, e.g., planar graphs, H-minor free graphs, and H-topological-minor free graphs. We show that for several natural problems on d-degenerate graphs the best known kernelization upper bounds are essentially tight. Comment: Full version of ESA 201
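    Degeneracy is easy to compute greedily, which also explains the definition: repeatedly deleting a minimum-degree vertex certifies that every subgraph contains a vertex of degree at most d. A small sketch (function name ours):

    ```python
    def degeneracy(adj):
        """Degeneracy of a graph given as {vertex: set of neighbours}:
        repeatedly delete a minimum-degree vertex; the largest degree
        observed at deletion time is the degeneracy d, and the deletion
        order is a degeneracy ordering."""
        adj = {v: set(ns) for v, ns in adj.items()}
        d, order = 0, []
        while adj:
            v = min(adj, key=lambda u: len(adj[u]))
            d = max(d, len(adj[v]))
            order.append(v)
            for u in adj[v]:
                adj[u].discard(v)
            del adj[v]
        return d, order
    ```

    Trees have degeneracy 1 and planar graphs degeneracy at most 5, which is why d-degenerate graphs subsume the classes listed in the abstract.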

    On Structural Parameterizations of Hitting Set: Hitting Paths in Graphs Using 2-SAT

    Hitting Set is a classic problem in combinatorial optimization. Its input consists of a set system F over a finite universe U and an integer t; the question is whether there is a set of t elements that intersects every set in F. The Hitting Set problem parameterized by the size of the solution is a well-known W[2]-complete problem in parameterized complexity theory. In this paper we investigate the complexity of Hitting Set under various structural parameterizations of the input. Our starting point is the folklore result that Hitting Set is polynomial-time solvable if there is a tree T on vertex set U such that the sets in F induce connected subtrees of T. We consider the case that there is a treelike graph with vertex set U such that the sets in F induce connected subgraphs; the parameter of the problem is a measure of how treelike the graph is. Our main positive result is an algorithm that, given a graph G with cyclomatic number k, a collection P of simple paths in G, and an integer t, determines in time 2^{5k} * (|G| + |P|)^{O(1)} whether there is a vertex set of size t that hits all paths in P. It is based on a connection to the 2-SAT problem in multiple-valued logic. For other parameterizations we derive W[1]-hardness and para-NP-completeness results. Comment: Presented at the 41st International Workshop on Graph-Theoretic Concepts in Computer Science, WG 2015. (The statement of Lemma 4 was corrected in this update.)
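    The folklore polynomial case quoted in the abstract is easiest to see when the tree T is a path: the sets in F become intervals and Hitting Set becomes interval stabbing, which a textbook greedy solves optimally. A sketch of that special case only (the paper's 2-SAT-based algorithm for cyclomatic number k is considerably more involved; the name is ours):

    ```python
    def hit_intervals(intervals, t):
        """Hitting Set where the universe is a path (integers) and each
        set in F is a subpath [l, r]: process intervals by right
        endpoint and stab every not-yet-hit interval at its right end.
        Returns a minimum hitting set if it has size <= t, else None."""
        chosen, last = [], float("-inf")
        for l, r in sorted(intervals, key=lambda iv: iv[1]):
            if l > last:          # interval not hit by any chosen point
                last = r
                chosen.append(r)
        return chosen if len(chosen) <= t else None
    ```

    For instance, hit_intervals([(1, 3), (2, 5), (6, 8)], 2) returns [3, 8]. Moving from paths to graphs with feedback edges is what forces the jump to the 2^{5k} * poly running time in the paper's main result.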