
    Tight lower bounds for certain parameterized NP-hard problems

    Based on the framework of parameterized complexity theory, we derive tight lower bounds on the computational complexity for a number of well-known NP-hard problems. We start by proving a general result, namely that the parameterized weighted satisfiability problem on depth-t circuits cannot be solved in time n^{o(k)} poly(m), where n is the circuit input length, m is the circuit size, and k is the parameter, unless the (t − 1)-st level W[t − 1] of the W-hierarchy collapses to FPT. By refining this technique, we prove that a group of parameterized NP-hard problems, including weighted sat, dominating set, hitting set, set cover, and feature set, cannot be solved in time n^{o(k)} poly(m), where n is the size of the universal set from which the k elements are to be selected and m is the instance size, unless the first level W[1] of the W-hierarchy collapses to FPT. We also prove that another group of parameterized problems, which includes weighted q-sat (for any fixed q ≥ 2), clique, and independent set, cannot be solved in time n^{o(k)} unless all search problems in the syntactic class SNP, introduced by Papadimitriou and Yannakakis, are solvable in subexponential time. Note that all these parameterized problems have trivial algorithms of running time either n^k poly(m) or O(n^k).
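
    The trivial algorithms mentioned in the last sentence simply enumerate all k-element subsets of the universe and test each one, which is where the n^k factor comes from. A minimal brute-force sketch for dominating set (the function name and graph representation are illustrative, not taken from the paper):

```python
from itertools import combinations

def has_dominating_set(adj, k):
    """Trivial n^k-style check: does the graph have a dominating set of size <= k?

    adj maps each vertex to the set of its neighbours.  Enumerating all
    subsets of size at most k gives the n^k poly(m) running time that, by the
    lower bound above, cannot be improved to n^{o(k)} poly(m) under the stated
    complexity assumptions.
    """
    vertices = list(adj)
    for size in range(k + 1):
        for candidate in combinations(vertices, size):
            dominated = set(candidate)
            for v in candidate:
                dominated.update(adj[v])
            if len(dominated) == len(vertices):
                return True
    return False
```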

    Parameterized algorithms and computational lower bounds: a structural approach

    Many problems of practical significance are known to be NP-hard, and hence are unlikely to be solved by polynomial-time algorithms. There are several ways to cope with the NP-hardness of a problem; the most popular approaches include heuristic algorithms, approximation algorithms, and randomized algorithms. Recently, parameterized computation and complexity have been receiving a lot of attention. By taking advantage of small or moderate parameter values, parameterized algorithms provide new avenues for practically solving problems that are theoretically intractable. In this dissertation, we design efficient parameterized algorithms for several well-known NP-hard problems and prove strong lower bounds for some others. In doing so, we place emphasis on the development of new techniques that take advantage of the structural properties of the problems. We present a simple parameterized algorithm for Vertex Cover that uses polynomial space and runs in time O(1.2738^k + kn). It improves both the previous O(1.286^k + kn)-time polynomial-space algorithm by Chen, Kanj, and Jia, and the very recent O(1.2745^k k^4 + kn)-time exponential-space algorithm by Chandran and Grandoni. This algorithm stands out for both its performance and its simplicity. Essential to the design of this algorithm are several new techniques that use structural information of the underlying graph to bound the search space. For Vertex Cover on graphs with degree bounded by three, we present a still better algorithm that runs in time O(1.194^k + kn), based on an "almost-global" analysis of the search tree. We also show that an important structural property of the underlying graphs, the graph genus, largely dictates the computational complexity of some important graph problems including Vertex Cover, Independent Set, and Dominating Set. We present a set of new techniques that allows us to prove almost tight computational lower bounds for some NP-hard problems, such as Clique, Dominating Set, Hitting Set, Set Cover, and Independent Set. The techniques are further extended to derive computational lower bounds on polynomial-time approximation schemes for certain NP-hard problems. Our results illustrate a new approach to proving strong computational lower bounds for some NP-hard problems under reasonable conditions.
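
    The O(1.2738^k + kn) result refines the bounded-search-tree method for Vertex Cover. Its structure-based branching rules are beyond the scope of an abstract, but the basic 2^k-leaf search tree that such algorithms improve upon can be sketched as follows (a generic textbook version, not the dissertation's algorithm):

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph given as a list of edges (u, v) has a
    vertex cover of size at most k.

    Classic bounded search tree: pick any uncovered edge (u, v); every
    cover must contain u or v, so branch on both choices.  The tree has
    at most 2^k leaves; refined, structure-based branching rules shrink
    the branching factor toward bounds such as O(1.2738^k + kn).
    """
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    # branch 1: take u into the cover, drop all edges incident to u
    without_u = [e for e in edges if u not in e]
    # branch 2: take v into the cover, drop all edges incident to v
    without_v = [e for e in edges if v not in e]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)
```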

    Point Line Cover: The Easy Kernel is Essentially Tight

    The input to the NP-hard Point Line Cover problem (PLC) consists of a set P of n points in the plane and a positive integer k, and the question is whether there exists a set of at most k lines which pass through all points in P. A simple polynomial-time reduction reduces any input to one with at most k^2 points. We show that this is essentially tight under standard assumptions. More precisely, unless the polynomial hierarchy collapses to its third level, there is no polynomial-time algorithm that reduces every instance (P, k) of PLC to an equivalent instance with O(k^{2-ε}) points, for any ε > 0. This answers, in the negative, an open problem posed by Lokshtanov (PhD Thesis, 2009). Our proof uses the machinery for deriving lower bounds on the size of kernels developed by Dell and van Melkebeek (STOC 2010). It has two main ingredients: we first show, by reduction from Vertex Cover, that PLC conditionally has no kernel of total size O(k^{2-ε}) bits. This does not directly imply the claimed lower bound on the number of points, since the best known polynomial-time encoding of a PLC instance with n points requires ω(n^2) bits. To get around this we build on work of Goodman et al. (STOC 1989) and devise an oracle communication protocol of cost O(n log n) for PLC; its main building block is a bound of O(n^{O(n)}) on the number of order types of n points that are not necessarily in general position, together with an explicit algorithm that enumerates all possible order types of n points. This protocol and the lower bound on total size together yield the stated lower bound on the number of points. While a number of essentially tight polynomial lower bounds on total sizes of kernels are known, our result is, to the best of our knowledge, the first to show a nontrivial lower bound for structural/secondary parameters.
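
    The "easy kernel" of the title is the simple reduction mentioned above: a line passing through more than k points must belong to every solution of size at most k, because any other line meets it in at most one point; once no such line remains, more than k^2 surviving points certify a no-instance. A sketch of this kernelization, assuming integer point coordinates (helper names are illustrative):

```python
from itertools import combinations
from math import gcd

def line_through(p, q):
    """Normalized integer coefficients (a, b, c) of the line a*x + b*y + c = 0
    through two distinct points with integer coordinates."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    g = gcd(gcd(abs(a), abs(b)), abs(c))
    a, b, c = a // g, b // g, c // g
    if a < 0 or (a == 0 and b < 0):   # fix the sign so collinear pairs agree
        a, b, c = -a, -b, -c
    return a, b, c

def plc_kernel(points, k):
    """Classic k^2-point kernel for Point Line Cover.

    Returns (reduced_points, reduced_k), or None if the instance is
    recognized as a no-instance.  The paper shows this quadratic bound
    cannot be improved to O(k^{2-eps}) points unless PH collapses.
    """
    points = list(set(points))
    while True:
        if k < 0:
            return None
        on_line = {}
        for p, q in combinations(points, 2):
            on_line.setdefault(line_through(p, q), set()).update((p, q))
        heavy = next((l for l, pts in on_line.items() if len(pts) > k), None)
        if heavy is None:
            break
        # a line covering more than k points is forced into the solution
        points = [p for p in points if p not in on_line[heavy]]
        k -= 1
    return (points, k) if len(points) <= k * k else None
```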

    Kernels for Below-Upper-Bound Parameterizations of the Hitting Set and Directed Dominating Set Problems

    In the Hitting Set problem, we are given a collection F of subsets of a ground set V and an integer p, and asked whether V has a p-element subset that intersects each set in F. We consider two parameterizations of Hitting Set below tight upper bounds: p = m − k and p = n − k. In both cases k is the parameter. We prove that the first parameterization is fixed-parameter tractable, but has no polynomial kernel unless coNP ⊆ NP/poly. The second parameterization is W[1]-complete, but the introduction of an additional parameter, the degeneracy of the hypergraph H = (V, F), makes the problem not only fixed-parameter tractable, but also one with a linear kernel. Here the degeneracy of H = (V, F) is the minimum integer d such that for each X ⊂ V, the hypergraph with vertex set V \ X and edge set containing all edges of F without vertices in X has a vertex of degree at most d. In Nonblocker (Directed Nonblocker), we are given an undirected graph (a directed graph) G on n vertices and an integer k, and asked whether G has a set X of n − k vertices such that for each vertex y ∉ X there is an edge (arc) from a vertex in X to y. Nonblocker can be viewed as a special case of Directed Nonblocker (replace an undirected graph by a symmetric digraph). Dehne et al. (Proc. SOFSEM 2006) proved that Nonblocker has a linear-order kernel. We obtain a linear-order kernel for Directed Nonblocker.
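
    The degeneracy defined above can be computed by greedy peeling, just as for graphs: repeatedly remove a vertex of minimum degree, counting only edges that avoid all previously removed vertices, and return the largest degree recorded at a removal step. A direct, unoptimized sketch (names and input format are illustrative):

```python
def hypergraph_degeneracy(vertices, edges):
    """Degeneracy of the hypergraph H = (V, F) in the sense of the abstract.

    vertices: iterable of vertices; edges: iterable of vertex sets.
    At each step, the degree of v counts the surviving edges (those
    avoiding every removed vertex) that contain v; a minimum-degree
    vertex is removed and its degree recorded.  The degeneracy is the
    maximum degree recorded over all steps.
    """
    remaining = set(vertices)
    live_edges = [set(e) for e in edges]   # edges avoiding all removed vertices
    degeneracy = 0
    while remaining:
        degree = {v: 0 for v in remaining}
        for e in live_edges:
            for v in e:
                degree[v] += 1
        v = min(remaining, key=degree.get)
        degeneracy = max(degeneracy, degree[v])
        remaining.remove(v)
        live_edges = [e for e in live_edges if v not in e]
    return degeneracy
```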

    Counting Complexity for Reasoning in Abstract Argumentation

    In this paper, we consider counting and projected model counting of extensions in abstract argumentation for various semantics. When asking for projected counts, we are interested in counting the number of extensions of a given argumentation framework, where multiple extensions that are identical when restricted to the projected arguments count as only one projected extension. We establish classical complexity results and parameterized complexity results when the problems are parameterized by the treewidth of the undirected argumentation graph. To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit small treewidth of the undirected argumentation graph of the input instance by dynamic programming (DP). Our algorithms run in time double or triple exponential in the treewidth, depending on the considered semantics. Finally, we take the exponential time hypothesis (ETH) into account and establish lower bounds for bounded-treewidth algorithms for counting extensions and projected extensions.
    Comment: Extended version of a paper published at AAAI-1
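
    The paper's upper bounds come from dynamic programming over tree decompositions; as a much simpler point of reference, counting and projected counting can be made concrete by brute-force enumeration. The sketch below does this for stable semantics (a conflict-free set that attacks every outside argument); it is an illustrative 2^n baseline, not the DP algorithm from the paper:

```python
from itertools import combinations

def count_stable_extensions(arguments, attacks, projected=None):
    """Count stable extensions of an argumentation framework by enumeration.

    arguments: iterable of arguments; attacks: set of (attacker, target) pairs.
    A set S is a stable extension if it is conflict-free and attacks every
    argument outside S.  With `projected` given, extensions that coincide on
    the projected arguments are counted only once (projected counting).
    """
    arguments = list(arguments)
    attacks = set(attacks)
    seen = set()
    for size in range(len(arguments) + 1):
        for cand in combinations(arguments, size):
            s = set(cand)
            conflict_free = all((a, b) not in attacks for a in s for b in s)
            stable = conflict_free and all(
                any((a, b) in attacks for a in s)
                for b in arguments if b not in s
            )
            if stable:
                restricted = s if projected is None else s & set(projected)
                seen.add(frozenset(restricted))
    return len(seen)
```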