
    FPT is Characterized by Useful Obstruction Sets

    Many graph problems were first shown to be fixed-parameter tractable using the results of Robertson and Seymour on graph minors. We show that the combination of finite, computable obstruction sets and efficient order tests is not just one way of obtaining strongly uniform FPT algorithms, but that all of FPT may be captured in this way. Our new characterization of FPT has a strong connection to the theory of kernelization, as we prove that problems with polynomial kernels can be characterized by obstruction sets whose elements have polynomial size. Consequently, we investigate the interplay between the sizes of problem kernels and the sizes of the elements of such obstruction sets, obtaining several examples of how results in one area yield new insights in the other. We show how exponential-size minor-minimal obstructions for pathwidth k form the crucial ingredient in a novel OR-cross-composition for k-Pathwidth, complementing the trivial AND-composition that is known for this problem. In the other direction, we show that OR-cross-compositions into a parameterized problem can be used to rule out the existence of efficiently generated quasi-orders on its instances that characterize the NO-instances by polynomial-size obstructions. Comment: Extended abstract with appendix, as accepted to WG 201
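    For orientation, the decision template the abstract builds on works as follows: a graph belongs to a minor-closed class exactly when none of the finitely many obstructions occurs as a minor of it. The sketch below is an illustration of that template only, not the paper's construction; the has_minor routine is a naive exponential-time stand-in for the efficient minor tests of Robertson and Seymour, so it is usable only on tiny graphs, and the forests-versus-K3 example is chosen purely for illustration.

```python
from itertools import product
import networkx as nx

def has_minor(G, H):
    """Naive exponential-time test of whether H is a minor of G (tiny graphs only).
    H is a minor of G iff G contains disjoint connected 'branch sets', one per
    vertex of H, with an edge of G between two branch sets whenever the
    corresponding vertices of H are adjacent."""
    gv, hv = list(G.nodes), list(H.nodes)
    k = len(hv)
    # Assign each vertex of G to one of the k branch sets or leave it unused (0).
    for assignment in product(range(k + 1), repeat=len(gv)):
        branch = [[v for v, a in zip(gv, assignment) if a == i + 1] for i in range(k)]
        if any(not b for b in branch):
            continue  # every branch set must be non-empty
        if any(not nx.is_connected(G.subgraph(b)) for b in branch):
            continue  # ... and must induce a connected subgraph
        if all(any(G.has_edge(u, w) for u in branch[hv.index(a)] for w in branch[hv.index(b)])
               for a, b in H.edges):
            return True
    return False

def in_minor_closed_class(G, obstructions):
    """Decision procedure behind the characterization: G belongs to the
    minor-closed class iff no obstruction occurs as a minor of G."""
    return not any(has_minor(G, H) for H in obstructions)

# Forests are exactly the graphs with no K3 minor (a K3 minor = a cycle).
K3 = nx.complete_graph(3)
print(in_minor_closed_class(nx.path_graph(5), [K3]))   # True
print(in_minor_closed_class(nx.cycle_graph(6), [K3]))  # False
```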

    Cryptography from Information Loss

    © Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod. Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is “lossy” reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into “useful” hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions:
    1. We say that a language L has an f-reduction to a language L' for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1, ..., x_m), with each x_i ∈ {0,1}^n, and outputs a string z such that, with high probability, L'(z) = f(L(x_1), L(x_2), ..., L(x_m)). Suppose a language L has an f-reduction C to L' that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds:
    - f is the OR function, t ≤ m/100, and L' is the same as L;
    - f is the Majority function, and t ≤ m/100;
    - f is the OR function, t ≤ O(m log n), and the reduction has no error.
    This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result in auxiliary-input one-way functions.
    2. Our second result is about the stronger notion of t-compressing f-reductions, i.e. reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).
    Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, and which may be of independent interest.
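    As a concrete reading of the t-lossy definition, the toy computation below is an illustration only, not part of the paper: for a uniform input distribution X and a deterministic map C, the mutual information I(X; C(X)) equals the entropy of the output, and taking C to be the plain OR of the input bits shows how little information such a reduction can retain.

```python
import math
from itertools import product

def mutual_information(inputs, C):
    """Exact I(X; C(X)) in bits for X uniform over `inputs` and a deterministic
    map C.  For deterministic C, I(X; C(X)) = H(C(X)), the output entropy."""
    n = len(inputs)
    counts = {}
    for x in inputs:
        z = C(x)
        counts[z] = counts.get(z, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy "reduction": output only the OR of the m input bits.
m = 8
inputs = list(product([0, 1], repeat=m))
or_reduction = lambda x: int(any(x))
print(mutual_information(inputs, or_reduction))  # ~0.037 bits, far below m = 8
```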

    Polynomial Kernels for Weighted Problems

    Kernelization is a formalization of efficient preprocessing for NP-hard problems using the framework of parameterized complexity. Among open problems in kernelization it has been asked many times whether there are deterministic polynomial kernelizations for Subset Sum and Knapsack when parameterized by the number n of items. We answer both questions affirmatively by using an algorithm for compressing numbers due to Frank and Tardos (Combinatorica 1987). This result was first used by Marx and Végh (ICALP 2013) in the context of kernelization. We further illustrate its applicability by giving polynomial kernels also for weighted versions of several well-studied parameterized problems. Furthermore, when parameterized by the different item sizes we obtain a polynomial kernelization for Subset Sum and an exponential kernelization for Knapsack. Finally, we also obtain kernelization results for polynomial integer programs.
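    To make the kernelization question concrete: a kernel for Subset Sum parameterized by the number of items must replace huge item weights by short ones while preserving exactly which subsets hit the target. The sketch below illustrates only that equivalence requirement, not the Frank-Tardos compression itself (which relies on simultaneous Diophantine approximation); the small replacement instance is hand-picked for the example and merely verified against the original.

```python
from itertools import product

def subset_sum_signature(weights, target):
    """For every 0/1 selection vector b, record whether the selected weights sum
    to the target.  Two Subset Sum instances on the same n items are equivalent
    (in the kernelization sense) iff they have identical signatures."""
    n = len(weights)
    return tuple(sum(w for w, b in zip(weights, bits) if b) == target
                 for bits in product([0, 1], repeat=n))

# A toy instance with large weights and a hand-picked small replacement instance.
big   = ([10**12 + 1, 2 * 10**12 + 7, 3 * 10**12 + 8], 4 * 10**12 + 9)
small = ([1, 7, 8], 9)  # candidate replacement; equivalence is checked, not assumed
print(subset_sum_signature(*big) == subset_sum_signature(*small))  # True
```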

    An Approximate Kernel for Connected Feedback Vertex Set

    The Feedback Vertex Set problem is a fundamental computational problem which has been the subject of intensive study in various domains of algorithmics. In this problem, one is given an undirected graph G and an integer k as input. The objective is to determine whether at most k vertices can be deleted from G such that the resulting graph is acyclic. The study of preprocessing algorithms for this problem has a long and rich history, culminating in the quadratic kernelization of Thomassé [SODA 2010]. However, it is known that when the solution is required to induce a connected subgraph (such a set is called a connected feedback vertex set), a polynomial kernelization is unlikely to exist and the problem is NP-hard to approximate below a factor of 2 (assuming the Unique Games Conjecture). In this paper, we show that if one is interested in only preserving approximate solutions (even of quality arbitrarily close to the optimum), then there is a drastic improvement in our ability to preprocess this problem. Specifically, we prove that for every fixed 0 < epsilon < 1, graph G, and k in N, the following holds: There is a polynomial time computable graph G' of size k^O(1) such that for every c >= 1, any c-approximate connected feedback vertex set of G' of size at most k is a c * (1+epsilon)-approximate connected feedback vertex set of G. Our result adds to the set of approximate kernelization algorithms introduced by Lokshtanov et al. [STOC 2017]. As a consequence of our main result, we show that Connected Feedback Vertex Set can be approximated within a factor min{OPT^O(1), n^(1-delta)} in polynomial time for some delta > 0.
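    For reference, the object being preprocessed can be checked directly: a connected feedback vertex set is a vertex set whose removal leaves a forest and which induces a connected subgraph. A minimal brute-force sketch follows, illustrative only and assuming the networkx library; it is unrelated to the paper's kernelization.

```python
from itertools import combinations
import networkx as nx

def is_connected_fvs(G, S):
    """S is a connected feedback vertex set of G if G - S is acyclic and the
    subgraph of G induced by S is connected (vacuously true for S = {})."""
    S = set(S)
    rest = set(G.nodes) - S
    if rest and not nx.is_forest(G.subgraph(rest)):
        return False
    return not S or nx.is_connected(G.subgraph(S))

def smallest_connected_fvs(G):
    """Exhaustive search over vertex subsets, smallest first; tiny graphs only."""
    for size in range(len(G) + 1):
        for S in combinations(G.nodes, size):
            if is_connected_fvs(G, S):
                return set(S)

# Two triangles joined by the edge (2, 3): the unique optimum is {2, 3}.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
print(smallest_connected_fvs(G))  # {2, 3}
```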

    Lower Bounds for Conjunctive and Disjunctive Turing Kernels


    Multidimensional Binary Vector Assignment problem: standard, structural and above guarantee parameterizations

    In this article we focus on the parameterized complexity of the Multidimensional Binary Vector Assignment problem (called mBVA). An input of this problem is defined by m disjoint sets V^1, V^2, 
, V^m, each composed of n binary vectors of size p. An output is a set of n disjoint m-tuples of vectors, where each m-tuple is obtained by picking one vector from each set V^i. To each m-tuple we associate a p-dimensional vector by applying the bit-wise AND operation on the m vectors of the tuple. The objective is to minimize the total number of zeros in these n vectors. mBVA can be seen as a variant of multidimensional matching where hyperedges are implicitly locally encoded via labels attached to vertices, but was originally introduced in the context of integrated circuit manufacturing. We provide for this problem FPT algorithms and negative results (ETH-based results, W[2]-hardness and a kernel lower bound) according to several parameters: the standard parameter k (i.e. the total number of zeros), as well as two parameters above some guaranteed values. Comment: 16 pages, 6 figures
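    A brute-force reading of the problem statement may help fix the notation: a solution matches the vectors of the m sets into n tuples, ANDs each tuple, and counts the zeros. The sketch below is an illustration only, feasible just for toy sizes; it searches all matchings of the later sets against the first one.

```python
from itertools import permutations, product

def cost(sets, perms):
    """Total number of zeros over the n AND-vectors of a matching: tuple i
    combines sets[0][i] with sets[j][perms[j-1][i]] for every j >= 1."""
    total = 0
    for i in range(len(sets[0])):
        combined = sets[0][i]
        for j, perm in enumerate(perms, start=1):
            combined = [a & b for a, b in zip(combined, sets[j][perm[i]])]
        total += combined.count(0)
    return total

def solve_mbva(sets):
    """Exhaustive search over all ways of matching sets 2..m against set 1;
    only feasible for very small n and m."""
    n = len(sets[0])
    return min(cost(sets, perms)
               for perms in product(permutations(range(n)), repeat=len(sets) - 1))

# m = 2 sets, n = 2 vectors each, p = 3 bits per vector.
V1 = [[1, 1, 0], [0, 1, 1]]
V2 = [[1, 1, 0], [0, 1, 1]]
print(solve_mbva([V1, V2]))  # 2: the identity matching keeps all shared ones
```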

    On Sparsification for Computing Treewidth

    We investigate whether an n-vertex instance (G,k) of Treewidth, asking whether the graph G has treewidth at most k, can efficiently be made sparse without changing its answer. By giving a special form of OR-cross-composition, we prove that this is unlikely: if there is an e > 0 and a polynomial-time algorithm that reduces n-vertex Treewidth instances to equivalent instances, of an arbitrary problem, with O(n^{2-e}) bits, then NP is in coNP/poly and the polynomial hierarchy collapses to its third level. Our sparsification lower bound has implications for structural parameterizations of Treewidth: parameterizations by measures that do not exceed the vertex count cannot have kernels with O(k^{2-e}) bits for any e > 0, unless NP is in coNP/poly. Motivated by the question of determining the optimal kernel size for Treewidth parameterized by vertex cover, we improve the O(k^3)-vertex kernel from Bodlaender et al. (STACS 2011) to a kernel with O(k^2) vertices. Our improved kernel is based on a novel form of treewidth-invariant set. We use the q-expansion lemma of Fomin et al. (STACS 2011) to find such sets efficiently in graphs whose vertex count is superquadratic in their vertex cover number. Comment: 21 pages. Full version of the extended abstract presented at IPEC 201
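    For context, the underlying decision problem can be stated operationally: the treewidth of G is the minimum, over elimination orderings, of the largest neighbourhood a vertex has at the moment it is eliminated. The brute-force sketch below is an illustration of that decision problem only, unrelated to the paper's sparsification and kernelization machinery, and is feasible only for tiny graphs.

```python
from itertools import permutations
import networkx as nx

def treewidth_exact(G):
    """Exact treewidth via brute force over elimination orderings: eliminating a
    vertex turns its current neighbourhood into a clique, and the width of an
    ordering is the largest such neighbourhood size.  Tiny graphs only."""
    best = len(G)
    for order in permutations(G.nodes):
        H = G.copy()
        width = 0
        for v in order:
            nbrs = list(H.neighbors(v))
            width = max(width, len(nbrs))
            if width >= best:
                break  # this ordering cannot improve on the best one found so far
            H.add_edges_from((a, b) for i, a in enumerate(nbrs) for b in nbrs[i + 1:])
            H.remove_node(v)
        best = min(best, width)
    return best

def has_treewidth_at_most(G, k):
    """The decision problem the abstract refers to."""
    return treewidth_exact(G) <= k

print(has_treewidth_at_most(nx.cycle_graph(5), 2))     # True: cycles have treewidth 2
print(has_treewidth_at_most(nx.complete_graph(4), 2))  # False: K4 has treewidth 3
```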

    Parameterized Complexity of Critical Node Cuts

    We consider the following natural graph cut problem called Critical Node Cut (CNC): Given a graph G on n vertices, and two positive integers k and x, determine whether G has a set of k vertices whose removal leaves G with at most x connected pairs of vertices. We analyze this problem in the framework of parameterized complexity. That is, we are interested in whether or not this problem is solvable in f(Îș) · n^{O(1)} time (i.e., whether or not it is fixed-parameter tractable), for various natural parameters Îș. We consider four such parameters:
    - The size k of the required cut.
    - The upper bound x on the number of remaining connected pairs.
    - The lower bound y on the number of connected pairs to be removed.
    - The treewidth w of G.
    We determine whether or not CNC is fixed-parameter tractable for each of these parameters. We determine this also for all possible aggregations of these four parameters, apart from w + k. Moreover, we also determine whether or not CNC admits a polynomial kernel for all these parameterizations. That is, whether or not there is an algorithm that reduces each instance of CNC in polynomial time to an equivalent instance of size Îș^{O(1)}, where Îș is the given parameter.
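    The problem statement translates directly into a brute-force decision procedure, which may help parse the roles of the parameters k and x. The sketch below is illustrative only, assumes the networkx library, and is unrelated to the paper's algorithms; it counts connected pairs after removing each candidate k-set.

```python
from itertools import combinations
import networkx as nx

def connected_pairs(G):
    """Number of unordered vertex pairs that lie in the same component."""
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def cnc(G, k, x):
    """Decide the Critical Node Cut instance (G, k, x) by brute force:
    is there a set of k vertices whose removal leaves at most x connected pairs?"""
    for S in combinations(G.nodes, k):
        H = G.copy()
        H.remove_nodes_from(S)
        if connected_pairs(H) <= x:
            return True
    return False

# On the path 0-1-2-3-4, deleting the middle vertex leaves two components of
# size 2, i.e. 1 + 1 = 2 connected pairs; no single deletion does better.
G = nx.path_graph(5)
print(cnc(G, k=1, x=2))  # True
print(cnc(G, k=1, x=1))  # False
```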