
    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size $\alpha$-approximate kernel. Loosely speaking, a polynomial size $\alpha$-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance $(I,k)$ of a parameterized problem, and outputs another instance $(I',k')$ of the same problem, such that $|I'| + k' \leq k^{O(1)}$. Additionally, for every $c \geq 1$, a $c$-approximate solution $s'$ to the pre-processed instance $(I',k')$ can be turned in polynomial time into a $(c \cdot \alpha)$-approximate solution $s$ to the original instance $(I,k)$. Our main technical contributions are $\alpha$-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless $NP \subseteq coNP/poly$. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness of approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an $\alpha$-approximate kernel of polynomial size, for any $\alpha \geq 1$, unless $NP \subseteq coNP/poly$. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation.
    Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size
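    To make the definition concrete, the following is a minimal sketch of how an $\alpha$-approximate kernel composes with an approximation algorithm. The functions `reduce`, `approx_solve`, and `lift` are hypothetical placeholders, not algorithms from the paper; the sketch only illustrates the $c \cdot \alpha$ accounting in the final guarantee.

```python
# Schematic composition of an alpha-approximate kernel with a
# c-approximation algorithm. All three callables are hypothetical
# stand-ins for a concrete kernelization and solver.

def solve_via_approximate_kernel(I, k, reduce, approx_solve, lift):
    """Given an alpha-approximate kernel (reduce, lift) and a
    c-approximation approx_solve, return a (c * alpha)-approximate
    solution to the original instance (I, k)."""
    I2, k2 = reduce(I, k)               # poly time; |I'| + k' <= k^{O(1)}
    s_prime = approx_solve(I2, k2)      # c-approximate on (I', k')
    return lift(I, k, I2, k2, s_prime)  # (c * alpha)-approximate on (I, k)
```

    For instance, with $\alpha = 2$ and a $c = 1.5$ approximation run on the kernel, the lifted solution is 3-approximate on the original instance.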

    Multidimensional Binary Vector Assignment problem: standard, structural and above guarantee parameterizations

    In this article we focus on the parameterized complexity of the Multidimensional Binary Vector Assignment problem (called mBVA). An input of this problem is defined by $m$ disjoint sets $V^1, V^2, \dots, V^m$, each composed of $n$ binary vectors of size $p$. An output is a set of $n$ disjoint $m$-tuples of vectors, where each $m$-tuple is obtained by picking one vector from each set $V^i$. To each $m$-tuple we associate a $p$-dimensional vector by applying the bit-wise AND operation on the $m$ vectors of the tuple. The objective is to minimize the total number of zeros in these $n$ vectors. mBVA can be seen as a variant of multidimensional matching where hyperedges are implicitly locally encoded via labels attached to vertices, but it was originally introduced in the context of integrated circuit manufacturing. We provide for this problem FPT algorithms and negative results (ETH-based results, $W[2]$-hardness and a kernel lower bound) according to several parameters: the standard parameter $k$ (i.e., the total number of zeros), as well as two parameters above some guaranteed values.
    Comment: 16 pages, 6 figures
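    The objective function is easy to state in code. Below is a minimal brute-force evaluator, assuming vectors are represented as lists of 0/1 integers; all names are my own choices, not taken from the paper, and the enumeration is exponential, so it is suitable only for tiny instances.

```python
from itertools import permutations, product

def and_tuple(vectors):
    """Bit-wise AND of the m binary vectors forming one m-tuple."""
    result = vectors[0]
    for v in vectors[1:]:
        result = [a & b for a, b in zip(result, v)]
    return result

def cost(assignment):
    """Total number of zeros over the AND-vectors of all n tuples."""
    return sum(and_tuple(t).count(0) for t in assignment)

def brute_force_mbva(sets):
    """Minimum cost over all assignments: fix the order of V^1 and try
    every permutation of each remaining set V^2, ..., V^m."""
    n = len(sets[0])
    best = None
    for perms in product(permutations(range(n)), repeat=len(sets) - 1):
        assignment = [
            [sets[0][j]] + [sets[i + 1][p[j]] for i, p in enumerate(perms)]
            for j in range(n)
        ]
        c = cost(assignment)
        if best is None or c < best:
            best = c
    return best

# Example with m = 2, n = 2, p = 2: the optimal pairing matches
# V1[0] with V2[1] and V1[1] with V2[0], giving 2 zeros in total.
V1 = [[1, 1], [1, 0]]
V2 = [[1, 0], [0, 1]]
print(brute_force_mbva([V1, V2]))  # -> 2
```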