Lossy Kernelization
In this paper we propose a new framework for analyzing the performance of
preprocessing algorithms. Our framework builds on the notion of kernelization
from parameterized complexity. However, as opposed to the original notion of
kernelization, our definitions combine well with approximation algorithms and
heuristics. The key new definition is that of a polynomial size
α-approximate kernel. Loosely speaking, a polynomial size
α-approximate kernel is a polynomial time pre-processing algorithm that
takes as input an instance (I, k) to a parameterized problem, and outputs
another instance (I', k') to the same problem, such that |I'| + k' ≤ k^{O(1)}.
Additionally, for every c ≥ 1, a c-approximate solution s'
to the pre-processed instance (I', k') can be turned in polynomial time into a
(c · α)-approximate solution s to the original instance (I, k).
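The definition above composes two loss factors: the kernel's factor α and the ratio c of whatever solver is run on the kernel. A minimal sketch of this composition (a toy illustration of the guarantee, not code from the paper; the function name is ours):

```python
def lifted_ratio(c: float, alpha: float) -> float:
    """Worst-case approximation ratio obtained by solving the kernelized
    instance with a c-approximation and lifting the solution back through
    an alpha-approximate kernel (minimization convention)."""
    return c * alpha

# An exact solver on the kernel (c = 1) through a 1.5-approximate kernel
# still only guarantees a 1.5-approximate solution to the original instance:
assert lifted_ratio(1.0, 1.5) == 1.5
```

In particular, a (1+ε)-approximate kernel for every ε > 0 lets the overall ratio approach that of the solver used on the kernel.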
Our main technical contributions are α-approximate kernels of
polynomial size for three problems, namely Connected Vertex Cover, Disjoint
Cycle Packing and Disjoint Factors. These problems are known not to admit any
polynomial size kernels unless NP ⊆ coNP/poly. Our approximate
kernels simultaneously beat both the lower bounds on the (normal) kernel size,
and the hardness of approximation lower bounds for all three problems. On the
negative side we prove that Longest Path parameterized by the length of the
path and Set Cover parameterized by the universe size do not admit even an
α-approximate kernel of polynomial size, for any α ≥ 1, unless
NP ⊆ coNP/poly. In order to prove this lower bound we need to combine
in a non-trivial way the techniques used for showing kernelization lower bounds
with the methods for showing hardness of approximation.
Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and
approximate kernel lower bounds for Set Cover and Hitting Set parameterized
by universe size
Multidimensional Binary Vector Assignment problem: standard, structural and above guarantee parameterizations
In this article we focus on the parameterized complexity of the
Multidimensional Binary Vector Assignment problem (called mBVA). An input of
this problem is defined by m disjoint sets V^1, V^2, ..., V^m, each
composed of n binary vectors of size p. An output is a set of n disjoint
m-tuples of vectors, where each m-tuple is obtained by picking one vector
from each set V^i. To each m-tuple we associate a p-dimensional vector by
applying the bit-wise AND operation on the m vectors of the tuple. The
objective is to minimize the total number of zeros in these n vectors. mBVA
can be seen as a variant of multidimensional matching where hyperedges are
implicitly locally encoded via labels attached to vertices, but was originally
introduced in the context of integrated circuit manufacturing.
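The objective described above can be sketched concretely: each m-tuple is collapsed by bit-wise AND, and the cost is the number of zero bits across the resulting vectors. A minimal illustration (function name and example instance are ours, not from the paper):

```python
from typing import List

def mbva_cost(tuples: List[List[List[int]]]) -> int:
    """Total number of zeros over the bit-wise ANDs of the given m-tuples.
    Each element of `tuples` is one m-tuple: m binary vectors of size p."""
    total_zeros = 0
    for m_tuple in tuples:
        p = len(m_tuple[0])
        for bit in range(p):
            # the AND at this position is 1 only if every vector has a 1 here
            if not all(vec[bit] for vec in m_tuple):
                total_zeros += 1
    return total_zeros

# A toy instance with m = 2, n = 2, p = 3:
tuples = [
    [[1, 0, 1], [1, 1, 1]],   # AND = 101 -> 1 zero
    [[0, 1, 1], [1, 1, 0]],   # AND = 010 -> 2 zeros
]
assert mbva_cost(tuples) == 3
```

The hard part of the problem is choosing which vectors to group into tuples; evaluating a given assignment, as above, is straightforward.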
We provide for this problem FPT algorithms and negative results (ETH-based
results, W[2]-hardness and a kernel lower bound) according to several
parameters: the standard parameter (i.e. the total number of zeros), as well
as two parameters above some guaranteed values.
Comment: 16 pages, 6 figures