A Simple Gap-Producing Reduction for the Parameterized Set Cover Problem
Given an n-vertex bipartite graph I=(S,U,E), the goal of the set cover problem is to find a minimum-sized subset of S such that every vertex in U is adjacent to some vertex of this subset. It is NP-hard to approximate set cover to within a (1-o(1))ln n factor [I. Dinur and D. Steurer, 2014]. If we use the size k of the optimum solution as the parameter, then the problem can be solved in n^{k+o(1)} time [Eisenbrand and Grandoni, 2004]. A natural question is: can we approximate set cover to within an o(ln n) factor in n^{k-epsilon} time?
In a recent breakthrough result [Karthik et al., 2018], Karthik, Laekhanukit and Manurangsi showed that assuming the Strong Exponential Time Hypothesis (SETH), for any computable function f, no f(k)*n^{k-epsilon}-time algorithm can approximate set cover to a factor below (log n)^{1/poly(k,e(epsilon))} for some function e.
This paper presents a simple gap-producing reduction which, given a set cover instance I=(S,U,E) and two integers k < h <= (1-o(1))sqrt[k]{log |S|/log log |S|}, outputs a new set cover instance I'=(S,U',E') with |U'|=|U|^{h^k}|S|^{O(1)} in |U|^{h^k}*|S|^{O(1)} time such that
- if I has a k-sized solution, then so does I';
- if I has no k-sized solution, then every solution of I' must contain at least h vertices.
Setting h=(1-o(1))sqrt[k]{log |S|/log log |S|}, we show that assuming SETH, for any computable function f, no f(k)*n^{k-epsilon}-time algorithm can distinguish between a set cover instance with a k-sized solution and one whose minimum solution size is at least (1-o(1))*sqrt[k]{(log n)/(log log n)}. This improves the result in [Karthik et al., 2018].
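For concreteness, a minimal sketch (not from the paper) of the straightforward n^{O(k)} baseline that the parameterized question above is measured against: brute-forcing all k-sized subsets of S on a bipartite instance I=(S,U,E). The adjacency-dict encoding of E and the function name are illustrative assumptions.

    from itertools import combinations

    def has_k_cover(S, U, E, k):
        """Return True if some k-sized subset of S covers every vertex of U.

        S, U: lists of vertices; E: dict mapping each s in S to the set of
        vertices in U adjacent to s. Roughly O(|S|^k * k * |U|) time, i.e.
        the plain brute-force baseline (the n^{k+o(1)} algorithm cited in
        the abstract is slightly faster)."""
        U = set(U)
        for subset in combinations(S, k):
            covered = set()
            for s in subset:
                covered |= E[s]
            if U <= covered:
                return True
        return False

    # Tiny hypothetical instance:
    # S = ["s1", "s2"]; U = ["u1", "u2", "u3"]
    # E = {"s1": {"u1", "u2"}, "s2": {"u2", "u3"}}
    # has_k_cover(S, U, E, 1)  # -> False
    # has_k_cover(S, U, E, 2)  # -> True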
Lossy Kernelization
In this paper we propose a new framework for analyzing the performance of
preprocessing algorithms. Our framework builds on the notion of kernelization
from parameterized complexity. However, as opposed to the original notion of
kernelization, our definitions combine well with approximation algorithms and
heuristics. The key new definition is that of a polynomial size
-approximate kernel. Loosely speaking, a polynomial size
-approximate kernel is a polynomial time pre-processing algorithm that
takes as input an instance to a parameterized problem, and outputs
another instance to the same problem, such that . Additionally, for every , a -approximate solution
to the pre-processed instance can be turned in polynomial time into a
-approximate solution to the original instance .
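A minimal sketch of the interface this definition describes, assuming a generic reduce/lift pair; the class and method names are hypothetical and not from the paper.

    class ApproximateKernel:
        """Sketch of an alpha-approximate kernel (illustrative only).

        reduce(I, k)   -> (I2, k2), an instance of the same problem with
                          |I2| + k2 bounded by a polynomial in k
        lift(I, k, s2) -> s, turning a c-approximate solution s2 of (I2, k2)
                          into a (c * alpha)-approximate solution s of (I, k)
        Both steps are required to run in polynomial time."""

        def __init__(self, alpha, reduce, lift):
            self.alpha = alpha
            self.reduce = reduce
            self.lift = lift

        def solve(self, I, k, approx_solver):
            # Preprocess, solve the small instance with any c-approximation,
            # then lift; the guarantee of the lifted solution degrades by alpha.
            I2, k2 = self.reduce(I, k)
            s2 = approx_solver(I2, k2)
            return self.lift(I, k, s2)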
Our main technical contributions are alpha-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/poly. Our approximate
kernels simultaneously beat both the lower bounds on the (normal) kernel size,
and the hardness of approximation lower bounds for all three problems. On the
negative side we prove that Longest Path parameterized by the length of the
path and Set Cover parameterized by the universe size do not admit even an
alpha-approximate kernel of polynomial size, for any alpha >= 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation.
Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.
On Directed Feedback Vertex Set parameterized by treewidth
We study the Directed Feedback Vertex Set problem parameterized by the
treewidth of the input graph. We prove that unless the Exponential Time
Hypothesis fails, the problem cannot be solved in time 2^{o(t log t)}*n^{O(1)} on general directed graphs, where t is the treewidth of the underlying undirected graph. This is matched by a dynamic programming algorithm with running time 2^{O(t log t)}*n^{O(1)}. On the other hand, we show that if the input digraph is planar, then the running time can be improved to 2^{O(t)}*n^{O(1)}.
Comment: 20
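For concreteness (illustrative, not the paper's algorithm): a directed feedback vertex set is a vertex set whose removal leaves the digraph acyclic, which the sketch below verifies with Kahn's topological sort. The encoding and function name are assumptions.

    from collections import deque

    def is_dfvs(n, edges, removed):
        """Check whether `removed` is a directed feedback vertex set of the
        digraph on vertices 0..n-1 with edge list `edges`: the graph induced
        on the remaining vertices must be acyclic (Kahn's algorithm)."""
        removed = set(removed)
        keep = [v for v in range(n) if v not in removed]
        indeg = {v: 0 for v in keep}
        adj = {v: [] for v in keep}
        for u, v in edges:
            if u not in removed and v not in removed:
                adj[u].append(v)
                indeg[v] += 1
        queue = deque(v for v in keep if indeg[v] == 0)
        seen = 0
        while queue:
            u = queue.popleft()
            seen += 1
            for v in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
        return seen == len(keep)  # all remaining vertices sorted => acyclic

    # Example: a 3-cycle 0->1->2->0 plus an edge 2->3; removing vertex 0 breaks the cycle.
    # is_dfvs(4, [(0, 1), (1, 2), (2, 0), (2, 3)], {0})  # -> True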
Complexity of Grundy coloring and its variants
The Grundy number of a graph is the maximum number of colors used by the
greedy coloring algorithm over all vertex orderings. In this paper, we study
the computational complexity of GRUNDY COLORING, the problem of determining
whether a given graph has Grundy number at least k. We also study the
variants WEAK GRUNDY COLORING (where the coloring is not necessarily proper)
and CONNECTED GRUNDY COLORING (where at each step of the greedy coloring
algorithm, the subgraph induced by the colored vertices must be connected).
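As a concrete illustration of these definitions (not code from the paper): the greedy coloring for a fixed vertex ordering, and a brute force over orderings that realizes the Grundy number on very small graphs. The adjacency-dict encoding and function names are illustrative.

    from itertools import permutations

    def greedy_colors_used(adj, ordering):
        """Color vertices in the given order; each vertex gets the smallest
        positive color not used by its already-colored neighbors.
        Returns the number of colors used."""
        color = {}
        for v in ordering:
            used = {color[u] for u in adj[v] if u in color}
            c = 1
            while c in used:
                c += 1
            color[v] = c
        return max(color.values()) if color else 0

    def grundy_number(adj):
        """Brute-force Grundy number: the maximum number of colors over all
        vertex orderings. Only feasible for very small graphs (n! orderings)."""
        vertices = list(adj)
        return max(greedy_colors_used(adj, p) for p in permutations(vertices))

    # Example: the path a-b-c-d has Grundy number 3 (order d, a, b, c uses 3 colors).
    # adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
    # grundy_number(adj)  # -> 3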
We show that GRUNDY COLORING can be solved in time O*(2.443^n) and WEAK GRUNDY COLORING in time O*(2.716^n) on graphs of order n. While GRUNDY COLORING and WEAK GRUNDY COLORING are known to be solvable in time O*(2^{O(wk)}) for graphs of treewidth w (where k is the number of colors), we prove that under the Exponential Time Hypothesis (ETH), they cannot be solved in time O*(2^{o(w log w)}). We also describe an O*(2^{2^{O(k)}}) algorithm for WEAK GRUNDY COLORING, which is therefore FPT for the parameter k. Moreover, under the ETH, we prove that such a
running time is essentially optimal (this lower bound also holds for GRUNDY
COLORING). Although we do not know whether GRUNDY COLORING is in FPT, we
show that this is the case for graphs belonging to a number of standard graph
classes including chordal graphs, claw-free graphs, and graphs excluding a
fixed minor. We also describe a quasi-polynomial time algorithm for GRUNDY
COLORING and WEAK GRUNDY COLORING on apex-minor-free graphs. In stark contrast with
the two other problems, we show that CONNECTED GRUNDY COLORING is
NP-complete already for k = 7 colors.
Comment: 24 pages, 7 figures. This version contains some new results and improvements. A short paper based on version v2 appeared in COCOON'1
A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms
Parameterization and approximation are two popular ways of coping with
NP-hard problems. More recently, the two have also been combined to derive many
interesting results. We survey developments in the area both from the
algorithmic and hardness perspectives, with emphasis on new techniques and
potential future research directions
Robust and Efficient Uncertainty Quantification and Validation of RFIC Isolation
Modern communication and identification products impose demanding constraints on the reliability of components. Because of this, statistical constraints increasingly enter the optimization formulations of electronic products. Yield constraints often require efficient sampling techniques that quantify uncertainty also at the tails of the distributions. These techniques should outperform standard Monte Carlo sampling, which is normally not efficient enough to resolve tail probabilities. One such technique, Importance Sampling, has successfully been applied to optimize Static Random Access Memories (SRAMs) while guaranteeing very small failure probabilities, even beyond 6-sigma variations of the parameters involved. In addition, emerging uncertainty quantification techniques offer expansions of the solution that serve as a response surface for statistics and optimization. To derive the coefficients of these expansions efficiently, one either has to solve a large number of problems or one huge combined problem. Here parameterized Model Order Reduction (MOR) techniques can be used to reduce the workload. To also reduce the number of parameters, we identify those that affect the variance only in a minor way; these can simply be set to a fixed value, and the remaining parameters can be viewed as dominant. Preservation of the variation also allows us to make statements about the approximation accuracy obtained by the parameter-reduced problem. This is illustrated on an RLC circuit. Additionally, the MOR technique used should not affect the variance significantly. Finally, we consider a methodology for reliable RFIC isolation using floor-plan modeling and isolation grounding. Simulations show good agreement with measurements.
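A minimal sketch, not the authors' implementation, of why importance sampling can estimate tail probabilities that plain Monte Carlo misses: draw from a proposal shifted toward the failure region and re-weight each sample by the likelihood ratio. The standard normal performance model and the choice of shift are simplifying assumptions.

    import math
    import random

    def normal_pdf(x, mu=0.0, sigma=1.0):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def tail_probability_is(threshold, n_samples=100_000, shift=None, seed=0):
        """Estimate P(X > threshold) for X ~ N(0, 1) by importance sampling:
        sample from a proposal N(shift, 1) centered near the failure region and
        weight each hit by the likelihood ratio p(x) / q(x)."""
        rng = random.Random(seed)
        if shift is None:
            shift = threshold  # center the proposal at the failure boundary
        total = 0.0
        for _ in range(n_samples):
            x = rng.gauss(shift, 1.0)
            if x > threshold:
                total += normal_pdf(x) / normal_pdf(x, mu=shift)
        return total / n_samples

    # Example: P(X > 6) is about 1e-9 ("6-sigma"); plain Monte Carlo with 1e5
    # samples would almost surely observe no failures at all, while the
    # re-weighted estimate remains usable:
    # tail_probability_is(6.0)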