Optimally Repurposing Existing Algorithms to Obtain Exponential-Time Approximations
The goal of this paper is to understand how exponential-time approximation
algorithms can be obtained from existing polynomial-time approximation
algorithms, existing parameterized exact algorithms, and existing parameterized
approximation algorithms. More formally, we consider a monotone subset
minimization problem over a universe of size $n$ (e.g., Vertex Cover or
Feedback Vertex Set). We have access to an algorithm that finds an
$\alpha$-approximate solution in time $c^k \cdot n^{O(1)}$ if a solution of
size $k$ exists (and more generally, an extension algorithm that can
approximate in a similar way if a set can be extended to a solution with $k$
further elements). Our goal is to obtain a time $d^n \cdot n^{O(1)}$
$\beta$-approximation algorithm for the problem with $\beta$ as small as possible.
That is, for every fixed $c$, $\alpha$, and $d$, we would like to determine
the smallest possible $\beta$ that can be achieved in a model where our
problem-specific knowledge is limited to checking the feasibility of a solution
and invoking the $\alpha$-approximate extension algorithm. Our results
completely resolve this question:
(1) For every fixed $c$, $\alpha$, and $d$, a simple algorithm
(``approximate monotone local search'') achieves the optimum value of $\beta$.
(2) Given $c$, $\alpha$, and $d$, we can efficiently compute the optimum
$\beta$ up to any precision $\varepsilon > 0$.
Earlier work presented algorithms (but no lower bounds) for the special case
$\alpha = \beta = 1$ [Fomin et al., J. ACM 2019] and for the special case
$\alpha = 1$ [Esmer et al., ESA 2022]. Our work generalizes these
results and in particular confirms that the earlier algorithms are optimal in
these special cases.

Comment: 80 pages, 5 figures
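The overall scheme can be illustrated in its simplest, exact form on a toy Vertex Cover instance: sample a random subset of the universe, force it into the solution, and ask an extension algorithm to complete it with few further elements. The sketch below is purely illustrative, assuming a hypothetical `extension_oracle` (here a brute-force search) that stands in for a genuine $c^k \cdot n^{O(1)}$-time parameterized extension algorithm; it is not the paper's algorithm or analysis.

```python
import random
from itertools import combinations

def is_vertex_cover(edges, s):
    """Feasibility check: does the set s touch every edge?"""
    return all(u in s or v in s for (u, v) in edges)

def extension_oracle(universe, edges, forced, k):
    """Hypothetical stand-in for the extension algorithm: can `forced` be
    extended to a vertex cover using at most k further elements?
    Returns such a cover, or None. (A real instantiation would run in
    c^k * n^O(1) time rather than by brute force.)"""
    rest = [v for v in universe if v not in forced]
    for j in range(k + 1):
        for extra in combinations(rest, j):
            candidate = set(forced) | set(extra)
            if is_vertex_cover(edges, candidate):
                return candidate
    return None

def monotone_local_search(universe, edges, k, t, trials=100, seed=0):
    """Toy monotone local search: repeatedly sample a random t-subset of the
    universe, force it into the solution, and ask the oracle to extend it
    with at most k - t further elements."""
    rng = random.Random(seed)
    for _ in range(trials):
        forced = rng.sample(universe, t)
        solution = extension_oracle(universe, edges, forced, k - t)
        if solution is not None:
            return solution
    return None

if __name__ == "__main__":
    universe = [0, 1, 2, 3, 4]
    edges = [(0, 1), (1, 2), (2, 3), (3, 4)]  # a path; optimum cover is {1, 3}
    cover = monotone_local_search(universe, edges, k=4, t=1)
    print(sorted(cover), is_vertex_cover(edges, cover))
```

The trade-off the paper studies appears in the choice of $t$: larger sampled sets shift work from the (expensive) extension calls to the (cheap) sampling, and the paper determines the optimal balance, and the optimal achievable approximation ratio, exactly.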