
    Optimally Repurposing Existing Algorithms to Obtain Exponential-Time Approximations

    The goal of this paper is to understand how exponential-time approximation algorithms can be obtained from existing polynomial-time approximation algorithms, existing parameterized exact algorithms, and existing parameterized approximation algorithms. More formally, we consider a monotone subset minimization problem over a universe of size n (e.g., Vertex Cover or Feedback Vertex Set). We have access to an algorithm that finds an α-approximate solution in time c^k · n^{O(1)} if a solution of size k exists (and more generally, an extension algorithm that can approximate in a similar way if a set can be extended to a solution with k further elements). Our goal is to obtain a d^n · n^{O(1)} time β-approximation algorithm for the problem with d as small as possible. That is, for every fixed α, c, β ≥ 1, we would like to determine the smallest possible d that can be achieved in a model where our problem-specific knowledge is limited to checking the feasibility of a solution and invoking the α-approximate extension algorithm. Our results completely resolve this question: (1) For every fixed α, c, β ≥ 1, a simple algorithm ("approximate monotone local search") achieves the optimum value of d. (2) Given α, c, β ≥ 1, we can efficiently compute the optimum d up to any precision ε > 0. Earlier work presented algorithms (but no lower bounds) for the special case α = β = 1 [Fomin et al., J. ACM 2019] and for the special case α = β > 1 [Esmer et al., ESA 2022]. Our work generalizes these results and in particular confirms that the earlier algorithms are optimal in these special cases.

    Comment: 80 pages, 5 figures
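    To make the monotone-local-search idea concrete, here is a minimal sketch for the exact special case (α = β = 1) on Vertex Cover: repeatedly sample a set of t vertices assumed to lie in the solution, then invoke an extension algorithm to find the remaining k − t elements. The extension algorithm below is a simple brute-force branching stand-in (not the paper's construction), and the function names and sampling budget are illustrative assumptions.

    ```python
    import random

    def has_vertex_cover_extension(edges, forced, k):
        """Can `forced` be extended by at most k more vertices to a vertex cover?
        Brute-force 2^k branching as a stand-in for a parameterized extension
        algorithm (here with approximation ratio alpha = 1)."""
        uncovered = [e for e in edges if e[0] not in forced and e[1] not in forced]
        if not uncovered:
            return True
        if k == 0:
            return False
        u, v = uncovered[0]
        # Branch on which endpoint covers the first uncovered edge.
        return (has_vertex_cover_extension(edges, forced | {u}, k - 1)
                or has_vertex_cover_extension(edges, forced | {v}, k - 1))

    def monotone_local_search(universe, edges, k, t, tries=2000, seed=0):
        """Decide (with one-sided error) whether a vertex cover of size <= k
        exists: sample t vertices assumed to be in the solution, then ask the
        extension algorithm for the remaining k - t elements. The number of
        repetitions `tries` is an illustrative constant, not the calibrated
        value from the analysis."""
        rng = random.Random(seed)
        universe = list(universe)
        for _ in range(tries):
            guess = set(rng.sample(universe, t))
            if has_vertex_cover_extension(edges, guess, k - t):
                return True
        return False
    ```

    The trade-off the paper optimizes lives in the choice of t: a larger t makes each guess less likely to be entirely contained in an optimal solution (more repetitions needed), but leaves a cheaper extension call with budget k − t; balancing the two yields the base d of the overall d^n · n^{O(1)} running time.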