
    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I, k) of a parameterized problem and outputs another instance (I', k') of the same problem such that |I'| + k' ≤ k^O(1). Additionally, for every c ≥ 1, a c-approximate solution s' to the pre-processed instance (I', k') can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness-of-approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine, in a non-trivial way, the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. (Comment: 58 pages. Version 2 contains new results: a PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.)
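    To make the two-stage contract above concrete, here is a minimal sketch (in Python) of the α-approximate kernel interface; the class and method names are illustrative, not from the paper.

```python
from typing import Callable, Tuple

Instance = Tuple[object, int]  # (I, k): problem instance and parameter


class ApproximateKernel:
    """Sketch of the alpha-approximate kernel contract described above.

    `reduce` must run in polynomial time and shrink (I, k) to (I', k')
    with |I'| + k' <= poly(k). `lift` must turn any c-approximate
    solution s' for (I', k') into a (c * alpha)-approximate solution
    for the original (I, k), again in polynomial time.
    """

    def __init__(self, alpha: float,
                 reduce: Callable[[Instance], Instance],
                 lift: Callable[[Instance, Instance, object], object]):
        self.alpha = alpha
        self._reduce = reduce
        self._lift = lift

    def solve(self, instance: Instance,
              approx_solver: Callable[[Instance], object]) -> object:
        """Preprocess, solve the small instance, then lift the answer.

        If `approx_solver` is a c-approximation, the returned solution
        is (c * alpha)-approximate for `instance`.
        """
        reduced = self._reduce(instance)
        s_prime = approx_solver(reduced)  # c-approximate on (I', k')
        return self._lift(instance, reduced, s_prime)
```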

    Bidimensionality and EPTAS

    Bidimensionality theory is a powerful framework for the development of meta-algorithmic techniques. It was introduced by Demaine et al. as a tool to obtain sub-exponential time parameterized algorithms for problems on H-minor-free graphs. Demaine and Hajiaghayi extended the theory to obtain PTASs for bidimensional problems, and subsequently improved these results to EPTASs. Fomin et al. related the theory to the existence of linear kernels for parameterized problems. In this paper we revisit bidimensionality theory from the perspective of approximation algorithms and redesign the framework for obtaining EPTASs to be more powerful, easier to apply and easier to understand. Two of the most widely used approaches to obtain PTASs on planar graphs are the Lipton-Tarjan separator based approach and Baker's approach. Demaine and Hajiaghayi strengthened both approaches using bidimensionality and obtained EPTASs for a multitude of problems. We unify the two strengthened approaches to combine the best of both worlds. At the heart of our framework is a decomposition lemma which states that for "most" bidimensional problems, there is a polynomial time algorithm which, given an H-minor-free graph G and an ε > 0 as input, outputs a vertex set X of size ε · OPT such that the treewidth of G − X is f(ε). Here, OPT is the objective function value of the problem in question and f is a function depending only on ε. This allows us to obtain EPTASs on (apex)-minor-free graphs for all problems covered by the previous framework, as well as for a wide range of packing problems, partial covering problems and problems that are neither closed under taking minors nor contractions. To the best of our knowledge, for many of these problems, including Cycle Packing, Vertex-H-Packing, Maximum Leaf Spanning Tree, and Partial r-Dominating Set, no EPTASs on planar graphs were previously known.
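    As a rough illustration of how such a decomposition lemma drives an EPTAS for a vertex-deletion minimization problem (such as Feedback Vertex Set): delete X, losing at most ε · OPT, and solve the remaining treewidth-f(ε) graph exactly. The sketch below uses placeholder subroutines `decompose` and `solve_bounded_treewidth` standing in for the lemma and the dynamic program; it is a schema, not the paper's algorithm.

```python
def eptas_via_decomposition(G, eps, decompose, solve_bounded_treewidth):
    """Schema of the EPTAS template for a vertex-deletion minimization
    problem on a graph G given as {vertex: set(neighbours)}.

    decompose(G, eps): placeholder for the decomposition lemma; returns
        a vertex set X with |X| <= eps * OPT such that G - X has
        treewidth at most f(eps).
    solve_bounded_treewidth(H): placeholder for an exact dynamic program
        over a tree decomposition of H (g(eps) * |H| time).

    Deleting X is always "safe" for such problems, and an optimal
    solution of G - X has size at most OPT(G), so the union has size
    at most eps * OPT + OPT = (1 + eps) * OPT.
    """
    X = set(decompose(G, eps))
    H = {v: nbrs - X for v, nbrs in G.items() if v not in X}  # G - X
    S = solve_bounded_treewidth(H)  # exact on the low-treewidth part
    return X | set(S)               # (1 + eps)-approximate solution
```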

    FPT approximations for packing and covering problems parameterized by elimination distance and even less

    For numerous graph problems in the realm of parameterized algorithms, using the size of a smallest deletion set (called a modulator) into well-understood graph families as the parameterization has led to a long and successful line of research. Recently, however, there has been an extensive study of structural parameters that are potentially much smaller than the modulator size. In particular, recent papers [Jansen et al., STOC 2021; Agrawal et al., SODA 2022] have studied parameterization by the size of the modulator to a graph family ℋ (mod_ℋ(⋅)), elimination distance to ℋ (ed_ℋ(⋅)), and ℋ-treewidth (tw_ℋ(⋅)). These parameters are related by the fact that tw_ℋ lower bounds ed_ℋ, which in turn lower bounds mod_ℋ. While these new parameters have been successfully exploited to design fast exact algorithms, their utility (especially that of ed_ℋ and tw_ℋ) in the context of approximation algorithms is mostly unexplored. The conceptual contribution of this paper is to present novel algorithmic meta-theorems that expand the impact of these structural parameters to the area of FPT approximation, mirroring their utility in the design of exact FPT algorithms. Precisely, we show that if a covering or packing problem is definable in Monadic Second Order Logic and has a property called Finite Integer Index (FII), then the existence of an FPT Approximation Scheme (FPT-AS, i.e., a (1±ε)-approximation) parameterized by mod_ℋ(⋅), ed_ℋ(⋅), and tw_ℋ(⋅) is in fact equivalent. As a consequence, we obtain FPT-ASes for a wide range of covering, packing, and domination problems on graphs with respect to these parameters. In the process, we show that several graph problems that are W[1]-hard parameterized by mod_ℋ admit FPT-ASes not only when parameterized by mod_ℋ, but even when parameterized by the potentially much smaller parameter tw_ℋ(⋅). In the spirit of [Agrawal et al., SODA 2022], our algorithmic results highlight a broader connection between these parameters in the world of approximation. As concrete exemplifications of our meta-theorems, we obtain FPT-ASes for well-studied graph problems such as Vertex Cover, Feedback Vertex Set, Cycle Packing and Dominating Set, parameterized by tw_ℋ(⋅) (and hence also by mod_ℋ(⋅) or ed_ℋ(⋅)), where ℋ is any family of minor-free graphs.
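    As a small aside on the parameters themselves: the snippet below brute-forces the recursive definition of elimination distance ed_ℋ on tiny graphs (exponential time, illustration only; all names are ours, not the paper's), and shows on disjoint triangles with ℋ = edgeless graphs how ed_ℋ can be far smaller than the modulator size mod_ℋ.

```python
# Recursive definition of elimination distance to a family H:
#   ed_H(G) = 0                       if G is in H,
#   ed_H(G) = max over components     if G is disconnected,
#   ed_H(G) = 1 + min_v ed_H(G - v)   if G is connected and not in H.

def components(adj):
    """Connected components of a graph given as {vertex: set(neighbours)}."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append({v: adj[v] & comp for v in comp})
    return comps

def delete(adj, v):
    return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

def elimination_distance(adj, in_family):
    """ed_H(adj) for the family recognised by the predicate `in_family`."""
    if not adj or in_family(adj):
        return 0
    comps = components(adj)
    if len(comps) > 1:
        return max(elimination_distance(c, in_family) for c in comps)
    return 1 + min(elimination_distance(delete(adj, v), in_family)
                   for v in adj)

# Example: H = edgeless graphs, so mod_H is the vertex cover number.
# For k disjoint triangles, mod_H = 2k (two vertices per triangle),
# yet ed_H stays 2: each component loses one vertex, then one more.
def edgeless(adj):
    return all(not nbrs for nbrs in adj.values())

triangles = {}
for t in range(3):  # 3 disjoint triangles on vertices 3t, 3t+1, 3t+2
    a, b, c = 3 * t, 3 * t + 1, 3 * t + 2
    triangles.update({a: {b, c}, b: {a, c}, c: {a, b}})
print(elimination_distance(triangles, edgeless))  # -> 2
```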

    Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack

    The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., (1+ε)-approximations in f(k,ε)·n^O(1) time, where k is some parameter of the input. The goal is to overcome lower bounds from either of the areas. We obtain the following results on parameterized approximability:
    - In the Maximum Independent Set of Rectangles problem (MISR) we are given a collection of n axis-parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time approximation factor is O(log log n) [Chalermsook and Chuzhoy, SODA'09] and it admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here we present a parameterized approximation scheme (PAS) for MISR, i.e., an algorithm that, for any given constant ε > 0 and integer k > 0, in time f(k,ε)·n^g(ε), either outputs a solution of size at least k/(1+ε), or declares that the optimum solution has size less than k.
    - In the (2-dimensional) Geometric Knapsack problem (2DK) we are given an axis-aligned square knapsack and a collection of axis-aligned rectangles in the plane (items). Our goal is to translate a maximum-cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of 2DK with rotations (2DKR), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factor is 2+ε [Jansen and Zhang, SODA'04]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for 2DKR.
    For all considered problems, getting time f(k,ε)·n^O(1), rather than f(k,ε)·n^g(ε), would give FPT-time f'(k)·n^O(1) exact algorithms by setting ε = 1/(k+1), contradicting W[1]-hardness. Instead, for each fixed ε > 0, our PASs give (1+ε)-approximate solutions in FPT time. For both MISR and 2DKR our techniques also give rise to preprocessing algorithms that take n^g(ε) time and return a subset of at most k^g(ε) rectangles/items that contains a solution of size at least k/(1+ε) if a solution of size k exists. This is a special case of the recently introduced notion of a polynomial-size approximate kernelization scheme [Lokshtanov et al., STOC'17].
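    To spell out the arithmetic behind the W[1]-hardness contradiction mentioned above, here is the worked bound (in LaTeX) for ε = 1/(k+1):

```latex
\[
  \frac{k}{1+\varepsilon}
    = \frac{k}{1+\frac{1}{k+1}}
    = \frac{k(k+1)}{k+2}
    = k - \frac{k}{k+2}
    > k - 1 ,
\]
```

    so any integral solution of size at least k/(1+ε) already has size at least k; a PAS running in f(k,ε)·n^O(1) time would therefore decide the exact parameterized problem.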

    Hitting and Harvesting Pumpkins

    The "c-pumpkin" is the graph with two vertices linked by c>0 parallel edges. A c-pumpkin-model in a graph G is a pair A,B of disjoint subsets of vertices of G, each inducing a connected subgraph of G, such that there are at least c edges in G between A and B. We focus on covering and packing c-pumpkin-models in a given graph: On the one hand, we provide an FPT algorithm running in time 2^O(k) n^O(1) deciding, for any fixed c>0, whether all c-pumpkin-models can be covered by at most k vertices. This generalizes known single-exponential FPT algorithms for Vertex Cover and Feedback Vertex Set, which correspond to the cases c=1,2 respectively. On the other hand, we present a O(log n)-approximation algorithm for both the problems of covering all c-pumpkin-models with a smallest number of vertices, and packing a maximum number of vertex-disjoint c-pumpkin-models.Comment: v2: several minor change

    Bidimensionality and Geometric Graphs

    In this paper we use several of the key ideas from bidimensionality to give a new generic approach to design EPTASs and subexponential-time parameterized algorithms for problems on classes of graphs that are not minor-closed, but instead exhibit a geometric structure. In particular, we present EPTASs and subexponential-time parameterized algorithms for Feedback Vertex Set, Vertex Cover, Connected Vertex Cover and Diamond Hitting Set on map graphs and unit disk graphs, and for Cycle Packing and Minimum-Vertex Feedback Edge Set on unit disk graphs. Our results are based on the recent decomposition theorems proved by Fomin et al. [SODA 2011], and our algorithms work directly on the input graph; thus it is not necessary to compute the geometric representation of the input graph. To the best of our knowledge, these results were previously unknown, with the exception of the EPTAS and the subexponential-time parameterized algorithm for Vertex Cover on unit disk graphs, which were obtained by Marx [ESA 2005] and Alber and Fiala [J. Algorithms 2004], respectively. We proceed to show that our approach cannot be extended in its full generality to more general classes of geometric graphs, such as intersection graphs of unit balls in R^d, d ≥ 3. Specifically, we prove that Feedback Vertex Set on unit-ball graphs in R^3 admits neither a PTAS unless P = NP, nor a subexponential-time algorithm unless the Exponential Time Hypothesis fails. Additionally, we show that the decomposition theorems on which our approach is based fail for disk graphs, and that therefore any extension of our results to disk graphs would require new algorithmic ideas. On the other hand, we prove that our EPTASs and subexponential-time algorithms for Vertex Cover and Connected Vertex Cover carry over both to disk graphs and to unit-ball graphs in R^d for every fixed d.

    Data Reductions and Combinatorial Bounds for Improved Approximation Algorithms

    Kernelization algorithms in the context of parameterized complexity are often based on a combination of reduction rules and combinatorial insights. In this paper we expose a similar strategy for obtaining polynomial-time approximation algorithms. Our method features the use of approximation-preserving reductions, akin to the notion of parameterized reductions. We exemplify this method to obtain the currently best approximation algorithms for Harmless Set, Differential and Multiple Nonblocker, all of which can be considered in the context of securing networks or information propagation.
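    In outline, the strategy described reads like the following pipeline (a sketch under our own naming, not the paper's code): apply approximation-preserving reduction rules until none fires, approximate the fully reduced instance using the combinatorial bound, then replay the reductions backwards on the solution.

```python
def approximate_via_reductions(instance, rules, approx_on_reduced):
    """Sketch of the reduce-then-approximate strategy described above.

    rules: list of functions; rule(instance) returns None if the rule
        does not apply, else (smaller_instance, undo), where undo maps
        a solution of the smaller instance back to one of the original
        instance, preserving the approximation ratio.
    approx_on_reduced: approximation algorithm whose ratio is justified
        by a combinatorial bound that holds on fully reduced instances.
    """
    undos = []
    applied = True
    while applied:                      # apply rules exhaustively
        applied = False
        for rule in rules:
            result = rule(instance)
            if result is not None:
                instance, undo = result
                undos.append(undo)
                applied = True
    solution = approx_on_reduced(instance)
    for undo in reversed(undos):        # lift the solution back
        solution = undo(solution)
    return solution
```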