
    A shortcut to (sun)flowers: Kernels in logarithmic space or linear time

    We investigate whether kernelization results can be obtained if we restrict kernelization algorithms to run in logarithmic space. This restriction for kernelization is motivated by the question of what results are attainable for preprocessing via simple and/or local reduction rules. We find kernelizations for d-Hitting Set(k), d-Set Packing(k), Edge Dominating Set(k) and a number of hitting and packing problems in graphs, each running in logspace. Additionally, we return to the question of linear-time kernelization. For d-Hitting Set(k) a linear-time kernelization was given by van Bevern [Algorithmica (2014)]. We give a simpler procedure and save a large constant factor in the size bound. Furthermore, we show that we can obtain a linear-time kernel for d-Set Packing(k) as well. (Comment: 18 pages.)
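
    As a concrete point of reference for what such a kernelization does, here is a minimal Python sketch of the textbook Buss reduction for Vertex Cover, i.e. the d = 2 case of d-Hitting Set. It is only an illustration of the notion of a kernel and is neither the logspace nor the linear-time procedure described above; the function name and interface are ours.

        def buss_kernel(edges, k):
            # Textbook Buss kernelization for Vertex Cover (2-Hitting Set); illustrative only.
            # Returns (reduced_edges, reduced_k, forced_vertices), or None for a no-instance.
            edges = {frozenset(e) for e in edges}
            forced = set()
            while True:
                degree = {}
                for e in edges:
                    for v in e:
                        degree[v] = degree.get(v, 0) + 1
                high = [v for v, d in degree.items() if d > k]
                if not high:
                    break
                v = high[0]
                forced.add(v)                        # > k incident edges: every size-<=k cover uses v
                edges = {e for e in edges if v not in e}
                k -= 1
                if k < 0:
                    return None                      # too many forced vertices: no-instance
            if len(edges) > k * k:                   # k vertices of degree <= k cover at most k^2 edges
                return None
            return edges, k, forced

        # Example: on a star with k+1 leaves, the center is forced and an empty instance remains.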

    Lossy Kernelization for (Implicit) Hitting Set Problems

    We re-visit the complexity of polynomial time pre-processing (kernelization) for the d-Hitting Set problem. This is one of the most classic problems in Parameterized Complexity by itself, and, furthermore, it encompasses several other of the most well-studied problems in this field, such as Vertex Cover, Feedback Vertex Set in Tournaments (FVST) and Cluster Vertex Deletion (CVD). In fact, d-Hitting Set encompasses any deletion problem to a hereditary property that can be characterized by a finite set of forbidden induced subgraphs. With respect to bit size, the kernelization complexity of d-Hitting Set is essentially settled: there exists a kernel with O(k^d) bits (O(k^d) sets and O(k^{d-1}) elements), and this is tight by the result of Dell and van Melkebeek [STOC 2010, JACM 2014]. Still, the question of whether there exists a kernel for d-Hitting Set with fewer elements has remained one of the major open problems in Kernelization. In this paper, we first show that if we allow the kernelization to be lossy with a qualitatively better loss than the best possible approximation ratio of polynomial time approximation algorithms, then one can obtain kernels where the number of elements is linear for every fixed d. Further, based on this, we present our main result: we show that there exist approximate Turing kernelizations for d-Hitting Set that even beat the established bit-size lower bounds for exact kernelizations; in fact, we use a constant number of oracle calls, each with "near linear" (O(k^{1+ε})) bit size, that is, almost the best one could hope for. Lastly, for two special cases of implicit 3-Hitting Set, namely FVST and CVD, we obtain "best of both worlds" type results: (1+ε)-approximate kernelizations with a linear number of vertices. In terms of size, this substantially improves the exact kernels of Fomin et al. [SODA 2018, TALG 2019], with simpler arguments.

    Streaming Kernelization

    Kernelization is a formalization of preprocessing for combinatorially hard problems. We modify the standard definition for kernelization, which allows any polynomial-time algorithm for the preprocessing, by requiring instead that the preprocessing runs in a streaming setting and uses O(poly(k) log |x|) bits of memory on instances (x, k). We obtain several results in this new setting, depending on the number of passes over the input that such a streaming kernelization is allowed to make. Edge Dominating Set turns out to be an interesting example because it has no single-pass kernelization but two passes over the input suffice to match the bounds of the best standard kernelization.
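
    To make the streaming setting concrete, the following is a small Python sketch of a standard single-pass rule for Vertex Cover parameterized by k (an assumed illustration, not the Edge Dominating Set kernel from the abstract): an incoming edge is discarded if one endpoint already has k+1 stored edges, since that endpoint is forced into any solution of size at most k, and the stream is rejected once more than k(k+1) edges are stored.

        def streaming_vc_kernel(edge_stream, k):
            # One-pass streaming kernelization sketch for Vertex Cover(k); illustrative only.
            # Stores O(k^2) edges: an edge is kept only if neither endpoint already has
            # k+1 stored edges (such an endpoint lies in every size-<=k cover anyway).
            stored = set()
            degree = {}
            for u, v in edge_stream:
                e = frozenset((u, v))
                if e in stored:
                    continue                              # ignore duplicate edges in the stream
                if degree.get(u, 0) > k or degree.get(v, 0) > k:
                    continue                              # edge already covered by a forced vertex
                stored.add(e)
                degree[u] = degree.get(u, 0) + 1
                degree[v] = degree.get(v, 0) + 1
                if len(stored) > k * (k + 1):             # k vertices cover at most k(k+1) stored edges
                    return None                           # no vertex cover of size <= k exists
            return stored, k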

    Hitting forbidden minors: Approximation and Kernelization

    We study a general class of problems called F-deletion problems. In an F-deletion problem, we are asked whether a subset of at most k vertices can be deleted from a graph G such that the resulting graph does not contain as a minor any graph from the family F of forbidden minors. We obtain a number of algorithmic results on the F-deletion problem when F contains a planar graph. We give (1) a linear vertex kernel on graphs excluding the t-claw K_{1,t}, the star with t leaves, as an induced subgraph, where t is a fixed integer; (2) an approximation algorithm achieving an approximation ratio of O(log^{3/2} OPT), where OPT is the size of an optimal solution on general undirected graphs. Finally, we obtain polynomial kernels for the case when F contains the graph θ_c as a minor for a fixed integer c. The graph θ_c consists of two vertices connected by c parallel edges. Even though this may appear to be a very restricted class of problems, it already encompasses well-studied problems such as Vertex Cover, Feedback Vertex Set and Diamond Hitting Set. The generic kernelization algorithm is based on a non-trivial application of protrusion techniques, previously used only for problems on topological graph classes.

    Hierarchies of Inefficient Kernelizability

    The framework of Bodlaender et al. (ICALP 2008) and Fortnow and Santhanam (STOC 2008) allows us to exclude the existence of polynomial kernels for a range of problems under reasonable complexity-theoretical assumptions. However, there are also some issues that are not addressed by this framework, including the existence of Turing kernels such as the "kernelization" of Leaf Out Branching(k) into a disjunction over n instances of size poly(k). Observing that Turing kernels are preserved by polynomial parametric transformations, we define a kernelization hardness hierarchy, akin to the M- and W-hierarchies of ordinary parameterized complexity, by the PPT-closure of problems that seem likely to be fundamentally hard for efficient Turing kernelization. We find that several previously considered problems are complete for our fundamental hardness class, including Min Ones d-SAT(k), Binary NDTM Halting(k), Connected Vertex Cover(k), and Clique(k log n), the clique problem parameterized by k log n.

    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I, k) of a parameterized problem and outputs another instance (I', k') of the same problem, such that |I'| + k' ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s' to the pre-processed instance (I', k') can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size and the hardness of approximation lower bounds for all three problems. On the negative side, we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation. (Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.)
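
    The definition above suggests a simple reduce-then-lift interface. The Python sketch below is an assumed illustration of that interface (the names ApproximateKernel, reduce, lift and solve_with_kernel are ours, not from the paper): any c-approximation applied to the reduced instance yields a (c · α)-approximation for the original one after lifting.

        from dataclasses import dataclass
        from typing import Any, Callable, Tuple

        @dataclass
        class ApproximateKernel:
            # Hypothetical interface for an alpha-approximate kernel, mirroring the definition above.
            alpha: float
            # reduce: maps (I, k) to (I', k') with |I'| + k' bounded by a polynomial in k.
            reduce: Callable[[Any, int], Tuple[Any, int]]
            # lift: turns a c-approximate solution of (I', k') into a (c * alpha)-approximate
            # solution of (I, k); it may inspect both instances.
            lift: Callable[[Any, int, Any, int, Any], Any]

        def solve_with_kernel(kernel: ApproximateKernel, instance, k, approx_solver):
            # Reduce, solve the small instance with any c-approximation, then lift back.
            reduced, k_reduced = kernel.reduce(instance, k)
            reduced_solution = approx_solver(reduced, k_reduced)
            return kernel.lift(instance, k, reduced, k_reduced, reduced_solution)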

    Meta-Kernelization using Well-Structured Modulators

    Kernelization investigates exact preprocessing algorithms with performance guarantees. The most prevalent type of parameter used in kernelization is the solution size for optimization problems; however, structural parameters have also been successfully used to obtain polynomial kernels for a wide range of problems. Many of these parameters can be defined as the size of a smallest modulator of the given graph into a fixed graph class (i.e., a set of vertices whose deletion puts the graph into the graph class). Such parameters admit the construction of polynomial kernels even when the solution size is large or not applicable. This work follows up on the research on meta-kernelization frameworks in terms of structural parameters. We develop a class of parameters which are based on a more general view of modulators: instead of size, the parameters employ a combination of rank-width and split decompositions to measure the structure inside the modulator. This allows us to lift kernelization results from modulator-size to more general parameters, hence providing smaller kernels. We show (i) how such large but well-structured modulators can be efficiently approximated, (ii) how they can be used to obtain polynomial kernels for any graph problem expressible in Monadic Second Order logic, and (iii) how they allow the extension of previous results in the area of structural meta-kernelization.
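
    For intuition about the modulator notion used above, checking whether a given vertex set is a modulator into a fixed class is straightforward. The Python sketch below (using networkx, with cluster graphs as an assumed example class; the function name is ours) verifies that deleting X leaves a graph in which every connected component is a clique.

        import networkx as nx

        def is_cluster_modulator(G: nx.Graph, X) -> bool:
            # X is a modulator of G into cluster graphs iff every connected
            # component of G - X is a clique.
            H = G.subgraph(set(G.nodes) - set(X))
            for component in nx.connected_components(H):
                n = len(component)
                if H.subgraph(component).number_of_edges() != n * (n - 1) // 2:
                    return False
            return True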

    An Approximate Kernel for Connected Feedback Vertex Set

    The Feedback Vertex Set problem is a fundamental computational problem which has been the subject of intensive study in various domains of algorithmics. In this problem, one is given an undirected graph G and an integer k as input. The objective is to determine whether at most k vertices can be deleted from G such that the resulting graph is acyclic. The study of preprocessing algorithms for this problem has a long and rich history, culminating in the quadratic kernelization of Thomassé [SODA 2010]. However, it is known that when the solution is required to induce a connected subgraph (such a set is called a connected feedback vertex set), a polynomial kernelization is unlikely to exist and the problem is NP-hard to approximate below a factor of 2 (assuming the Unique Games Conjecture). In this paper, we show that if one is interested in only preserving approximate solutions (even of quality arbitrarily close to the optimum), then there is a drastic improvement in our ability to preprocess this problem. Specifically, we prove that for every fixed 0 < epsilon < 1, graph G, and k in N, the following holds: there is a polynomial time computable graph G' of size k^O(1) such that for every c >= 1, any c-approximate connected feedback vertex set of G' of size at most k is a c * (1+epsilon)-approximate connected feedback vertex set of G. Our result adds to the set of approximate kernelization algorithms introduced by Lokshtanov et al. [STOC 2017]. As a consequence of our main result, we show that Connected Feedback Vertex Set can be approximated within a factor min{OPT^O(1), n^(1-delta)} in polynomial time for some delta > 0.
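
    As a small companion to the statement above, the following Python sketch (using networkx; the function name is ours) checks the defining property of a connected feedback vertex set: the set induces a connected subgraph and its removal leaves an acyclic graph.

        import networkx as nx

        def is_connected_fvs(G: nx.Graph, S) -> bool:
            # S is a connected feedback vertex set of G iff G[S] is connected
            # and G - S is a forest.
            S = set(S)
            if S and not nx.is_connected(G.subgraph(S)):
                return False                      # G[S] must be connected (vacuous if S is empty)
            rest = set(G.nodes) - S
            if not rest:
                return True                       # nothing remains, trivially acyclic
            return nx.is_forest(G.subgraph(rest))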

    Exploiting c-Closure in Kernelization Algorithms for Graph Problems

    A graph is c-closed if every pair of vertices with at least c common neighbors is adjacent. The c-closure of a graph G is the smallest number c such that G is c-closed. Fox et al. [ICALP '18] defined c-closure and investigated it in the context of clique enumeration. We show that c-closure can be applied in kernelization algorithms for several classic graph problems. We show that Dominating Set admits a kernel of size k^O(c), that Induced Matching admits a kernel with O(c^7 * k^8) vertices, and that Irredundant Set admits a kernel with O(c^(5/2) * k^3) vertices. Our kernelizations exploit the fact that c-closed graphs have polynomially bounded Ramsey numbers, as we show.
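
    Since the abstract's definition of c-closure is fully explicit, it translates directly into code. The Python sketch below computes the c-closure of a graph given as an adjacency dictionary; it is a by-definition check, not one of the kernelization algorithms, and the function name is ours.

        from itertools import combinations

        def c_closure(adj):
            # c-closure of a graph given as {vertex: set of neighbors}: the smallest c such
            # that every pair of vertices with at least c common neighbors is adjacent.
            c = 1
            for u, v in combinations(adj, 2):
                if v not in adj[u]:                       # only non-adjacent pairs can violate c-closedness
                    c = max(c, len(adj[u] & adj[v]) + 1)  # c must exceed their common neighborhood
            return c

        # Example: the 4-cycle 0-1-2-3-0 has c-closure 3, since its non-adjacent pairs share two neighbors.
        # c_closure({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}})  ->  3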