
    Lossy Kernels for Hitting Subgraphs

    In this paper, we study the Connected H-Hitting Set and Dominating Set problems from the perspective of approximate kernelization, a framework recently introduced by Lokshtanov et al. [STOC 2017]. For the Connected H-Hitting Set problem, we obtain an α-approximate kernel for every α > 1 and complement it with a lower bound for the natural weighted version. We then perform a refined analysis of the tradeoff between the approximation factor and kernel size for the Dominating Set problem on d-degenerate graphs, and provide an interpolation of approximate kernels between the known d^2-approximate kernel of constant size and the 1-approximate kernel of size k^{O(d^2)}.

    Lossy Kernelization for (Implicit) Hitting Set Problems

    We revisit the complexity of polynomial-time preprocessing (kernelization) for the d-Hitting Set problem. This is one of the classic problems in Parameterized Complexity in its own right, and, furthermore, it encompasses several of the most well-studied problems in this field, such as Vertex Cover, Feedback Vertex Set in Tournaments (FVST) and Cluster Vertex Deletion (CVD). In fact, d-Hitting Set encompasses any deletion problem to a hereditary property that can be characterized by a finite set of forbidden induced subgraphs. With respect to bit size, the kernelization complexity of d-Hitting Set is essentially settled: there exists a kernel with O(k^d) bits (O(k^d) sets and O(k^{d-1}) elements), and this is tight by the result of Dell and van Melkebeek [STOC 2010, JACM 2014]. Still, the question of whether there exists a kernel for d-Hitting Set with fewer elements has remained one of the major open problems in Kernelization. In this paper, we first show that if we allow the kernelization to be lossy, with a qualitatively better loss than the best possible approximation ratio of polynomial-time approximation algorithms, then one can obtain kernels in which the number of elements is linear for every fixed d. Further, based on this, we present our main result: we show that there exist approximate Turing kernelizations for d-Hitting Set that even beat the established bit-size lower bounds for exact kernelizations; in fact, we use a constant number of oracle calls, each of "near linear" (O(k^{1+ε})) bit size, which is almost the best one could hope for. Lastly, for two special cases of implicit 3-Hitting Set, namely FVST and CVD, we obtain "best of both worlds" results: (1+ε)-approximate kernelizations with a linear number of vertices. In terms of size, this substantially improves the exact kernels of Fomin et al. [SODA 2018, TALG 2019], with simpler arguments.
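
    As a toy illustration of the kind of approximation that lossy kernels for d-Hitting Set interact with, here is a minimal sketch of the folklore d-approximation: greedily collect pairwise disjoint unhit sets and take all of their elements. The function name and instance are hypothetical, and this is not the kernelization from the paper.

```python
# Folklore d-approximation for d-Hitting Set: a minimal sketch,
# not the kernelization from the paper.

def d_hitting_set_approx(sets):
    """Greedy d-approximation.

    sets: iterable of sets, each of size at most d.
    Implicitly picks a maximal collection of pairwise disjoint sets
    and returns the union of their elements. Any hitting set must
    contain a distinct element of each picked set, so the output
    has size at most d * OPT.
    """
    solution = set()
    for s in sets:
        if not (s & solution):   # s is still unhit
            solution |= s        # hit it by taking all of s
    return solution

# Example: a 3-Hitting Set instance.
instance = [{1, 2, 3}, {3, 4, 5}, {1, 6, 7}, {8, 9}]
print(d_hitting_set_approx(instance))  # e.g. {1, 2, 3, 8, 9}
```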

    Approximation and Compression Techniques to Enhance Performance of Graphics Processing Units

    A key challenge in modern computing systems is to access data fast enough to fully utilize the computing elements in the chip. In Graphics Processing Units (GPUs), performance is often constrained by register file size, memory bandwidth, and the capacity of the main memory. One important technique for alleviating this challenge is data compression. By reducing the amount of data that needs to be communicated or stored, memory resources crucial for performance can be used more efficiently.

    This thesis provides a set of approximation and compression techniques for GPUs, with the goal of efficiently utilizing the computational fabric and thereby increasing performance. The thesis shows that these techniques can substantially lower the amount of information the system has to process, and are thus important tools for meeting challenges in memory utilization.

    This thesis makes contributions within three areas: controlled floating-point precision reduction, lossless and lossy memory compression, and distributed training of neural networks. In the first area, the thesis shows that through automated and controlled floating-point approximation, the register file can be utilized more efficiently. This is achieved through a framework which establishes a cross-layer connection between the application and the microarchitecture layer, and a novel register file organization capable of leveraging low-precision floating-point values and narrow integers for increased capacity and performance.

    Within the area of compression, this thesis aims at increasing the effective bandwidth of GPUs by presenting a lossless and lossy memory compression algorithm that reduces the amount of transferred data. In contrast to state-of-the-art compression techniques such as Base-Delta-Immediate and Bitplane Compression, which use intra-block bases for compression, the proposed algorithm leverages multiple global base values to reach a higher compression ratio. The algorithm includes an optional approximation step for floating-point values which offers a higher compression ratio at a given, low error rate.

    Finally, within the area of distributed training of neural networks, this thesis proposes a subgraph approximation scheme for graph data which mitigates accuracy loss in a distributed setting. The scheme allows neural network models that use graphs as inputs to converge at single-machine accuracy, while minimizing synchronization overhead between the machines.
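
    To make the contrast between intra-block and global bases concrete, the sketch below compresses integer blocks as deltas against a small set of shared base values. It is a minimal Python illustration under assumed bases, field widths, and encoding; it is not the algorithm proposed in the thesis.

```python
# Minimal sketch of base-delta compression with global bases.
# Bases, delta widths, and encoding are illustrative assumptions,
# not the thesis's actual algorithm.

GLOBAL_BASES = [0, 0x1000, 0xFFFF0000]  # shared across all blocks
DELTA_BITS = 8                           # deltas must fit in 8 bits

def compress_block(values):
    """Try to encode a block as (base_index, deltas).

    Returns None if no global base yields small enough deltas,
    in which case the block would be stored uncompressed.
    """
    for idx, base in enumerate(GLOBAL_BASES):
        deltas = [v - base for v in values]
        if all(0 <= d < (1 << DELTA_BITS) for d in deltas):
            return (idx, deltas)  # a few base-index bits + 8-bit deltas
    return None

def decompress_block(encoded):
    idx, deltas = encoded
    return [GLOBAL_BASES[idx] + d for d in deltas]

block = [0x1003, 0x1010, 0x10FF, 0x1001]
enc = compress_block(block)
assert decompress_block(enc) == block
```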

    Lossy Kernelization

    In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I,k) of a parameterized problem, and outputs another instance (I',k') of the same problem, such that |I'| + k' ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s' to the pre-processed instance (I',k') can be turned in polynomial time into a (c·α)-approximate solution s to the original instance (I,k). Our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation.

    Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and approximate kernel lower bounds for Set Cover and Hitting Set parameterized by universe size.
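
    Read operationally, the definition above is a reduce-then-lift contract. The sketch below phrases it as a small interface with hypothetical names; it mirrors the definition rather than implementing any kernel from the paper.

```python
# Schematic reduce/lift interface of an α-approximate kernel.
# Names and types are hypothetical; this mirrors the definition,
# it is not an implementation from the paper.

from typing import Callable, Tuple

Instance = Tuple[object, int]   # (I, k)
Solution = object

class ApproximateKernel:
    def __init__(self, alpha: float,
                 reduce: Callable[[Instance], Instance],
                 lift: Callable[[Instance, Instance, Solution], Solution]):
        self.alpha = alpha
        self.reduce = reduce  # must output (I',k') with |I'| + k' <= poly(k)
        self.lift = lift      # c-approx for (I',k') -> (c*alpha)-approx for (I,k)

    def solve(self, instance: Instance,
              solver: Callable[[Instance], Solution]) -> Solution:
        """Run any c-approximate solver on the kernel and lift back:
        the result is a (c * alpha)-approximate solution for `instance`."""
        reduced = self.reduce(instance)
        return self.lift(instance, reduced, solver(reduced))
```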

    Approximate Turing Kernelization for Problems Parameterized by Treewidth

    We extend the notion of lossy kernelization, introduced by Lokshtanov et al. [STOC 2017], to approximate Turing kernelization. An α-approximate Turing kernel for a parameterized optimization problem is a polynomial-time algorithm that, when given access to an oracle that outputs c-approximate solutions in O(1) time, obtains an (α·c)-approximate solution to the considered problem, using calls to the oracle of size at most f(k) for some function f that only depends on the parameter. Using this definition, we show that Independent Set parameterized by treewidth ℓ has a (1+ε)-approximate Turing kernel with O(ℓ^2/ε) vertices, answering an open question posed by Lokshtanov et al. [STOC 2017]. Furthermore, we give (1+ε)-approximate Turing kernels for the following graph problems parameterized by treewidth: Vertex Cover, Edge Clique Cover, Edge-Disjoint Triangle Packing and Connected Vertex Cover. We generalize the results for Independent Set and Vertex Cover by showing that all graph problems that we will call "friendly" admit (1+ε)-approximate Turing kernels of polynomial size when parameterized by treewidth. We use this to obtain approximate Turing kernels for Vertex-Disjoint H-packing for connected graphs H, Clique Cover, Feedback Vertex Set and Edge Dominating Set.
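
    As a toy instance of this oracle-call pattern (on paths, i.e., treewidth 1, rather than the paper's general setting), the sketch below computes a (1+ε)-approximate independent set by splitting the path into oracle-sized blocks separated by dropped vertices. All names are hypothetical.

```python
# Toy (1+ε)-approximate Turing kernel pattern on a path graph
# (treewidth 1). A simplified illustration of splitting into
# oracle-sized pieces, not the paper's general algorithm.

import math

def exact_path_oracle(n):
    """Exact oracle for a path on n vertices: every other vertex."""
    return list(range(0, n, 2))

def approx_independent_set_on_path(n, eps):
    """Blocks of b = ceil(2/eps) vertices, one separator vertex
    dropped between consecutive blocks, so block solutions stay
    independent. At most n/(b+1) optimum vertices are lost, and
    since OPT >= n/2 on a path, that is at most an eps fraction."""
    b = math.ceil(2 / eps)
    solution = []
    start = 0
    while start < n:
        block = min(b, n - start)
        # oracle call on a piece whose size depends only on eps
        solution += [start + v for v in exact_path_oracle(block)]
        start += block + 1   # skip one separator vertex
    return solution

sol = approx_independent_set_on_path(100, eps=0.1)
print(len(sol))  # 48, close to the optimum of 50
```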

    Hitting Subgraphs in Sparse Graphs and Geometric Intersection Graphs

    We investigate a fundamental vertex-deletion problem called (Induced) Subgraph Hitting: given a graph G and a set ℱ of forbidden graphs, the goal is to compute a minimum-sized set S of vertices of G such that G−S does not contain any graph in ℱ as an (induced) subgraph. This is a generic problem that encompasses many well-known problems that were extensively studied on their own, particularly (but not only) from the perspectives of both approximation and parameterization. We focus on the design of efficient approximation schemes, i.e., with running time f(ε,ℱ)·n^{O(1)}, which are also of significant interest to both communities. Technically, our main contribution is a linear-time approximation-preserving reduction from (Induced) Subgraph Hitting on any graph class 𝒢 of bounded expansion to the same problem on bounded degree graphs within 𝒢. This yields a novel algorithmic technique for designing (efficient) approximation schemes for the problem on very broad graph classes, well beyond the state of the art. Specifically, applying this reduction, we derive approximation schemes with (almost) linear running time for the problem on any graph class that has strongly sublinear separators and on many important classes of geometric intersection graphs (such as fat-object graphs, pseudo-disk graphs, etc.). Our proofs introduce novel concepts and combinatorial observations that may be of independent interest (and, we believe, will find other uses) for studies of approximation algorithms, parameterized complexity, sparse graph classes, and geometric intersection graphs. As a byproduct, we also obtain the first robust algorithm for k-Subgraph Isomorphism on intersection graphs of fat objects and pseudo-disks, with running time f(k)·n log n + O(m).

    Comment: 60 pages; abstract shortened to fulfill the length limit.
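
    To make the problem definition concrete, here is a tiny brute-force solver for (non-induced) Subgraph Hitting. It is exponential and purely illustrative of the definition, not the approximation scheme from the paper.

```python
# Brute-force Subgraph Hitting: smallest S such that G - S has no
# forbidden graph as a (non-induced) subgraph. Illustrative only;
# exponential time, not the paper's algorithm.

from itertools import combinations, permutations

def has_subgraph(g_edges, g_nodes, f_edges, f_nodes):
    """Check whether F occurs as a (non-induced) subgraph of G."""
    for cand in combinations(g_nodes, len(f_nodes)):
        for perm in permutations(cand):
            mapping = dict(zip(f_nodes, perm))
            if all((mapping[u], mapping[v]) in g_edges or
                   (mapping[v], mapping[u]) in g_edges
                   for u, v in f_edges):
                return True
    return False

def subgraph_hitting(g_nodes, g_edges, forbidden):
    """Smallest vertex set S whose deletion removes every F in forbidden."""
    for size in range(len(g_nodes) + 1):
        for s in combinations(g_nodes, size):
            rest = [v for v in g_nodes if v not in s]
            edges = {(u, v) for u, v in g_edges if u in rest and v in rest}
            if not any(has_subgraph(edges, rest, fe, fn)
                       for fn, fe in forbidden):
                return set(s)

# Hitting all triangles in a 4-clique requires two deletions.
k4_nodes = [0, 1, 2, 3]
k4_edges = {(u, v) for u, v in combinations(k4_nodes, 2)}
triangle = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
print(subgraph_hitting(k4_nodes, k4_edges, [triangle]))  # e.g. {0, 1}
```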

    On Approximate Compressions for Connected Minor-Hitting Sets

    In the Connected ℱ-Deletion problem, ℱ is a fixed finite family of graphs and the objective is to compute a minimum set of vertices (or a vertex set of size at most k for some given k) such that (a) this set induces a connected subgraph of the given graph and (b) deleting this set results in a graph which excludes every F ∈ ℱ as a minor. In the area of kernelization, this problem is well known to exclude a polynomial kernel subject to standard complexity hypotheses even in very special cases such as ℱ = {K₂}, i.e., Connected Vertex Cover. In this work, we give a (2+ε)-approximate polynomial compression for the Connected ℱ-Deletion problem when ℱ contains at least one planar graph. This is the first approximate polynomial compression result for this generic problem. As a corollary, we obtain the first approximate polynomial compression result for the special case of Connected η-Treewidth Deletion.

    A New Framework for Kernelization Lower Bounds: The Case of Maximum Minimal Vertex Cover

    In the Maximum Minimal Vertex Cover (MMVC) problem, we are given a graph G and a positive integer k, and the objective is to decide whether G contains a minimal vertex cover of size at least k. Motivated by the kernelization of MMVC with parameter k, our main contribution is to introduce a simple general framework to obtain lower bounds on the degrees of a certain type of polynomial kernels for vertex-optimization problems, which we call lop-kernels. Informally, this type of kernel is required to preserve large optimal solutions in the reduced instance, and captures the vast majority of existing kernels in the literature. As a consequence of this framework, we show that the trivial quadratic kernel for MMVC is essentially optimal, answering a question of Boria et al. [Discret. Appl. Math. 2015], and that the known cubic kernel for Maximum Minimal Feedback Vertex Set is also essentially optimal. On the positive side, given the (plausible) non-existence of subquadratic kernels for MMVC on general graphs, we provide subquadratic kernels on H-free graphs for several graphs H, such as the bull, the paw, or the complete graphs, by making use of the Erdős-Hajnal property in order to find an appropriate decomposition. Finally, we prove that MMVC does not admit polynomial kernels parameterized by the size of a minimum vertex cover of the input graph, even on bipartite graphs, unless NP ⊆ coNP/poly. This indicates that parameters smaller than the solution size are unlikely to yield polynomial kernels for MMVC.
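
    To pin down the slightly unusual objective of maximizing over minimal covers, the following brute-force checker decides MMVC on tiny graphs; it is illustrative only and unrelated to the kernels discussed above.

```python
# Brute-force MMVC: does G have a minimal vertex cover of size >= k?
# Illustrative only; exponential time, unrelated to the paper's kernels.

from itertools import combinations

def is_vertex_cover(cover, edges):
    return all(u in cover or v in cover for u, v in edges)

def is_minimal(cover, edges):
    # minimal: removing any single vertex breaks coverage
    return all(not is_vertex_cover(cover - {v}, edges) for v in cover)

def has_mmvc(nodes, edges, k):
    for size in range(k, len(nodes) + 1):
        for cand in combinations(nodes, size):
            s = set(cand)
            if is_vertex_cover(s, edges) and is_minimal(s, edges):
                return True
    return False

# Star K_{1,3}: its leaves {1, 2, 3} form a minimal vertex cover of
# size 3, even though the center alone is a minimal cover of size 1.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (0, 3)]
print(has_mmvc(nodes, edges, 3))  # True
```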