
    Characterizing the easy-to-find subgraphs from the viewpoint of polynomial-time algorithms, kernels, and Turing kernels

    We study two fundamental problems related to finding subgraphs: (1) given graphs G and H, Subgraph Test asks if H is isomorphic to a subgraph of G; (2) given graphs G, H, and an integer t, Packing asks if G contains t vertex-disjoint subgraphs isomorphic to H. For every graph class F, let F-Subgraph Test and F-Packing be the special cases of the two problems where H is restricted to be in F. Our goal is to study which classes F make the two problems tractable in one of the following senses: (i) (randomized) polynomial-time solvable; (ii) admits a polynomial (many-one) kernel (that is, has a polynomial-time preprocessing procedure that creates an equivalent instance whose size is polynomially bounded by the size of the solution); or (iii) admits a polynomial Turing kernel (that is, has an adaptive polynomial-time procedure that reduces the problem to a polynomial number of instances, each of which has size bounded polynomially by the size of the solution). To obtain a more robust setting, we restrict our attention to hereditary classes F. It is known that if every component of every graph in F has at most two vertices, then F-Packing is polynomial-time solvable, and NP-hard otherwise. We identify a simple combinatorial property (every component of every graph in F either has bounded size or is a bipartite graph with one of the sides having bounded size) such that if a hereditary class F has this property, then F-Packing admits a polynomial kernel, and has no polynomial (many-one) kernel otherwise, unless the polynomial hierarchy collapses. Furthermore, if F does not have this property, then F-Packing is either WK[1]-hard, W[1]-hard, or Long Path-hard, giving evidence that it does not admit polynomial Turing kernels either. For F-Subgraph Test, we show that if every graph of a hereditary class F satisfies the property that it is possible to delete a bounded number of vertices such that every remaining component has size at most two, then F-Subgraph Test is solvable in randomized polynomial time, and it is NP-hard otherwise. We introduce a combinatorial property called (a, b, c, d)-splittability and show that if every graph in a hereditary class F has this property, then F-Subgraph Test admits a polynomial Turing kernel, and it is WK[1]-hard, W[1]-hard, or Long Path-hard otherwise. We do not give a complete characterization of the cases when F-Subgraph Test admits polynomial many-one kernels, but we show examples indicating that this question is much more fragile than the characterization for Turing kernels.
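
    The simplest polynomial-time case mentioned above is H = K2, a single edge: packing t vertex-disjoint copies of K2 in G is exactly asking whether G has a matching of size at least t. A minimal sketch of this observation, assuming the networkx library (the function name k2_packing is ours, not from the paper):

```python
import networkx as nx

def k2_packing(G, t):
    """Decide whether G contains t vertex-disjoint edges (copies of K2),
    i.e. whether G has a matching of size at least t."""
    # Recent networkx versions return the matching as a set of edges.
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return len(matching) >= t

G = nx.path_graph(5)      # a path on 5 vertices has a maximum matching of size 2
print(k2_packing(G, 2))   # True
print(k2_packing(G, 3))   # False
```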

    Multipartite Graph Algorithms for the Analysis of Heterogeneous Data

    The explosive growth in the rate of data generation in recent years threatens to outpace the growth in computer power, motivating the need for new, scalable algorithms and big-data analytic techniques. No field may be more emblematic of this data deluge than the life sciences, where technologies such as high-throughput mRNA arrays and next-generation genome sequencing are routinely used to generate datasets of extreme scale. Data from experiments in genomics, transcriptomics, metabolomics and proteomics are continuously being added to existing repositories. A goal of exploratory analysis of such omics data is to illuminate the functions and relationships of biomolecules within an organism. This dissertation describes the design, implementation and application of graph algorithms that seek dense structure in data derived from omics experiments in order to detect latent associations between often heterogeneous entities, such as genes, diseases and phenotypes. Exact combinatorial solutions are developed and implemented, rather than approximations or heuristics, even when problems are exceedingly large and/or difficult. Datasets on which the algorithms are applied include time-series transcriptomic data from an experiment on the developing mouse cerebellum, gene expression data measuring acute ethanol response in the prefrontal cortex, and a predicted protein-protein interaction network. A bipartite graph model is used to integrate heterogeneous data types, such as genes with phenotypes and microbes with mouse strains. The techniques are then extended to a multipartite algorithm that enumerates dense substructure in multipartite graphs constructed from three or more heterogeneous data sources, with applications to functional genomics. Several new theoretical results are given regarding multipartite graphs and the multipartite enumeration algorithm. In all cases, practical implementations are demonstrated to expand the frontier of computational feasibility.
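
    One way to make the bipartite model above concrete: the maximal bicliques (fully connected pairs of gene and phenotype sets) of a bipartite association graph are exactly the maximal cliques of the graph obtained by turning each side into a clique. A toy sketch of this standard reduction, assuming networkx and using made-up gene/phenotype names (not data from the dissertation):

```python
import itertools
import networkx as nx

# Hypothetical gene-phenotype association table, for illustration only.
associations = {
    "geneA": {"ph1", "ph2", "ph3"},
    "geneB": {"ph1", "ph2"},
    "geneC": {"ph2", "ph3"},
}
genes = set(associations)
phenotypes = set().union(*associations.values())

# Bipartite association graph.
G = nx.Graph()
G.add_nodes_from(genes)
G.add_nodes_from(phenotypes)
G.add_edges_from((g, p) for g, ps in associations.items() for p in ps)

# Make each side a clique: maximal cliques of H correspond to maximal bicliques of G.
H = G.copy()
H.add_edges_from(itertools.combinations(genes, 2))
H.add_edges_from(itertools.combinations(phenotypes, 2))

for clique in nx.find_cliques(H):
    left = sorted(set(clique) & genes)
    right = sorted(set(clique) & phenotypes)
    if left and right:  # skip cliques that use only one side
        print(left, right)
```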

    Parameterized Complexity of Biclique Contraction and Balanced Biclique Contraction

    In this work, we initiate the complexity study of Biclique Contraction and Balanced Biclique Contraction. In these problems, given as input a graph G and an integer k, the objective is to determine whether one can contract at most k edges in G to obtain a biclique and a balanced biclique, respectively. We first prove that these problems are NP-complete even when the input graph is bipartite. Next, we study the parameterized complexity of these problems and show that they admit single-exponential-time FPT algorithms when parameterized by the number k of edge contractions. Then, we show that Balanced Biclique Contraction admits a quadratic vertex kernel, while Biclique Contraction does not admit any polynomial compression (or kernel) under standard complexity-theoretic assumptions.
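
    For intuition, Biclique Contraction can be decided on very small instances by exhaustive search: either the current graph is already a complete bipartite graph, or some edge must be contracted, leaving budget k-1. A naive sketch assuming networkx; this brute force is exponential-time and is not the single-exponential FPT algorithm of the paper:

```python
import networkx as nx
from networkx.algorithms import bipartite

def is_biclique(G):
    """Check whether G is a complete bipartite graph (a biclique)."""
    if G.number_of_nodes() <= 1:
        return True
    if not nx.is_connected(G) or not nx.is_bipartite(G):
        return False
    left, right = bipartite.sets(G)
    return G.number_of_edges() == len(left) * len(right)

def biclique_contraction(G, k):
    """Naive search: can G be turned into a biclique by at most k edge contractions?"""
    if is_biclique(G):
        return True
    if k == 0:
        return False
    for u, v in list(G.edges()):
        H = nx.contracted_edge(G, (u, v), self_loops=False)
        if biclique_contraction(H, k - 1):
            return True
    return False

print(biclique_contraction(nx.cycle_graph(5), 1))  # a 5-cycle becomes C4 = K_{2,2}
```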

    Twin-Width and Polynomial Kernels

    We study the existence of polynomial kernels for parameterized problems without a polynomial kernel on general graphs, when restricted to graphs of bounded twin-width. It was previously observed in [Bonnet et al., ICALP '21] that the problem k-Independent Set admits no polynomial kernel on graphs of bounded twin-width by a very simple argument, which extends to several other problems such as k-Independent Dominating Set, k-Path, k-Induced Path, and k-Induced Matching. In this work, we examine k-Dominating Set and variants of k-Vertex Cover for the existence of polynomial kernels. As a main result, we show that k-Dominating Set does not admit a polynomial kernel on graphs of twin-width at most 4 under a standard complexity-theoretic assumption. The reduction is intricate, especially due to the effort to bring the twin-width down to 4, and it can be tweaked to work for Connected k-Dominating Set and Total k-Dominating Set with a slightly worse bound on the twin-width. On the positive side, we obtain a simple quadratic vertex kernel for Connected k-Vertex Cover and Capacitated k-Vertex Cover on graphs of bounded twin-width. These kernels rely on the fact that graphs of bounded twin-width have Vapnik-Chervonenkis (VC) density 1, that is, for any vertex set X, the number of distinct neighborhoods in X is at most c·|X|, where c is a constant depending only on the twin-width. Interestingly, the kernel applies to any graph class of VC density 1 and does not require a witness sequence. We also present a more intricate O(k^{1.5}) vertex kernel for Connected k-Vertex Cover. Finally, we show that deciding if a graph has twin-width at most 1 can be done in polynomial time, and observe that most graph optimization/decision problems can be solved in polynomial time on graphs of twin-width at most 1.
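
    The VC density 1 property quoted above is easy to probe experimentally: fix a vertex set X and count the distinct neighborhood traces N(v) ∩ X over all vertices v. A small sketch assuming networkx, using a path (a graph of bounded twin-width) purely as an illustration:

```python
import networkx as nx

def neighborhood_traces(G, X):
    """Distinct traces N(v) ∩ X over all vertices v of G."""
    X = set(X)
    return {frozenset(set(G[v]) & X) for v in G}

# On a long path the number of distinct traces stays within a small constant
# factor of |X|, which is the VC-density-1 behaviour the kernels exploit.
G = nx.path_graph(100)
X = range(0, 100, 10)
print(len(neighborhood_traces(G, X)), "distinct traces for |X| =", len(list(X)))
```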

    Preprocessing Subgraph and Minor Problems: When Does a Small Vertex Cover Help?

    We prove a number of results around kernelization of problems parameterized by the size of a given vertex cover of the input graph. We provide three sets of simple general conditions characterizing problems admitting kernels of polynomial size. Our characterizations not only give generic explanations for the existence of many known polynomial kernels for problems like q-Coloring, Odd Cycle Transversal, Chordal Deletion, Eta Transversal, or Long Path, parameterized by the size of a vertex cover, but also imply new polynomial kernels for problems like F-Minor-Free Deletion, which is to delete at most k vertices to obtain a graph with no minor from a fixed finite set F. While our characterization captures many interesting problems, the kernelization complexity landscape of parameterizations by vertex cover is much more involved. We demonstrate this by several results about induced subgraph and minor containment testing, which we find surprising. While it was known that testing for an induced complete subgraph has no polynomial kernel unless NP is in coNP/poly, we show that the problem of testing if a graph contains a complete graph on t vertices as a minor admits a polynomial kernel. On the other hand, it was known that testing for a path on t vertices as a minor admits a polynomial kernel, but we show that testing for containment of an induced path on t vertices is unlikely to admit a polynomial kernel. Comment: To appear in the Journal of Computer and System Sciences.
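
    The parameter throughout the paragraph above is the size of a given vertex cover. When no cover is supplied, the standard way to obtain one within a factor 2 of optimum is to take both endpoints of a maximal matching; a minimal sketch assuming networkx (the helper name is ours):

```python
import networkx as nx

def vertex_cover_from_matching(G):
    """Both endpoints of a maximal matching form a vertex cover
    of size at most twice the minimum vertex cover."""
    M = nx.maximal_matching(G)
    return {v for edge in M for v in edge}

G = nx.petersen_graph()
cover = vertex_cover_from_matching(G)
print(len(cover))  # at most 2 * 6 = 12, since the Petersen graph's minimum vertex cover has size 6
```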

    Polynomial growth of concept lattices, canonical bases and generators: extremal set theory in Formal Concept Analysis

    We prove that there exist three distinct, comprehensive classes of (formal) contexts with polynomially many concepts, namely contexts which are nowhere dense, of bounded breadth, or highly convex. Already present in G. Birkhoff's classic monograph is the notion of the breadth of a lattice; it equals the number of atoms of a largest Boolean suborder. Even though it is natural to define the breadth of a context as that of its concept lattice, this idea had not been exploited before. We do this and establish many equivalences. Among them, it is shown that the breadth of a context equals the size of its largest minimal generator and of its largest contranominal-scale subcontext, as well as the Vapnik-Chervonenkis dimension of both its system of extents and its system of intents. The polynomiality of the aforementioned classes is proven via upper bounds (also known as majorants) for the number of maximal bipartite cliques in bipartite graphs, results obtained by various authors over the last decades. The fact that they yield statements about formal contexts is a reward for investigating how two established fields interact, specifically Formal Concept Analysis (FCA) and graph theory. We considerably improve the breadth bound. The improvement is twofold: besides giving a much tighter expression, we prove that it limits the number of minimal generators. This is strictly more general than upper-bounding the number of concepts: it automatically implies a bound on these, as well as on the number of proper premises. A corollary is that this improved result also bounds the number of implications in the canonical basis. With respect to the number of concepts, this sharper majorant is shown to be best possible; this is established by constructing contexts whose concept lattices have exactly that many elements. These structures are termed, respectively, extremal contexts and extremal lattices. The usual procedure of taking the standard context allows one to work interchangeably with either of these two extremal structures. Extremal lattices are equivalently defined as finite lattices which have as many elements as possible, subject to two upper limits: one on the number of join-irreducibles and one on the breadth. Subsequently, these structures are characterized in two ways. Our first characterization is from the lattice perspective. Initially, we construct extremal lattices by the iterated operation of finding smaller extremal subsemilattices and duplicating their elements. Then, it is shown that every extremal lattice must be obtained through a recursive application of this construction principle. A byproduct of this contribution is that extremal lattices are always meet-distributive. Although this approach is revealing, it leaves open relevant combinatorial questions in its vicinity. Most notably, the number of meet-irreducibles of extremal lattices escapes control when this construction is carried out. Aiming to get a grip on the number of meet-irreducibles, we prove an alternative characterization of these structures. This second approach is based on implication logic and exposes an interesting link between the numbers of proper premises, pseudo-extents and concepts. A guiding idea in this scenario is to use implications to construct lattices. It turns out that constructing extremal structures with this method is simpler, in the sense that a recursive application of the construction principle is not needed. Moreover, we easily obtain a general, explicit formula for the Whitney numbers of extremal lattices, which reveals that they are unimodal, too. Like the first, this second construction method is shown to be characteristic. A particular case of the construction is able to force, with precision, a high number (in the sense of "exponentially many") of meet-irreducibles. This occasional explosion of meet-irreducibles motivates a generalization of the notion of extremal lattices, obtained by considering a more refined partition of the class of all finite lattices. In this finer-grained setting, each extremal class consists of lattices with bounded breadth and bounded numbers of join-irreducibles and meet-irreducibles. The generalized problem of finding the maximum number of concepts reveals itself to be challenging. Instead of attempting to classify these structures completely, we pose questions inspired by Turán's seminal result in extremal combinatorics. Most prominently: do extremal lattices (in this more general sense) have the maximum permitted breadth? We show a general statement in this setting: for every choice of limits (on the breadth and on the numbers of join-irreducibles and meet-irreducibles), we produce some extremal lattice with the maximum permitted breadth. The tools which underpin all the intuitions in this scenario are hypergraphs and exact set covers. In a rather unexpected but interesting turn of events, we obtain for free a simple and interesting theorem about the general existence of "rich" subcontexts: every context contains an object/attribute pair which, once removed, results in a context with at least half the original number of concepts.
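
    To ground the terminology used above (objects, attributes, extents, intents, concepts), here is a naive enumeration of all formal concepts of a tiny hypothetical context, obtained by closing every attribute subset; it is only an illustration, not the counting machinery of the thesis:

```python
from itertools import chain, combinations

# Toy formal context: each object mapped to the attributes it has (hypothetical data).
context = {
    "o1": {"a", "b"},
    "o2": {"b", "c"},
    "o3": {"a", "b", "c"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, has in context.items() if attrs <= has}

def intent(objs):
    """Attributes shared by every object in objs."""
    shared = set(attributes)
    for o in objs:
        shared &= context[o]
    return shared

# A concept is a pair (extent, intent) closed under the two derivation operators;
# closing every attribute subset produces each concept at least once.
concepts = set()
for attrs in chain.from_iterable(combinations(attributes, r) for r in range(len(attributes) + 1)):
    e = extent(set(attrs))
    concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(e), sorted(i))
```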