
    Definability Equals Recognizability for k-Outerplanar Graphs

    One of the most famous algorithmic meta-theorems states that every graph property that can be defined by a sentence in counting monadic second order logic (CMSOL) can be checked in linear time for graphs of bounded treewidth; this is known as Courcelle's Theorem. These algorithms are constructed as finite state tree automata, and hence every CMSOL-definable graph property is recognizable. Courcelle also conjectured that the converse holds, i.e. that every recognizable graph property is definable in CMSOL for graphs of bounded treewidth. We prove this conjecture for k-outerplanar graphs, which are known to have treewidth at most 3k-1. Comment: 40 pages, 8 figures

    Improved self-reduction algorithms for graphs with bounded treewidth

    Recent results of Robertson and Seymour show that every graph class that is closed under taking minors can be recognized in O(n^3) time. If there is a fixed upper bound on the treewidth of the graphs in the class, i.e., if there is a planar graph not in the class, then the class can be recognized in O(n^2) time. However, this result is nonconstructive in two ways: the algorithm only decides membership but does not construct “a solution”, e.g., a linear ordering, decomposition or embedding; and no method is given to find the algorithms. In many cases, both nonconstructive elements can be avoided using techniques of Brown (1989) and Fellows and Langston (1989), based on self-reduction. In this paper we introduce two techniques that help to reduce the running time of self-reduction algorithms. With the help of these techniques we show that there exist O(n^2) algorithms that decide membership and construct solutions for treewidth, pathwidth, search number, vertex search number, node search number, cutwidth, modified cutwidth, vertex separation number, gate matrix layout, and progressive black–white pebbling, where in each case the parameter k is a fixed constant.
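    The self-reduction idea the paper builds on can be illustrated on a toy problem. The sketch below is not from the paper; the problem (Vertex Cover), the names, and the brute-force oracle are placeholders. It constructs an actual solution using only yes/no calls to a decision procedure, which is the essence of turning a recognition algorithm into a constructive one:

```python
from itertools import combinations

def has_vc(edges, k):
    """Brute-force decision oracle: does the graph (a set of two-element
    frozensets) have a vertex cover of size at most k? This stands in for
    whatever fast recognition algorithm is available."""
    vertices = {v for e in edges for v in e}
    if k >= len(vertices):
        return True
    return any(
        all(e & set(cover) for e in edges)
        for cover in combinations(vertices, k)
    )

def construct_vc(edges, k):
    """Self-reduction: build a cover from yes/no oracle calls only.
    Pick any remaining edge {u, v}; every cover contains u or v. If the
    graph minus u still has a cover within the reduced budget, commit to
    u; otherwise v must be in every remaining cover, so commit to v."""
    if not has_vc(edges, k):
        return None
    cover = []
    edges = set(edges)
    while edges:
        u, v = sorted(next(iter(edges)))
        without_u = {e for e in edges if u not in e}
        if has_vc(without_u, k - len(cover) - 1):
            cover.append(u)
            edges = without_u
        else:
            cover.append(v)
            edges = {e for e in edges if v not in e}
    return cover
```

    The point of the paper's techniques is to reduce how many (and how expensive) such oracle calls are needed; this sketch makes one call per chosen vertex.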

    The Parameterized Complexity of Binary CSP for Graphs with a Small Vertex Cover and Related Results

    In this paper, we show that Binary CSP with the size of a vertex cover as parameter is complete for the class W[3]. We obtain a number of related results with variations of the proof techniques, including: Binary CSP is complete for W[2d+1] with as parameter the size of a vertex modulator to graphs of treedepth c, or to forests of depth d, for constant c ≥ 1; it is W[t]-hard for all t ∈ ℕ with treewidth as parameter; and it is hard for W[SAT] with feedback vertex set as parameter. As corollaries, we give some hardness and membership results for classes in the W-hierarchy for List Colouring under different parameterisations.

    Parameterized complexity of Bandwidth of Caterpillars and Weighted Path Emulation

    In this paper, we show that Bandwidth is hard for the complexity class W[t] for all t ∈ ℕ, even for caterpillars with hair length at most three. As an intermediate problem, we introduce the Weighted Path Emulation problem: given a vertex-weighted path P_N and an integer M, decide if there exists a mapping of the vertices of P_N to a path P_M, such that adjacent vertices are mapped to adjacent or equal vertices, and such that the total weight of the vertices mapped to each vertex of P_M equals an integer c. We show that Weighted Path Emulation, with c as parameter, is hard for W[t] for all t ∈ ℕ, and is strongly NP-complete. We also show that Directed Bandwidth is hard for W[t] for all t ∈ ℕ, for directed acyclic graphs whose underlying undirected graph is a caterpillar. Comment: 31 pages; 9 figures
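    A brute-force checker makes the definition of Weighted Path Emulation concrete. The sketch below is illustrative only, and not the paper's reduction: it runs in exponential time, assumes positive integer weights, and takes the target value c as an explicit input. Vertices of P_M are indexed 0..M-1.

```python
def weighted_path_emulation(weights, m, c):
    """Does a mapping f of the vertices of P_N (given by `weights`) to
    the path P_M on m vertices exist, such that consecutive vertices of
    P_N map to equal or adjacent vertices of P_M and the vertices mapped
    to each vertex of P_M have total weight exactly c?"""
    n = len(weights)

    def extend(f, sums):
        if len(f) == n:
            return all(s == c for s in sums)
        # First vertex may map anywhere; later ones move by at most 1.
        steps = (-1, 0, 1) if f else range(m)
        for step in steps:
            j = (f[-1] + step) if f else step
            # Prune when a target vertex would exceed weight c
            # (valid because weights are assumed positive).
            if 0 <= j < m and sums[j] + weights[len(f)] <= c:
                sums[j] += weights[len(f)]
                if extend(f + [j], sums):
                    return True
                sums[j] -= weights[len(f)]
        return False

    return extend([], [0] * m)
```

    Note that a yes-instance forces c · M to equal the total weight of P_N, which is why c can serve as a parameter.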

    Speeding-up Dynamic Programming with Representative Sets - An Experimental Evaluation of Algorithms for Steiner Tree on Tree Decompositions

    Dynamic programming on tree decompositions is a frequently used approach to solve otherwise intractable problems on instances of small treewidth. In recent work by Bodlaender et al., it was shown that for many connectivity problems, there exist algorithms whose running time is linear in the number of vertices and single exponential in the width of the tree decomposition that is used. The central idea is that it suffices to compute representative sets, and these can be computed efficiently with the help of Gaussian elimination. In this paper, we give an experimental evaluation of this technique for the Steiner Tree problem. A comparison of the classic dynamic programming algorithm and the improved algorithm that employs the table reduction shows that the new approach significantly reduces both the running time and the size of the tables computed by the dynamic programming algorithm. Thus, the rank-based approach of Bodlaender et al. not only gives significant theoretical improvements but is also viable in a practical setting, and it showcases the potential of representative sets for speeding up dynamic programming algorithms.
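    The rank-based reduction can be illustrated in miniature. In the framework of Bodlaender et al., partial solutions in a dynamic programming table become rows of a matrix over GF(2), and it suffices to keep a row basis. The toy sketch below omits all Steiner Tree specifics (the encoding of partitions into rows is the paper's real content); it only shows the basis-keeping step, with rows encoded as int bitmasks and names chosen for illustration:

```python
def representative_rows(rows):
    """Keep only a GF(2) row basis of the given bit-vector rows.
    Every discarded row is a GF(2)-sum of kept rows, so in the
    rank-based framework the kept rows 'represent' all of them.
    One Gaussian-elimination pass per incoming row."""
    basis = []        # representative rows, in input order
    pivots = {}       # pivot bit position -> reduced row with that pivot
    for original in rows:
        r = original
        while r:
            p = r.bit_length() - 1      # highest set bit
            if p not in pivots:
                pivots[p] = r           # new pivot: keep this row
                basis.append(original)
                break
            r ^= pivots[p]              # eliminate existing pivot
        # r reduced to 0: row is dependent on earlier rows, drop it
    return basis
```

    The payoff in the paper is that table sizes shrink from potentially super-exponential in the treewidth to at most the matrix rank, which is single exponential.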

    On Exploring Temporal Graphs of Small Pathwidth

    We show that the Temporal Graph Exploration Problem is NP-complete, even when the underlying graph has pathwidth 2 and at each time step, the current graph is connected
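    For intuition, the exploration problem can be checked by brute force on tiny instances. The sketch below assumes one common model of temporal exploration (the agent starts at a fixed vertex and, per time step, either waits or crosses one edge present at that step); it is illustrative only, exponential in the number of vertices, and not taken from the paper:

```python
def can_explore(n, schedule, lifetime, start=0):
    """Can an agent starting at `start` visit all n vertices within
    `lifetime` steps? `schedule[t]` is the set of edges (two-element
    frozensets over 0..n-1) present at time step t. The search keeps
    reachable (vertex, visited-set) states per time step."""
    all_seen = frozenset(range(n))
    frontier = {(start, frozenset([start]))}
    for t in range(lifetime):
        if any(seen == all_seen for _, seen in frontier):
            return True
        nxt = set(frontier)                 # waiting is always allowed
        for v, seen in frontier:
            for e in schedule[t]:
                if v in e:                  # cross this edge now
                    (w,) = e - {v}
                    nxt.add((w, seen | {w}))
        frontier = nxt
    return any(seen == all_seen for _, seen in frontier)
```

    The hardness result in the paper says that even for very path-like underlying graphs, no essentially better exact method is expected.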

    Dynamic Sampling from a Discrete Probability Distribution with a Known Distribution of Rates

    In this paper, we consider a number of efficient data structures for the problem of sampling from a dynamically changing discrete probability distribution, where some prior information is known on the distribution of the rates, in particular the maximum and minimum rate, and where the number of possible outcomes N is large. We consider three basic data structures: the Acceptance-Rejection method, the Complete Binary Tree, and the Alias Method. These can be used as building blocks in a multi-level data structure, where at each of the levels one of the basic data structures can be used. Depending on assumptions on the distribution of the rates of outcomes, different combinations of the basic structures can be used. We prove that for particular data structures the expected time of sampling and update is constant when the rates follow a non-decreasing distribution, a log-uniform distribution or an inverse polynomial distribution, and show that for any distribution, an expected time of sampling and update of O(log log(r_max/r_min)) is possible, where r_max is the maximum rate and r_min the minimum rate. We also present an experimental verification, highlighting the limits given by the constraints of a real-life setting.
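    Of the three basic structures, the Complete Binary Tree is the easiest to sketch. The toy implementation below (class and method names are placeholders, not from the paper) supports sampling and rate updates in O(log N) for arbitrary rates; leaves hold the rates, internal nodes hold subtree sums:

```python
import random

class SamplingTree:
    """Complete-binary-tree sampler: sample() walks down from the
    root, update() walks up from a leaf, both in O(log N)."""

    def __init__(self, rates):
        self.n = 1
        while self.n < len(rates):      # pad to a power of two
            self.n *= 2
        self.tree = [0.0] * (2 * self.n)
        for i, r in enumerate(rates):
            self.tree[self.n + i] = r
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, i, rate):
        """Set the rate of outcome i and refresh ancestor sums."""
        i += self.n
        self.tree[i] = rate
        while i > 1:
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def sample(self):
        """Draw outcome i with probability rate_i / total rate."""
        x = random.uniform(0.0, self.tree[1])
        i = 1
        while i < self.n:
            i *= 2
            if x >= self.tree[i]:       # skip the left subtree's mass
                x -= self.tree[i]
                i += 1
        return i - self.n
```

    The multi-level structures in the paper combine such building blocks, e.g. grouping outcomes by rate range so that the tree stays shallow, which is how the O(log log(r_max/r_min)) bound arises.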

    Cross-Composition: A New Technique for Kernelization Lower Bounds

    We introduce a new technique for proving kernelization lower bounds, called cross-composition. A classical problem L cross-composes into a parameterized problem Q if an instance of Q with polynomially bounded parameter value can express the logical OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008), we show that if an NP-complete problem cross-composes into a parameterized problem Q, then Q does not admit a polynomial kernel unless the polynomial hierarchy collapses. Our technique generalizes and strengthens the recent techniques of using OR-composition algorithms and of transferring lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (non-standard) parameterizations: e.g., Chromatic Number, Clique, and Weighted Feedback Vertex Set do not admit polynomial kernels with respect to the vertex cover number of the input graphs unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. We have similar lower bounds for Feedback Vertex Set. Comment: Updated information based on final version submitted to STACS 201

    Kernelization Lower Bounds By Cross-Composition

    We introduce the cross-composition framework for proving kernelization lower bounds. A classical problem L AND/OR-cross-composes into a parameterized problem Q if it is possible to efficiently construct an instance of Q with polynomially bounded parameter value that expresses the logical AND or OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008) with a refinement by Dell and van Melkebeek (STOC 2010), we show that if an NP-hard problem OR-cross-composes into a parameterized problem Q, then Q does not admit a polynomial kernel unless NP ⊆ coNP/poly and the polynomial hierarchy collapses. Similarly, an AND-cross-composition for Q rules out polynomial kernels for Q under Bodlaender et al.'s AND-distillation conjecture. Our technique generalizes and strengthens the recent techniques of using composition algorithms and of transferring lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (non-standard) parameterizations: e.g., Clique, Chromatic Number, Weighted Feedback Vertex Set, and Weighted Odd Cycle Transversal do not admit polynomial kernels with respect to the vertex cover number of the input graphs unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. After learning of our results, several teams of authors have successfully applied the cross-composition framework to different parameterized problems. For completeness, our presentation of the framework includes several extensions based on this follow-up work. For example, we show how a relaxed version of OR-cross-compositions may be used to give lower bounds on the degree of the polynomial in the kernel size. Comment: A preliminary version appeared in the proceedings of the 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011) under the title "Cross-Composition: A New Technique for Kernelization Lower Bounds". Several results have been strengthened compared to the preliminary version (http://arxiv.org/abs/1011.4224). 29 pages, 2 figures