
    Linear Compressed Pattern Matching for Polynomial Rewriting (Extended Abstract)

    This paper is an extended abstract of an analysis of term rewriting where the terms in the rewrite rules, as well as the term to be rewritten, are compressed by a singleton tree grammar (STG). This form of compression is more general than node sharing or representing terms as dags, since partial trees (contexts) can also be shared in the compression. In the first part, efficient but complex algorithms for detecting applicability of a rewrite rule under STG-compression are constructed and analyzed. The second part applies these results to term rewriting sequences. The main result for submatching is that finding a redex of a left-linear rule can be performed in polynomial time under STG-compression. The main implications for rewriting and (single-position or parallel) rewriting steps are: (i) under STG-compression, n rewriting steps can be performed in nondeterministic polynomial time; (ii) under STG-compression and for left-linear rewrite rules, a sequence of n rewriting steps can be performed in polynomial time; and (iii) for compressed rewrite rules whose left-hand sides are either DAG-compressed, or ground and STG-compressed, and an STG-compressed target term, n rewriting steps can be performed in polynomial time.
    Comment: In Proceedings TERMGRAPH 2013, arXiv:1302.599
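    A rough, self-contained illustration of the compression mechanism this abstract relies on (our sketch, not the paper's construction; names like `grammar` and `expand` are hypothetical): in a singleton tree grammar every nonterminal has exactly one production, and nonterminals may denote contexts (trees with a single hole), so a chain of context-doubling rules denotes a term of exponential depth in a linear-size grammar.

```python
# Minimal sketch of singleton-tree-grammar-style compression (hypothetical API).
# Every nonterminal has exactly one production; context nonterminals denote
# trees with a single hole and can be composed, which is what lets STGs
# share partial trees (contexts), not just complete subtrees as dags do.

HOLE = "[]"  # placeholder for the context hole

# Grammar: each nonterminal maps to exactly one right-hand side.
# c1 is the context f([]); each c_{i+1} doubles the previous context,
# so cN denotes f^(2^(N-1))([]) while the grammar has only N+2 rules.
N = 10
grammar = {"t": ("apply", f"c{N}", "a"),   # start term: plug "a" into cN
           "a": ("leaf", "a"),
           "c1": ("ctx", "f", HOLE)}
for i in range(1, N):
    grammar[f"c{i+1}"] = ("compose", f"c{i}", f"c{i}")  # composing doubles depth

def expand(nt):
    """Naively expand a nonterminal to its (exponential-size) string form.
    Real STG algorithms never do this; they work on the grammar directly."""
    kind, *args = grammar[nt]
    if kind == "leaf":
        return args[0]
    if kind == "ctx":
        f, hole = args
        return f"{f}({hole})"
    if kind in ("compose", "apply"):   # plug second part into first's hole
        return expand(args[0]).replace(HOLE, expand(args[1]))

print(len(expand("t")))  # a term of depth 2^(N-1) from an O(N)-rule grammar
```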

    Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal

    The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most k of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an O(4^k · kmn) time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed k. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most O(4^k), a so-called kernelization. Since then, the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in k, has turned into one of the main open questions in the study of kernelization. This work provides the first (randomized) polynomial kernelization for OCT. We introduce a novel kernelization approach based on matroid theory, where we encode all relevant information about a problem instance into a matroid with a representation of size polynomial in k. For OCT, the matroid is built to allow us to simulate the computation of the iterative compression step of the algorithm of Reed, Smith, and Vetta, applied (for only one round) to an approximate odd cycle transversal which it is aiming to shrink to size k. The process is randomized with one-sided error exponentially small in k, where the result can contain false positives but no false negatives, and the size guarantee is cubic in the size of the approximate solution. Combined with an O(√(log n))-approximation (Agarwal et al., STOC 2005), we get a reduction of the instance to size O(k^{4.5}), implying a randomized polynomial kernelization.
    Comment: Minor changes to agree with the SODA 2012 version of the paper
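    For readers unfamiliar with the problem, the sketch below merely illustrates the OCT definition by verifying a candidate solution: delete the set, then 2-color what remains. It is our own minimal example, not the paper's kernelization and not the Reed-Smith-Vetta algorithm.

```python
from collections import deque

def is_odd_cycle_transversal(n, edges, s):
    """Check whether deleting vertex set s from the graph (vertices 0..n-1,
    given as an edge list) leaves a bipartite graph, i.e. whether s is an
    odd cycle transversal. Uses BFS 2-coloring of the remaining graph."""
    s = set(s)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        if u not in s and v not in s:
            adj[u].append(v)
            adj[v].append(u)
    color = {}
    for start in range(n):
        if start in s or start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:   # an odd cycle survived deletion
                    return False
    return True

# A 5-cycle is not bipartite; deleting any single vertex makes it so.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_odd_cycle_transversal(5, c5, []))    # False
print(is_odd_cycle_transversal(5, c5, [0]))   # True
```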

    Pattern matching of compressed terms and contexts and polynomial rewriting

    A generalization of the compressed string pattern match that applies to terms with variables is investigated: given terms s and t compressed by singleton tree grammars, the task is to find an instance of s that occurs as a subterm in t. We show that this problem is in NP and that the task can be performed in time O(n^{c·|Var(s)|}), including the construction of the compressed substitution and a representation of all occurrences. We show that the special case where s is uncompressed can be performed in polynomial time. As a nice application, we show that for an equational deduction from t to t' by an equality axiom l = r (a rewrite), a single step can be performed in polynomial time in the size of the compression of t, l, and r if the number of variables in l is fixed. We also show that n rewriting steps can be performed in polynomial time if the equational axioms are compressed and assumed to be constant for the rewriting sequence. Another potential application is querying mechanisms on compressed XML databases.
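    To make the problem statement concrete, here is a naive matcher for the uncompressed case: find a subterm of t that is an instance of s and return the substitution. This is our illustrative baseline with made-up names; the paper's algorithms work directly on the grammars and never expand the terms.

```python
# Terms as nested tuples: ("f", t1, ..., tn); variables as plain strings "X".
def is_var(t):
    return isinstance(t, str)

def match(pattern, term, subst):
    """Extend subst so that pattern instantiated by subst equals term,
    or return None. Non-linear patterns force consistency checks."""
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def submatch(pattern, term):
    """Find some subterm of term that is an instance of pattern (a redex),
    together with its substitution: the uncompressed analogue of the
    paper's submatching problem."""
    found = match(pattern, term, {})
    if found is not None:
        return term, found
    if not is_var(term):
        for sub in term[1:]:
            hit = submatch(pattern, sub)
            if hit is not None:
                return hit
    return None

t = ("g", ("f", ("a",), ("a",)), ("b",))
print(submatch(("f", "X", "X"), t))  # matches f(a,a) with X -> a
```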

    Unification on Compressed Terms

    First-order term unification is an essential concept in areas like functional and logic programming, automated deduction, deductive databases, artificial intelligence, information retrieval, compiler design, etc. We build upon recent developments in grammar-based compression mechanisms for terms and investigate algorithms for first-order unification and matching on compressed terms. We prove that first-order unification of compressed terms is decidable in polynomial time, and also that a compressed representation of the most general unifier can be computed in polynomial time. Furthermore, we present a polynomial-time algorithm for first-order matching on compressed terms. Both algorithms improve on the time complexity of previous results [GGSS09, GGSS08]. We use several known results on the tree grammars used for compression, called singleton tree grammars (STGs), such as the polynomial-time computability of several subalgorithms: certain grammar extensions, deciding equality of represented terms, and generating their preorder traversal. An innovation is a specialized depth measure for STGs, which shows that unifiers can be represented in polynomial space.
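    As a baseline for the uncompressed problem, here is a textbook Robinson-style unification sketch (ours; helper names are hypothetical). On explicitly represented terms the most general unifier can be exponentially large, e.g. for x1 = f(x0,x0), x2 = f(x1,x1), ..., and keeping that blow-up polynomial is exactly what the compressed representation buys.

```python
# Terms as nested tuples ("f", t1, ..., tn); variables as strings.
def is_var(t):
    return isinstance(t, str)

def walk(t, subst):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if is_var(t):
        return t == v
    return any(occurs(v, s, subst) for s in t[1:])

def unify(a, b, subst=None):
    """Robinson-style unification; returns a most general unifier as a
    variable->term dict (in triangular form), or None if none exists.
    Explicit terms can make the mgu exponentially large, which is exactly
    what grammar-compressed representations avoid."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if is_var(a) and is_var(b) and a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        subst = unify(x, y, subst)
        if subst is None:
            return None
    return subst

print(unify(("f", "X", ("g", "X")), ("f", ("a",), "Y")))
# {'X': ('a',), 'Y': ('g', 'X')}, triangular form: Y's binding mentions X
```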

    Unification and Matching on Compressed Terms

    Term unification plays an important role in many areas of computer science, especially in those related to logic. The universal mechanism of grammar-based compression for terms, in particular the so-called Singleton Tree Grammars (STGs), has recently drawn considerable attention. Using STGs, terms of exponential size and height can be represented in linear space. Furthermore, the term representation by directed acyclic graphs (dags) can be efficiently simulated. The present paper is the result of an investigation of term unification and matching when the terms given as input are represented using different compression mechanisms, such as dags and Singleton Tree Grammars. We describe a polynomial-time algorithm for context matching with dags when the number of different context variables is fixed for the problem. For the same problem, NP-completeness is obtained when the terms are represented using the more general formalism of Singleton Tree Grammars. For first-order unification and matching, polynomial-time algorithms are presented, each of them improving previous results for those problems.
    Comment: This paper is posted at the Computing Research Repository (CoRR) as part of the process of submission to the journal ACM Transactions on Computational Logic (TOCL)
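    The dag sharing that the abstract says STGs can simulate is easy to picture with hash-consing (a minimal sketch of ours, with hypothetical names): each distinct subterm is stored exactly once, so repeating a subterm costs one extra node, although contexts still cannot be shared the way STGs allow.

```python
# Hash-consing: build a dag in which each distinct subterm exists once.
class Dag:
    def __init__(self):
        self.nodes = {}      # (symbol, child ids) -> node id
        self.table = []      # node id -> (symbol, child ids)

    def mk(self, symbol, *children):
        key = (symbol, children)
        if key not in self.nodes:          # reuse an identical subterm
            self.nodes[key] = len(self.table)
            self.table.append(key)
        return self.nodes[key]

d = Dag()
t = d.mk("a")
for _ in range(20):
    t = d.mk("f", t, t)   # f(t, t): term size doubles, dag grows by 1 node
print(len(d.table))        # 21 nodes for a term with over a million symbols
```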

    Optimal Polynomial-Time Compression for Boolean Max CSP

    In the Boolean maximum constraint satisfaction problem, Max CSP(Γ), one is given a collection of weighted applications of constraints from a finite constraint language Γ, over a common set of variables, and the goal is to assign Boolean values to the variables so that the total weight of satisfied constraints is maximized. There exists a concise dichotomy theorem providing a criterion on Γ for the problem to be polynomial-time solvable and stating that otherwise it becomes NP-hard. We study the NP-hard cases through the lens of kernelization and provide a complete characterization of Max CSP(Γ) with respect to the optimal compression size. Namely, we prove that Max CSP(Γ) parameterized by the number of variables n is either polynomial-time solvable, or there exists an integer d ≥ 2 depending on Γ, such that: 1) an instance of Max CSP(Γ) can be compressed into an equivalent instance with O(n^d log n) bits in polynomial time; 2) Max CSP(Γ) does not admit such a compression to O(n^{d-ε}) bits unless NP ⊆ coNP/poly. Our reductions are based on interpreting constraints as multilinear polynomials combined with the framework of constraint implementations. As another application of our reductions, we reveal tight connections between optimal running times for solving Max CSP(Γ). More precisely, we show that obtaining a running time of the form O(2^{(1-ε)n}) for particular classes of Max CSPs is as hard as breaching this barrier for Max d-SAT for some d.
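    A worked illustration (ours, not the paper's construction) of "interpreting constraints as multilinear polynomials": every Boolean constraint agrees on {0,1}^r with a unique multilinear polynomial, and the instance's objective is the weighted sum of these polynomials, so a degree-d sum has at most O(n^d) coefficients to store, matching the shape of the compression bound above.

```python
from itertools import product

def multilinear(truth, arity):
    """Multilinear polynomial (as {frozenset(var indices): coeff}) agreeing
    on {0,1}^arity with the Boolean function given by its truth table."""
    poly = {}
    for assign in product((0, 1), repeat=arity):
        if not truth[assign]:
            continue
        # indicator of this assignment: prod of x_i for a_i=1, (1-x_i) for a_i=0
        terms = {frozenset(): 1}
        for i, ai in enumerate(assign):
            new = {}
            for mono, c in terms.items():
                if ai == 1:
                    new[mono | {i}] = new.get(mono | {i}, 0) + c
                else:
                    new[mono] = new.get(mono, 0) + c
                    new[mono | {i}] = new.get(mono | {i}, 0) - c
            terms = new
        for mono, c in terms.items():
            poly[mono] = poly.get(mono, 0) + c
    return {m: c for m, c in poly.items() if c}

# OR(x, y) = x + y - xy: degree 2, i.e. d = 2 for this constraint.
OR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
print(multilinear(OR, 2))
# {frozenset({1}): 1, frozenset({0, 1}): -1, frozenset({0}): 1}, i.e. x + y - xy

# A weighted Max CSP instance then compresses to a single degree-d polynomial:
# sum w * multilinear(...) per constraint (with the constraint's variables
# renamed to instance variables), leaving at most O(n^d) coefficients.
```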