2,252 research outputs found

    Computing the Longest Common Prefix of a Context-free Language in Polynomial Time

    We present two structural results concerning the longest common prefixes of non-empty languages. First, we show that the longest common prefix of the language generated by a context-free grammar of size N equals the longest common prefix of the sublanguage whose derivation trees have height at most 4N. Second, we show that each non-empty language L has a representative subset of at most three elements which behaves like L w.r.t. the longest common prefix, as well as w.r.t. longest common prefixes of L after unions or concatenations with arbitrary other languages. From that, we conclude that the longest common prefix, and thus the longest common suffix, of a context-free language can be computed in polynomial time.
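
    As a toy illustration of the longest-common-prefix operator itself (not of the paper's grammar-based construction), the following Python sketch computes the LCP of a finite language; the function names are hypothetical. It also checks the elementary fact that, for a finite set of strings, the LCP is already determined by the lexicographically smallest and largest elements, a two-element "representative subset" in a similar spirit to the paper's three-element result for arbitrary languages.

```python
import os

def lcp(language):
    """Longest common prefix of a non-empty finite set of strings."""
    # os.path.commonprefix compares the strings character-wise.
    return os.path.commonprefix(list(language))

def lcp_via_min_max(language):
    """Same LCP, computed only from the lexicographic extremes."""
    lo, hi = min(language), max(language)
    return os.path.commonprefix([lo, hi])

if __name__ == "__main__":
    L = {"abracadabra", "abrasive", "abrupt"}
    assert lcp(L) == lcp_via_min_max(L) == "abr"
    print(lcp(L))
```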

    Deciding Equivalence of Linear Tree-to-Word Transducers in Polynomial Time

    We show that the equivalence of deterministic linear top-down tree-to-word transducers is decidable in polynomial time. Linear tree-to-word transducers are non-copying but not necessarily order-preserving and can be used to express XML and other document transformations. The result is based on a partial normal form that provides a basic characterization of the languages produced by linear tree-to-word transducers. Comment: a short version of this paper will be published in the proceedings of the 20th Conference on Developments in Language Theory (DLT 2016), Montreal, Canada.
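
    To make the transducer model concrete, here is a hedged Python sketch of a toy linear (non-copying) top-down tree-to-word transducer: each rule, keyed by the node label, emits output strings interleaved with the results of processing each child at most once, possibly in permuted order (so it need not be order-preserving). The representation and names are illustrative assumptions, not the formalism used in the paper.

```python
from dataclasses import dataclass

@dataclass
class Node:
    label: str
    children: list

# A rule maps a node label to a template: strings are copied to the output,
# integers select a child to process recursively (each index used at most
# once -> linear / non-copying; indices may appear in permuted order).
RULES = {
    "doc":  ["<d>", 0, "</d>"],
    "pair": [1, "-", 0],   # swaps its two children (not order-preserving)
    "leaf": ["x"],
}

def run(node: Node) -> str:
    out = []
    for item in RULES[node.label]:
        out.append(item if isinstance(item, str) else run(node.children[item]))
    return "".join(out)

if __name__ == "__main__":
    t = Node("doc", [Node("pair", [Node("leaf", []), Node("leaf", [])])])
    print(run(t))   # -> <d>x-x</d>
```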

    It Is NL-complete to Decide Whether a Hairpin Completion of Regular Languages Is Regular

    The hairpin completion is an operation on formal languages which is inspired by the hairpin formation in biochemistry. Hairpin formations occur naturally within DNA computing. It has been known that the hairpin completion of a regular language is linear context-free but, in general, not regular. However, for some time it was open whether the regularity of the hairpin completion of a regular language is decidable. In 2009 this decidability problem was solved positively by providing a polynomial-time algorithm. In this paper we improve the complexity bound by showing that the decision problem is actually NL-complete. This complexity bound holds for both the one-sided and the two-sided hairpin completion.
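
    For readers unfamiliar with the operation, the sketch below is a minimal Python illustration under the commonly used definition of the right k-hairpin completion: if w factors as gamma + alpha + beta + rc(alpha), where rc denotes the reversed Watson-Crick complement and |alpha| = k, then w may be extended on the right by rc(gamma). The helper names and the exact parameterization are assumptions for illustration only.

```python
# Toy sketch of the right k-hairpin completion over the DNA alphabet,
# assuming the textbook definition: w = gamma + alpha + beta + rc(alpha)
# (with |alpha| = k and gamma non-empty) can be extended to w + rc(gamma).

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def rc(word: str) -> str:
    """Reversed Watson-Crick complement."""
    return "".join(COMPLEMENT[c] for c in reversed(word))

def right_hairpin_completions(w: str, k: int) -> set:
    """All words obtained from w by one right k-hairpin completion step."""
    out = set()
    n = len(w)
    for g in range(1, n - 2 * k + 1):      # g = |gamma| >= 1
        alpha = w[g:g + k]
        if w.endswith(rc(alpha)):          # the suffix folds back onto alpha
            out.add(w + rc(w[:g]))         # extend by rc(gamma)
    return out

if __name__ == "__main__":
    # "ACGTTTAC" = "AC" + "GT" + "TT" + rc("GT"); it extends to "ACGTTTACGT".
    print(right_hairpin_completions("ACGTTTAC", 2))
```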

    Streamability of nested word transductions

    We consider the problem of evaluating in streaming (i.e., in a single left-to-right pass) a nested word transduction with a limited amount of memory. A transduction T is said to be height-bounded memory (HBM) if it can be evaluated with a memory that depends only on the size of T and on the height of the input word. We show that, for a nested word transduction defined by a visibly pushdown transducer (VPT), it is decidable in coNPTime whether it is HBM. In this case, the required amount of memory may depend exponentially on the height of the word. We exhibit a sufficient, decidable condition for a VPT to be evaluated with a memory that depends quadratically on the height of the word. This condition defines a class of transductions that strictly contains all determinizable VPTs.
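
    The height-bounded-memory idea can be illustrated with a deliberately simple streaming transformation (not a general VPT evaluator): renaming matching call/return tags in one left-to-right pass while keeping only a stack of currently open calls, so the memory used is proportional to the nesting height of the input rather than its length. The event encoding and names are illustrative assumptions.

```python
# Hedged toy illustration of height-bounded-memory streaming: the input is a
# stream of ('call', a) / ('ret', a) / ('int', a) events over a nested word;
# we rename each tag on the fly, storing only the stack of open calls.

def rename_stream(events, rename):
    """Yield transformed events one by one; `rename` maps tag -> tag."""
    stack = []                          # open calls only: O(height) memory
    for kind, tag in events:
        if kind == "call":
            stack.append(tag)
            yield ("call", rename(tag))
        elif kind == "ret":
            opened = stack.pop()        # assumes the input is well nested
            assert opened == tag, "ill-nested input"
            yield ("ret", rename(tag))
        else:                           # internal symbol, passed through
            yield (kind, tag)

if __name__ == "__main__":
    stream = [("call", "a"), ("call", "b"), ("int", "x"),
              ("ret", "b"), ("ret", "a")]
    print(list(rename_stream(stream, str.upper)))
```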

    Transition Property For Cube-Free Words

    We study cube-free words over arbitrary non-unary finite alphabets and prove the following structural property: for every pair (u,v) of d-ary cube-free words, if u can be infinitely extended to the right and v can be infinitely extended to the left respecting the cube-freeness property, then there exists a "transition" word w over the same alphabet such that uwv is cube-free. The crucial case is that of the binary alphabet, analyzed in the central part of the paper. The obtained "transition property", together with the developed technique, allows us to solve cube-free versions of three old open problems by Restivo and Salemi. Besides, it has some further implications for combinatorics on words; e.g., it implies the existence of infinite cube-free words of very large subword (factor) complexity. Comment: 14 pages, 5 figures.
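
    For reference, a word is cube-free if it contains no factor of the form xxx with x non-empty. The brute-force Python check below (hypothetical helper name, only meant for small words) can be used to test candidate transition words uwv.

```python
def is_cube_free(word: str) -> bool:
    """True iff `word` contains no factor xxx with x non-empty (brute force)."""
    n = len(word)
    for start in range(n):
        # a cube with period p occupies 3*p consecutive positions
        for p in range(1, (n - start) // 3 + 1):
            if word[start:start + p] == word[start + p:start + 2 * p] == \
               word[start + 2 * p:start + 3 * p]:
                return False
    return True

if __name__ == "__main__":
    print(is_cube_free("abacab"))   # True
    print(is_cube_free("ababab"))   # False: (ab)^3 is a cube
```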

    Deciding Regularity of Hairpin Completions of Regular Languages in Polynomial Time

    The hairpin completion is an operation on formal languages that has been inspired by the hairpin formation in DNA biochemistry and by DNA computing. In this paper we investigate the hairpin completion of regular languages. It is well known that hairpin completions of regular languages are linear context-free and not necessarily regular. As regularity of a (linear) context-free language is not decidable, the question arose whether regularity of a hairpin completion of regular languages is decidable. We prove that this problem is decidable and provide a polynomial-time algorithm. Furthermore, we prove that the hairpin completion of regular languages is an unambiguous linear context-free language and, as such, has an effectively computable growth function. Moreover, we show that the growth of the hairpin completion is exponential if and only if the growth of the underlying languages is exponential, and that, if the hairpin completion is regular, then it and the underlying languages have the same growth indicator.
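
    The notion of an effectively computable growth function can be made concrete in the regular case: given a DFA, the number of accepted words of each length satisfies a simple linear recurrence over the states and can be computed by dynamic programming. The sketch below is a generic illustration of that fact (the DFA and names are made up), not the paper's construction for hairpin completions.

```python
# Growth function g(n) = number of words of length n accepted by a DFA,
# computed by dynamic programming over states.  Whether g(n) grows
# polynomially or exponentially is what a "growth indicator" distinguishes.

def growth(delta, start, accepting, alphabet, max_len):
    """Return [g(0), ..., g(max_len)] for the language of the DFA."""
    counts = {start: 1}                   # number of words of length i reaching each state
    result = [sum(c for q, c in counts.items() if q in accepting)]
    for _ in range(max_len):
        nxt = {}
        for q, c in counts.items():
            for a in alphabet:
                r = delta[q][a]
                nxt[r] = nxt.get(r, 0) + c
        counts = nxt
        result.append(sum(c for q, c in counts.items() if q in accepting))
    return result

if __name__ == "__main__":
    # Toy DFA over {a, b} accepting words with no two consecutive b's.
    delta = {0: {"a": 0, "b": 1}, 1: {"a": 0, "b": 2}, 2: {"a": 2, "b": 2}}
    print(growth(delta, 0, {0, 1}, "ab", 8))   # Fibonacci-like, exponential growth
```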

    Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability

    Previously referred to as 'miraculous' in the scientific literature because of its powerful properties and its wide application as an optimal solution to the problem of induction/inference, (approximations to) Algorithmic Probability (AP) and the associated Universal Distribution are (or should be) of the greatest importance in science. Here we investigate the emergence, the rates of emergence and convergence, and the coding-theorem-like behaviour of AP in Turing-subuniversal models of computation. We investigate empirical distributions of computing models in the Chomsky hierarchy. We introduce measures of algorithmic probability and algorithmic complexity based upon resource-bounded computation, in contrast to the previously and thoroughly investigated distributions produced from the output distribution of Turing machines. This approach allows for numerical approximations to algorithmic (Kolmogorov-Chaitin) complexity-based estimations at each level of a computational hierarchy. We demonstrate that all these estimations are correlated in rank and that they converge both in rank and in value as a function of computational power, despite fundamental differences between computational models. In the context of natural processes that operate below the Turing-universal level because of finite resources and physical degradation, the investigation of natural biases stemming from algorithmic rules may shed light on the distribution of outcomes. We show that up to 60% of the simplicity/complexity bias in the distributions produced even by the weakest of the computational models can be accounted for by Algorithmic Probability in its approximation to the Universal Distribution. Comment: 27 pages main text, 39 pages including supplement. Online complexity calculator: http://complexitycalculator.com
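
    In the spirit of the coding-theorem-like estimations described above, the following hedged Python sketch enumerates all programs of a tiny, made-up stack language up to a length and step bound, tallies their outputs, and uses -log2 of the output frequency as a resource-bounded complexity estimate. The language, bounds, and function names are illustrative assumptions and are unrelated to the specific models studied in the paper.

```python
import math
from itertools import product

# Toy instruction set: a program is a string over {'0', '1', 'd', 's'}:
#   '0'/'1' push a bit, 'd' duplicates the top bit, 's' swaps the top two.
# The "output" of a program is its final stack read bottom-to-top.

def run(program, step_limit=100):
    stack = []
    for steps, op in enumerate(program):
        if steps >= step_limit:            # resource bound on computation
            break
        if op in "01":
            stack.append(op)
        elif op == "d" and stack:
            stack.append(stack[-1])
        elif op == "s" and len(stack) >= 2:
            stack[-1], stack[-2] = stack[-2], stack[-1]
    return "".join(stack)

def empirical_distribution(max_len=6):
    """Output frequencies over all programs of length <= max_len."""
    counts, total = {}, 0
    for n in range(1, max_len + 1):
        for prog in product("01ds", repeat=n):
            out = run(prog)
            counts[out] = counts.get(out, 0) + 1
            total += 1
    return {s: c / total for s, c in counts.items()}

if __name__ == "__main__":
    dist = empirical_distribution()
    for s, p in sorted(dist.items(), key=lambda kv: -kv[1])[:5]:
        print(f"{s!r}: p={p:.4f}, complexity estimate={-math.log2(p):.2f} bits")
```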