
    Linear Bounded Composition of Tree-Walking Tree Transducers: Linear Size Increase and Complexity

    Compositions of tree-walking tree transducers form a hierarchy with respect to the number of transducers in the composition. As the main technical result, it is proved that any such composition can be realized as a linear bounded composition, which means that the sizes of the intermediate results can be chosen to be at most linear in the size of the output tree. This has consequences for the expressiveness and complexity of the translations in the hierarchy. First, if the computed translation is a function of linear size increase, i.e., the size of the output tree is at most linear in the size of the input tree, then it can be realized by just one deterministic tree-walking tree transducer; for compositions of deterministic transducers it is decidable whether or not the translation is of linear size increase. Second, every composition of deterministic transducers can be computed in deterministic linear time on a RAM and in deterministic linear space on a Turing machine, measured in the sum of the sizes of the input and output tree. Similarly, every composition of nondeterministic transducers can be computed in simultaneous polynomial time and linear space on a nondeterministic Turing machine. The output tree languages of such compositions are deterministic context-sensitive, i.e., they can be recognized in deterministic linear space on a Turing machine. The membership problem for compositions of nondeterministic translations is solvable in nondeterministic polynomial time and deterministic linear space. The membership problem for the composition of a nondeterministic and a deterministic tree-walking tree translation (equivalently, for a nondeterministic IO macro tree translation) is log-space reducible to a context-free language, whereas the membership problem for the composition of a deterministic and a nondeterministic tree-walking tree translation (for a nondeterministic OI macro tree translation) is possibly NP-complete.
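
    To make the notion of linear size increase concrete, here is a minimal, hypothetical sketch (a toy encoding, not the paper's formalism) of a deterministic translation whose output has exactly as many nodes as the input: mirroring a binary tree. A full tree-walking transducer could additionally move back up to a node's parent; this sketch only walks downward.

```python
# Toy sketch: a single-state, downward-only transducer that mirrors a
# binary tree. The output size equals the input size, so the translation
# is of linear size increase (hypothetical illustration only).

from dataclasses import dataclass

@dataclass
class Node:
    label: str
    children: tuple = ()

def mirror(node, state="q0"):
    """Copy the tree while swapping the children at every inner node."""
    if not node.children:
        return Node(node.label)                 # emit a leaf and stop this walk
    left, right = node.children
    # emit the current label; the output subtrees come from continuing
    # the walk on the right and left child, respectively
    return Node(node.label, (mirror(right, state), mirror(left, state)))

t = Node("f", (Node("a"), Node("g", (Node("b"), Node("c")))))
print(mirror(t))   # f(g(c, b), a) -- same number of nodes as the input
```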

    On the State Complexity of Partial Derivative Automata For Regular Expressions with Intersection

    Extended regular expressions (with complement and intersection) are used in many applications due to their succinctness. In particular, regular expressions extended with intersection only (also called semi-extended) can already be exponentially smaller than standard regular expressions or equivalent nondeterministic finite automata (NFA). For practical purposes it is important to study the average behaviour of conversions between these models. In this paper, we focus on the conversion of regular expressions with intersection to nondeterministic finite automata, using partial derivatives and the notion of support. First, we give a tight upper bound of 2^{O(n)} for the worst-case number of states of the resulting partial derivative automaton, where n is the size of the expression. Using the framework of analytic combinatorics, we then establish an upper bound of (1.056 + o(1))^n for its asymptotic average-state complexity, which is significantly smaller than the one for the worst case. (c) IFIP International Federation for Information Processing 2016
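
    As a rough illustration of the construction (using assumed, standard Antimirov-style definitions extended to intersection, not necessarily the exact ones of the paper), the sketch below computes partial derivatives of an expression; the pairwise combination in the intersection case is what drives the 2^{O(n)} worst-case state bound.

```python
# Partial derivatives of regular expressions with intersection, encoded as
# nested tuples: ("eps",), ("sym", a), ("alt", r, s), ("cat", r, s),
# ("star", r), ("and", r, s). Assumed textbook-style definitions.

EPS = ("eps",)

def nullable(r):
    tag = r[0]
    if tag == "eps":  return True
    if tag == "sym":  return False
    if tag == "alt":  return nullable(r[1]) or nullable(r[2])
    if tag == "cat":  return nullable(r[1]) and nullable(r[2])
    if tag == "star": return True
    if tag == "and":  return nullable(r[1]) and nullable(r[2])

def cat(r, s):
    if r == EPS: return s
    if s == EPS: return r
    return ("cat", r, s)

def pd(r, a):
    """Set of partial derivatives of expression r w.r.t. symbol a."""
    tag = r[0]
    if tag == "eps":  return set()
    if tag == "sym":  return {EPS} if r[1] == a else set()
    if tag == "alt":  return pd(r[1], a) | pd(r[2], a)
    if tag == "cat":
        out = {cat(d, r[2]) for d in pd(r[1], a)}
        if nullable(r[1]):
            out |= pd(r[2], a)
        return out
    if tag == "star": return {cat(d, r) for d in pd(r[1], a)}
    if tag == "and":  # pairwise intersection of the two derivative sets
        return {("and", d1, d2) for d1 in pd(r[1], a) for d2 in pd(r[2], a)}

# (a+b)* ∩ (ab)*
r = ("and", ("star", ("alt", ("sym", "a"), ("sym", "b"))),
            ("star", ("cat", ("sym", "a"), ("sym", "b"))))
print(pd(r, "a"))   # one derivative: (a+b)* ∩ b(ab)*
```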

    Efficient Parallel Path Checking for Linear-Time Temporal Logic With Past and Bounds

    Path checking, the special case of the model checking problem where the model under consideration is a single path, plays an important role in monitoring, testing, and verification. We prove that for linear-time temporal logic (LTL), path checking can be efficiently parallelized. In addition to the core logic, we consider the extensions of LTL with bounded-future (BLTL) and past-time (LTL+Past) operators. Even though both extensions improve the succinctness of the logic exponentially, path checking remains efficiently parallelizable: our algorithm for LTL, LTL+Past, and BLTL+Past is in AC^1(logDCFL) ⊆ NC.
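
    For contrast with the parallel algorithm, the following sketch shows the straightforward sequential dynamic program for LTL path checking over a finite path (a standard textbook-style evaluation, assumed here for illustration rather than taken from the paper); it runs in time proportional to the product of path length and formula size.

```python
# Sequential LTL path checking: truth values of each subformula are filled
# in right-to-left along the finite path (not the parallel AC^1(logDCFL)
# construction of the paper).

def holds(phi, path):
    """phi is a nested tuple, e.g. ('U', ('ap','p'), ('ap','q'));
    path is a list of sets of atomic propositions. Returns, for every
    position, whether phi holds there."""
    n = len(path)
    op = phi[0]
    if op == "ap":
        return [phi[1] in path[i] for i in range(n)]
    if op == "not":
        return [not v for v in holds(phi[1], path)]
    if op == "and":
        a, b = holds(phi[1], path), holds(phi[2], path)
        return [x and y for x, y in zip(a, b)]
    if op == "X":                       # next (false at the last position)
        sub = holds(phi[1], path)
        return [sub[i + 1] if i + 1 < n else False for i in range(n)]
    if op == "U":                       # until, evaluated right-to-left
        a, b = holds(phi[1], path), holds(phi[2], path)
        out = [False] * n
        for i in range(n - 1, -1, -1):
            out[i] = b[i] or (a[i] and i + 1 < n and out[i + 1])
        return out

# does p U q hold at position 0 of the path {p}{p}{q}?
path = [{"p"}, {"p"}, {"q"}]
print(holds(("U", ("ap", "p"), ("ap", "q")), path)[0])   # True
```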

    On the Complexity of Free Word Orders

    We propose some extensions of mildly context-sensitive formalisms whose aim is to model free word orders in natural languages. We give a detailed analysis of the complexity of the formalisms we propose.

    Computational Complexity and Graph Isomorphism

    The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic, that is, structurally the same. The complexity of graph isomorphism is an open problem: it is one of the few problems in NP which is neither known to be solvable in polynomial time nor known to be NP-complete, and it is one of the most researched open problems in theoretical computer science. The foundations of computability theory lie in recursion theory and in recursive functions, which are an older model of computation than Turing machines. In this master’s thesis we discuss the basics of recursion theory and its main theorems, starting from the axioms. The aim of the second chapter is to define the most important T- and m-reductions and the implication hierarchy between them. Different variations of Turing machines, including nondeterministic and oracle Turing machines, are discussed in the third chapter. A hierarchy of complexity classes can be created by restricting the computational resources available to recursive functions; the members of this hierarchy include, for instance, P and NP. There are hundreds of known complexity classes, and in this work the most important ones regarding graph isomorphism are introduced. Boolean circuits are a different method for approaching computability; some main results and complexity classes of circuit complexity are discussed in the fourth chapter, where the aim is to show that graph isomorphism is hard for the class DET. Graph isomorphism is known to belong to the classes coAM and SPP. These classes are introduced in the fifth chapter using the theory of probabilistic classes, the polynomial hierarchy, interactive proof systems and Arthur-Merlin games. The polynomial hierarchy collapses to its second level if graph isomorphism is NP-complete.
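
    As a small self-contained illustration (not taken from the thesis) of why graph isomorphism lies in NP: a bijection between the vertex sets is a certificate that can be checked in polynomial time, while the naive search over all such bijections is exponential.

```python
# Brute-force graph isomorphism: each candidate permutation is a certificate
# that can be verified in polynomial time; trying them all is exponential,
# matching the fact that no polynomial-time algorithm is known.

from itertools import permutations

def is_isomorphic(g, h):
    """g, h: adjacency sets over vertices 0..n-1 (dicts of sets)."""
    if len(g) != len(h):
        return False
    verts = range(len(g))
    for perm in permutations(verts):            # candidate certificate
        # verifying one certificate: edge (u, v) in g iff (perm[u], perm[v]) in h
        if all((perm[v] in h[perm[u]]) == (v in g[u])
               for u in verts for v in verts):
            return True
    return False

# a 4-cycle and a relabelled 4-cycle are isomorphic;
# a 4-cycle and a path on 4 vertices are not
c4  = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
c4b = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}
p4  = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_isomorphic(c4, c4b), is_isomorphic(c4, p4))   # True False
```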