1,051 research outputs found

    Practical experiments with regular approximation of context-free languages

    Several methods are discussed that construct a finite automaton given a context-free grammar, including both methods that lead to subsets and those that lead to supersets of the original context-free language. Some of these methods of regular approximation are new, and some others are presented here in a more refined form with respect to existing literature. Practical experiments with the different methods of regular approximation are performed for spoken-language input: hypotheses from a speech recognizer are filtered through a finite automaton. Comment: 28 pages; to appear in Computational Linguistics 26(1), March 2000.
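
    The sketch below is not one of the paper's refined constructions; it only illustrates, with an assumed toy grammar and an assumed depth bound, the crudest subset approximation: the strings derivable within a bounded derivation depth form a finite, and hence regular, subset of the context-free language (a trivial superset would be all strings over the grammar's terminal alphabet).

```python
# Minimal sketch (not one of the paper's methods): the simplest subset
# approximation of a context-free language is the finite set of strings
# derivable within a bounded derivation depth; any finite language is regular.
# The grammar and the depth bound are illustrative choices.

GRAMMAR = {                      # toy grammar: S -> a S b | a b
    "S": [["a", "S", "b"], ["a", "b"]],
}

def bounded_language(grammar, start="S", max_depth=4):
    """All terminal strings derivable from `start` in <= max_depth expansions."""
    def expand(symbols, depth):
        # Expand the leftmost nonterminal; if none is left, the string is terminal.
        for i, sym in enumerate(symbols):
            if sym in grammar:
                if depth == 0:
                    return set()           # depth budget exhausted
                out = set()
                for rhs in grammar[sym]:
                    out |= expand(symbols[:i] + rhs + symbols[i + 1:], depth - 1)
                return out
        return {"".join(symbols)}
    return expand([start], max_depth)

print(sorted(bounded_language(GRAMMAR)))   # ['aaaabbbb', 'aaabbb', 'aabb', 'ab']
```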

    Finite Automata for the Sub- and Superword Closure of CFLs: Descriptional and Computational Complexity

    We answer two open questions by Gruber, Holzer, and Kutrib (2009) on the state-complexity of representing sub- or superword closures of context-free grammars (CFGs): (1) We prove a (tight) upper bound of $2^{\mathcal{O}(n)}$ on the size of nondeterministic finite automata (NFAs) representing the subword closure of a CFG of size $n$. (2) We present a family of CFGs for which the minimal deterministic finite automata representing their subword closure match the upper bound of $2^{2^{\mathcal{O}(n)}}$ following from (1). Furthermore, we prove that the inequivalence problem for NFAs representing sub- or superword-closed languages is only NP-complete as opposed to PSPACE-complete for general NFAs. Finally, we extend our results into an approximation method to attack inequivalence problems for CFGs.
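
    The construction below is not the paper's grammar-based one; it is the standard automaton-level illustration of the same closure operator, assuming an NFA given as (source, label, target) triples: the scattered-subword (downward) closure of an NFA's language is accepted by the same NFA with an epsilon edge added in parallel to every labelled transition.

```python
# Minimal sketch at the automaton level (assumption: transitions are a set of
# (src, label, dst) triples); this is the textbook NFA construction, not the
# paper's construction for CFGs.  The subword (downward) closure of an NFA's
# language is accepted by the same NFA with every labelled transition made
# skippable, i.e. an epsilon edge added in parallel.

EPS = None  # epsilon label

def subword_closure(nfa_transitions):
    """Add an epsilon edge parallel to every labelled edge."""
    closed = set(nfa_transitions)
    for src, label, dst in nfa_transitions:
        closed.add((src, EPS, dst))
    return closed

def accepts(transitions, starts, finals, word):
    """Naive NFA acceptance with epsilon edges (fine for tiny examples)."""
    def eclose(states):
        stack, seen = list(states), set(states)
        while stack:
            s = stack.pop()
            for src, label, dst in transitions:
                if src == s and label is EPS and dst not in seen:
                    seen.add(dst)
                    stack.append(dst)
        return seen
    current = eclose(starts)
    for ch in word:
        current = eclose({d for s, l, d in transitions if s in current and l == ch})
    return bool(current & finals)

# NFA for the language {ab}:  0 -a-> 1 -b-> 2
T = {(0, "a", 1), (1, "b", 2)}
C = subword_closure(T)
print(accepts(C, {0}, {2}, "b"))   # True: "b" is a subword of "ab"
print(accepts(T, {0}, {2}, "b"))   # False in the original automaton
```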

    Polynomial Time Algorithms for Multi-Type Branching Processes and Stochastic Context-Free Grammars

    We show that one can approximate the least fixed point solution for a multivariate system of monotone probabilistic polynomial equations in time polynomial in both the encoding size of the system of equations and in $\log(1/\epsilon)$, where $\epsilon > 0$ is the desired additive error bound of the solution. (The model of computation is the standard Turing machine model.) We use this result to resolve several open problems regarding the computational complexity of computing key quantities associated with some classic and heavily studied stochastic processes, including multi-type branching processes and stochastic context-free grammars.
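
    The sketch below illustrates the object being computed, not the paper's algorithm: it uses plain Kleene/value iteration from 0, which converges to the least fixed point of a monotone probabilistic polynomial system (here, the extinction probability of a single-type branching process) but only at a linear rate; the polynomial-time bound in the abstract requires a more sophisticated method.

```python
# Minimal sketch: naive fixed-point iteration for the least solution of a
# monotone probabilistic polynomial equation (not the paper's fast algorithm).
# For a single-type branching process with offspring distribution p_k, the
# extinction probability is the least fixed point of x = sum_k p_k * x**k,
# reached by iterating from x = 0 because the map is monotone.

def extinction_probability(offspring_probs, tol=1e-12, max_iter=100_000):
    """offspring_probs[k] = probability of having k children."""
    x = 0.0
    for _ in range(max_iter):
        nxt = sum(p * x**k for k, p in enumerate(offspring_probs))
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

# Offspring distribution: 0 children w.p. 1/4, 2 children w.p. 3/4.
# Least fixed point of x = 1/4 + (3/4) x^2 is 1/3.
print(extinction_probability([0.25, 0.0, 0.75]))   # ~0.3333...
```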

    Fragment Grammars: Exploring Computation and Reuse in Language

    Language relies on a division of labor between stored units and structure-building operations which combine the stored units into larger structures. This division of labor leads to a tradeoff: more structure-building means less need to store, while more storage means less need to compute structure. We develop a hierarchical Bayesian model called fragment grammar to explore the optimum balance between structure-building and reuse. The model is developed in the context of stochastic functional programming (SFP) and in particular using a probabilistic variant of Lisp known as the Church programming language (Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2008). We show how to formalize several probabilistic models of language structure using Church, and how fragment grammar generalizes one of them---adaptor grammars (Johnson, Griffiths, & Goldwater, 2007). We conclude with experimental data with adults and preliminary evaluations of the model on natural language corpus data.
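
    The sketch below is not the fragment-grammar or adaptor-grammar model itself; it only shows (in Python rather than Church, with an illustrative grammar and probabilities) the shared building block, sampling from a probabilistic context-free grammar, to make the structure-building side of the storage/computation tradeoff concrete.

```python
# Minimal sketch of the building block the abstract's models generalize:
# sampling a string from a probabilistic context-free grammar.  The grammar
# and probabilities are illustrative, not from the paper; fragment grammars
# additionally learn which generated fragments to store and reuse.
import random

PCFG = {   # nonterminal -> list of (probability, right-hand side)
    "S":  [(1.0, ["NP", "VP"])],
    "NP": [(0.7, ["the", "N"]), (0.3, ["NP", "PP"])],
    "VP": [(0.8, ["V", "NP"]), (0.2, ["VP", "PP"])],
    "PP": [(1.0, ["with", "NP"])],
    "N":  [(0.5, ["dog"]), (0.5, ["telescope"])],
    "V":  [(1.0, ["saw"])],
}

def sample(symbol):
    """Recursively expand `symbol` into a list of terminal words."""
    if symbol not in PCFG:
        return [symbol]                          # terminal word
    probs, rhss = zip(*PCFG[symbol])
    rhs = random.choices(rhss, weights=probs)[0]  # pick a rule by its probability
    return [word for sym in rhs for word in sample(sym)]

print(" ".join(sample("S")))   # e.g. "the dog saw the telescope"
```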

    Efficient Patterns for Model Checking Partial State Spaces in CTL & LTL

    Compositional model checks of partial Kripke structures are efficient but incomplete, as they may fail to recognize that all implementations satisfy the checked property. But if a property holds for such checks, it will hold in all implementations. Such checks are therefore under-approximations. In this paper we determine for which popular specification patterns, documented at a community-led pattern repository, this under-approximation is precise in that the converse relationship holds as well for all model checks. We find that many such patterns are indeed precise. Those that aren't lose precision because of a sole propositional atom in mixed polarity. Hence we can compute, with linear blowup only, a semantic minimization in the same temporal logic whose efficient check renders the precise result for the original imprecise pattern. Thus precision can be secured for all patterns at low cost. © 2006 Elsevier B.V. All rights reserved.
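
    Below is a propositional toy version of the under-approximation described above, not the CTL/LTL patterns themselves: atoms of a partial model may be true, false, or unknown; connectives follow Kleene's 3-valued logic; a definite true answer is sound for every completion, while an unknown answer may hide a property that in fact holds in all completions, which is the imprecision semantic minimization is meant to remove.

```python
# Propositional toy sketch of the under-approximation (the paper itself works
# with CTL/LTL over partial Kripke structures).  Atoms may be True, False, or
# None ("unknown"); evaluation uses Kleene's 3-valued connectives.  A definite
# True holds in every completion, but the check can answer "unknown" even when
# the formula holds in all completions, e.g. (p OR NOT p) with p unknown.

def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))

p = None                              # atom with unknown value in the partial model
print(k_or(p, k_not(p)))              # None, although the formula is a tautology
print(k_and(True, k_or(p, True)))     # True: definitely holds in all completions
```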

    Equilibria, Fixed Points, and Complexity Classes

    Many models from a variety of areas involve the computation of an equilibrium or fixed point of some kind. Examples include Nash equilibria in games; market equilibria; computing optimal strategies and the values of competitive games (stochastic and other games); stable configurations of neural networks; analysing basic stochastic models for evolution like branching processes and for language like stochastic context-free grammars; and models that incorporate the basic primitives of probability and recursion like recursive Markov chains. It is not known whether these problems can be solved in polynomial time. There are certain common computational principles underlying different types of equilibria, which are captured by the complexity classes PLS, PPAD, and FIXP. Representative complete problems for these classes are, respectively, pure Nash equilibria in games where they are guaranteed to exist, (mixed) Nash equilibria in 2-player normal form games, and (mixed) Nash equilibria in normal form games with 3 (or more) players. This paper reviews the underlying computational principles and the corresponding classes.
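
    The sketch below is a toy illustration of only the simplest problem the abstract mentions, assuming an explicitly given two-player game in matrix form: brute-force enumeration of pure Nash equilibria. The complexity classes discussed (PLS, PPAD, FIXP) concern settings where no such enumeration is feasible, such as succinctly represented games and mixed equilibria.

```python
# Toy sketch: brute-force search for pure Nash equilibria in an explicitly
# given two-player normal form game (the hardness results in the survey
# concern succinct games, mixed equilibria, and general fixed-point problems).
from itertools import product

# payoffs[i][j] = (row player's payoff, column player's payoff).
# Illustrative coordination game with two pure equilibria.
payoffs = [[(2, 2), (0, 0)],
           [(0, 0), (1, 1)]]

def pure_nash_equilibria(payoffs):
    rows, cols = len(payoffs), len(payoffs[0])
    eqs = []
    for i, j in product(range(rows), range(cols)):
        # No profitable unilateral deviation for either player.
        row_best = all(payoffs[i][j][0] >= payoffs[k][j][0] for k in range(rows))
        col_best = all(payoffs[i][j][1] >= payoffs[i][k][1] for k in range(cols))
        if row_best and col_best:
            eqs.append((i, j))
    return eqs

print(pure_nash_equilibria(payoffs))   # [(0, 0), (1, 1)]
```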

    Using parametric set constraints for locating errors in CLP programs

    This paper introduces a framework of parametric descriptive directional types for constraint logic programming (CLP). It proposes a method for locating type errors in CLP programs and presents a prototype debugging tool. The main technique used is checking correctness of programs w.r.t. type specifications. The approach is based on a generalization of known methods for proving correctness of logic programs to the case of parametric specifications. Set-constraint techniques are used for formulating and checking verification conditions for (parametric) polymorphic type specifications. The specifications are expressed in a parametric extension of the formalism of term grammars. The soundness of the method is proved, and the prototype debugging tool supporting the proposed approach is illustrated on examples. The paper is a substantial extension of the previous work by the same authors concerning monomorphic directional types. Comment: 64 pages; to appear in Theory and Practice of Logic Programming.
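
    The sketch below is not the paper's parametric framework; it only shows, for an illustrative term grammar, the kind of least-fixpoint computation over productions that set-constraint and term-grammar reasoning is built on: deciding which set variables are inhabited, i.e. generate at least one ground term.

```python
# Minimal sketch of a least-fixpoint computation over term-grammar productions
# (the basic ingredient of set-constraint reasoning; not the paper's parametric
# framework).  Productions map a set variable to alternatives, each a
# constructor applied to set variables; we compute which variables generate
# at least one ground term.

GRAMMAR = {            # illustrative: naturals and lists of naturals
    "Nat":  [("zero", []), ("succ", ["Nat"])],
    "List": [("nil", []), ("cons", ["Nat", "List"])],
    "Bad":  [("wrap", ["Bad"])],          # never produces a ground term
}

def inhabited(grammar):
    known = set()
    changed = True
    while changed:                         # iterate to the least fixed point
        changed = False
        for var, alts in grammar.items():
            if var not in known and any(all(a in known for a in args)
                                        for _con, args in alts):
                known.add(var)
                changed = True
    return known

print(sorted(inhabited(GRAMMAR)))   # ['List', 'Nat']  ('Bad' denotes the empty set)
```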

    04241 Abstracts Collection -- Graph Transformations and Process Algebras for Modeling Distributed and Mobile Systems

    Recently there has been a lot of research combining concepts of process algebra with those of the theory of graph grammars and graph transformation systems. Both can be viewed as general frameworks in which one can specify and reason about concurrent and distributed systems. There are many areas where both theories overlap, and this reaches much further than just using graphs to give a graphic representation to processes. Processes in a communication network can be seen in two different ways: as terms in an algebraic theory, emphasizing their behaviour and their interaction with the environment, and as nodes (or edges) in a graph, emphasizing their topology and their connectedness. Especially topology, mobility and dynamic reconfigurations at runtime can be modelled in a very intuitive way using graph transformation. On the other hand, the definition and proof of behavioural equivalences is often easier in the process algebra setting. Also, standard techniques of algebraic semantics for universal constructions, refinement and compositionality can take better advantage of the process algebra representation. An important example where the combined theory is more convenient than both alternatives is for defining the concurrent (noninterleaving), abstract semantics of distributed systems. Here graph transformations lack abstraction and process algebras lack expressiveness. Another important example is the work on bigraphical reactive systems, with the aim of deriving a labelled transition system from an unlabelled reactive system such that the resulting bisimilarity is a congruence. Here, graphs seem to be a convenient framework in which this theory can be stated and developed. So, although it is the central aim of both frameworks to model and reason about concurrent systems, the semantics of processes can have a very different flavour in these theories. Research in this area aims at combining the advantages of both frameworks and translating concepts of one theory into the other. The Dagstuhl Seminar, which took place from 06.06. to 11.06.2004, was aimed at bringing together researchers of the two communities in order to share their ideas and develop new concepts. These proceedings contain not only abstracts of the talks given at the seminar, but also summaries of topics of central interest. We would like to thank all participants of the seminar for coming and sharing their ideas, and everybody who has contributed to the proceedings.
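
    Below is a toy illustration of the graph-transformation view mentioned above (system states as graphs, reconfiguration as rule application), with an invented "move" rule and edge encoding; it is not tied to any particular formalism discussed at the seminar.

```python
# Toy sketch of graph transformation as a model of mobility: a state is a set
# of labelled edges, and the single (illustrative) rule moves a process along
# a link:  at(p, l1), link(l1, l2)  ==>  at(p, l2), link(l1, l2).

def move_steps(graph):
    """All states reachable by one application of the 'move' rule."""
    results = []
    for (lab1, p, l1) in graph:
        if lab1 != "at":
            continue
        for (lab2, a, b) in graph:
            if lab2 == "link" and a == l1:
                new = (graph - {("at", p, l1)}) | {("at", p, b)}
                results.append(new)
    return results

state = {("at", "proc", "n1"), ("link", "n1", "n2"), ("link", "n2", "n3")}
for succ in move_steps(state):
    print(sorted(succ))
# One successor: proc has moved from n1 to n2; the topology is unchanged.
```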