    Accepting grammars and systems

    We investigate several kinds of regulated rewriting (programmed, matrix, with regular control, ordered, and variants thereof) and of parallel rewriting mechanisms (Lindenmayer systems, uniformly limited Lindenmayer systems, limited Lindenmayer systems, and scattered context grammars) as accepting devices, in contrast with the usual generating mode. In some cases, the accepting mode turns out to be just as powerful as the generating mode, e.g. within the grammars of the Chomsky hierarchy, random context grammars, grammars with regular control, L systems, uniformly limited L systems, and scattered context grammars. Most of these equivalences can be proved using a metatheorem on so-called context condition grammars. In the case of matrix grammars and programmed grammars without appearance checking, a straightforward construction leads to the desired equivalence result. Interestingly, accepting devices are strictly more powerful than their generating counterparts in the case of ordered grammars, programmed and matrix grammars with appearance checking (even programmed grammars with unconditional transfer), and 1lET0L systems. More precisely, if we admit erasing productions, we arrive at new characterizations of the recursively enumerable languages, and if we do not admit them, we get new characterizations of the context-sensitive languages. Moreover, we supplement the published literature by showing:
    - The emptiness and membership problems are recursively solvable for generating ordered grammars, even if we admit erasing productions.
    - Uniformly limited propagating systems can be simulated by programmed grammars without erasing productions and without appearance checking; hence the emptiness and membership problems are recursively solvable for such systems.
    - We briefly discuss the degree of nondeterminism and the degree of synchronization for devices with limited parallelism.
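
    To make the generating/accepting contrast concrete, here is a minimal sketch in Python using a toy context-free grammar for a^n b^n, not any of the regulated mechanisms above: generating mode rewrites the start symbol toward the input, while accepting mode applies the same productions in reverse, from the input back to the start symbol.

```python
# Minimal sketch (not from the paper): the same productions read in
# generating mode (rewrite left side to right side, starting from S)
# versus accepting mode (rewrite right side back to left side,
# starting from the input and trying to reach S).

PRODUCTIONS = [("S", "aSb"), ("S", "ab")]  # toy grammar for a^n b^n

def generates(target: str, limit: int = 20) -> bool:
    """Generating mode: breadth-first search of derivations from S."""
    frontier = {"S"}
    for _ in range(limit):
        if target in frontier:
            return True
        nxt = set()
        for form in frontier:
            for lhs, rhs in PRODUCTIONS:
                i = form.find(lhs)
                while i != -1:
                    new = form[:i] + rhs + form[i + len(lhs):]
                    if len(new) <= len(target):  # prune: forms only grow
                        nxt.add(new)
                    i = form.find(lhs, i + 1)
        frontier = nxt
    return False

def accepts(word: str, limit: int = 20) -> bool:
    """Accepting mode: apply productions in reverse (rhs -> lhs)
    and accept once the start symbol alone remains."""
    frontier = {word}
    for _ in range(limit):
        if "S" in frontier:
            return True
        nxt = set()
        for form in frontier:
            for lhs, rhs in PRODUCTIONS:
                i = form.find(rhs)
                while i != -1:
                    nxt.add(form[:i] + lhs + form[i + len(rhs):])
                    i = form.find(rhs, i + 1)
        frontier = nxt
    return False

assert generates("aaabbb") and accepts("aaabbb")
assert not generates("aab") and not accepts("aab")
```

    For context-free grammars both modes define the same language, as here; the abstract's point is that for several regulated mechanisms the two modes come apart.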

    Taking Primitive Optimality Theory Beyond the Finite State

    Primitive Optimality Theory (OTP) (Eisner, 1997a; Albro, 1998), a computational model of Optimality Theory (Prince and Smolensky, 1993), employs a finite state machine to represent the set of active candidates at each stage of an Optimality Theoretic derivation, as well as weighted finite state machines to represent the constraints themselves. For some purposes, however, it would be convenient if the set of candidates were limited by some set of criteria capable of being described only in a higher-level grammar formalism, such as a Context Free Grammar, a Context Sensitive Grammar, or a Multiple Context Free Grammar (Seki et al., 1991). Examples include reduplication and phrasal stress models. Here we introduce a mechanism for OTP-like Optimality Theory in which the constraints remain weighted finite state machines, but sets of candidates are represented by higher-level grammars. In particular, we use multiple context-free grammars to model reduplication in the manner of Correspondence Theory (McCarthy and Prince, 1995), and develop an extended version of the Earley Algorithm (Earley, 1970) to apply the constraints to a reduplicating candidate set.
    Comment: 11 pages, 5 figures, workshop
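
    As a much simplified illustration of the weighted-finite-state side of this setup, the sketch below encodes a hypothetical *NoCoda-style constraint as a weighted automaton over a CV alphabet and keeps the candidates with the fewest violations; the MCFG candidate representation and the extended Earley algorithm from the paper are not modeled here.

```python
# Hedged sketch (not the paper's implementation): an Optimality Theory
# constraint as a weighted finite-state machine.  Each arc carries a
# violation count; a candidate's cost is the total weight along its
# (here deterministic) path plus a final weight, and evaluation keeps
# the cheapest candidates.

ARCS = {
    ("q0", "C"): ("q0", 0),   # onset / cluster consonant
    ("q0", "V"): ("qV", 0),
    ("qV", "V"): ("qV", 0),
    ("qV", "C"): ("qVC", 0),  # decision deferred: onset or coda?
    ("qVC", "V"): ("qV", 0),  # that C was an onset: no violation
    ("qVC", "C"): ("q0", 1),  # that C was a coda: one violation
}
FINAL_WEIGHT = {"q0": 0, "qV": 0, "qVC": 1}  # word-final coda

def violations(candidate: str) -> int:
    state, cost = "q0", 0
    for symbol in candidate:
        state, weight = ARCS[(state, symbol)]
        cost += weight
    return cost + FINAL_WEIGHT[state]

def evaluate(candidates):
    """Keep only the candidates incurring the fewest violations."""
    best = min(map(violations, candidates))
    return [c for c in candidates if violations(c) == best]

print(evaluate(["CVCV", "CVC", "CVCCV"]))  # -> ['CVCV'] (0 violations)
```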

    The Tree-Generative Capacity of Combinatory Categorial Grammars

    The generative capacity of combinatory categorial grammars as acceptors of tree languages is investigated. It is demonstrated that the tree languages obtained in this way can also be generated by simple monadic context-free tree grammars. However, the subclass of pure combinatory categorial grammars cannot even accept all regular tree languages. Additionally, the tree languages accepted by combinatory categorial grammars with limited rule degrees are characterized: if only application rules are allowed, then they can accept only a proper subset of the regular tree languages, whereas they can accept exactly the regular tree languages once first-degree composition rules are permitted.
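
    The rule degrees contrasted above can be illustrated with a small toy sketch (an assumption-laden simplification, not the paper's tree-language machinery): categories as nested tuples, plus forward/backward application and first-degree forward composition.

```python
# A category is either an atom (str) or a tuple (result, slash, argument).
# Each combinator returns the derived category, or None if inapplicable.

def fapp(x, y):
    """Forward application:  X/Y  Y  =>  X."""
    if isinstance(x, tuple) and x[1] == "/" and x[2] == y:
        return x[0]

def bapp(y, x):
    """Backward application:  Y  X\\Y  =>  X."""
    if isinstance(x, tuple) and x[1] == "\\" and x[2] == y:
        return x[0]

def fcomp(x, y):
    """First-degree forward composition:  X/Y  Y/Z  =>  X/Z."""
    if (isinstance(x, tuple) and x[1] == "/" and
            isinstance(y, tuple) and y[1] == "/" and x[2] == y[0]):
        return (x[0], "/", y[2])

NP, S = "NP", "S"
IV = (S, "\\", NP)              # intransitive verb category: S\NP
TV = ((S, "\\", NP), "/", NP)   # transitive verb category: (S\NP)/NP
RAISED = (S, "/", IV)           # type-raised subject: S/(S\NP)

print(fapp(TV, NP))             # -> (S\NP), e.g. "likes Mary"
print(bapp(NP, fapp(TV, NP)))   # -> S, e.g. "John likes Mary"
print(fcomp(RAISED, TV))        # -> (S/NP), "John likes" awaiting object
```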

    Descriptional Complexity of Three-Nonterminal Scattered Context Grammars: An Improvement

    Recently, it has been shown that every recursively enumerable language can be generated by a scattered context grammar with no more than three nonterminals. However, in that construction, the maximal number of nonterminals simultaneously rewritten during a derivation step depends on many factors, such as the cardinality of the alphabet of the generated language and the structure of the generated language itself. This paper improves the result by showing that the maximal number of nonterminals simultaneously rewritten during any derivation step can be limited by a small constant, regardless of other factors.
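
    For readers unfamiliar with the formalism, the following hedged sketch shows what "simultaneously rewritten" means in one scattered context derivation step; the grammar and rule are invented for illustration and are unrelated to the paper's construction.

```python
# Hedged sketch: one derivation step of a scattered context grammar.
# A rule (A1, ..., An) -> (w1, ..., wn) rewrites n nonterminals
# simultaneously, in order but with arbitrary gaps between them; n is
# the quantity the paper bounds by a small constant.

from itertools import combinations

def step(sentential_form, rule):
    """Yield every string reachable by one application of `rule`."""
    lhs, rhs = rule
    # choose positions for A1..An, left to right
    for combo in combinations(range(len(sentential_form)), len(lhs)):
        if all(sentential_form[i] == a for i, a in zip(combo, lhs)):
            out, prev = [], 0
            for i, w in zip(combo, rhs):
                out.append(sentential_form[prev:i])
                out.append(w)
                prev = i + 1
            out.append(sentential_form[prev:])
            yield "".join(out)

# (A, B) -> (aA, bB) applied to "AcB": both nonterminals rewritten at once.
print(list(step("AcB", (("A", "B"), ("aA", "bB")))))  # ['aAcbB']
```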

    A Bibliography on Fuzzy Automata, Grammars and Languages

    This bibliography contains references to papers on fuzzy formal languages, the generation of fuzzy languages by means of fuzzy grammars, the recognition of fuzzy languages by fuzzy automata and machines, as well as some applications of fuzzy set theory to syntactic pattern recognition, linguistics, and natural language processing.
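
    As a pointer to the kind of recognition the cited papers study, here is a minimal sketch of one common definition (a max-min fuzzy automaton, with invented transition degrees; exact definitions vary across the literature): a string's degree of acceptance is the best run, each run scored by its weakest step.

```python
# Sketch of a max-min fuzzy automaton (invented example, not from any
# specific cited paper).  The acceptance degree of a word is the
# maximum over runs of the minimum membership degree along the run.

DELTA = {  # (state, symbol, state) -> membership degree in [0, 1]
    ("q0", "a", "q0"): 0.9,
    ("q0", "a", "q1"): 0.6,
    ("q1", "b", "q1"): 0.8,
}
INITIAL = {"q0": 1.0}
FINAL = {"q1": 1.0}

def degree(word: str) -> float:
    """max over runs of min(initial, transition degrees..., final)."""
    current = dict(INITIAL)  # best degree of reaching each state so far
    for symbol in word:
        nxt = {}
        for (p, s, q), mu in DELTA.items():
            if s == symbol and p in current:
                nxt[q] = max(nxt.get(q, 0.0), min(current[p], mu))
        current = nxt
    return max((min(v, FINAL.get(q, 0.0)) for q, v in current.items()),
               default=0.0)

print(degree("ab"))   # 0.6: q0 -a(0.6)-> q1 -b(0.8)-> q1
print(degree("aab"))  # 0.6: q0 -a(0.9)-> q0 -a(0.6)-> q1 -b(0.8)-> q1
print(degree("b"))    # 0.0: no run reaches a final state
```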

    Implicit learning of recursive context-free grammars

    Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding, and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important, as performance was greater for tail-embedding than for centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning and challenge existing theories and computational models of implicit learning.
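
    The embedding distinction at issue can be made concrete with a small sketch (invented word classes, far simpler than the paper's stimulus grammars): centre-embedding nests dependencies so they close in reverse order, while tail-embedding closes each dependency before the next one opens.

```python
# Toy illustration of the two structure types; matching indices mark a
# long-distance dependency between an A-class and a B-class word.

def centre(depth: int) -> list[str]:
    """Centre-embedding: A1 A2 ... An Bn ... B2 B1 (nested, stack-like)."""
    return [f"A{i}" for i in range(1, depth + 1)] + \
           [f"B{i}" for i in range(depth, 0, -1)]

def tail(depth: int) -> list[str]:
    """Tail-embedding: A1 B1 A2 B2 ... (each dependency closes before
    the next opens, so no stack is needed)."""
    out = []
    for i in range(1, depth + 1):
        out += [f"A{i}", f"B{i}"]
    return out

print(" ".join(centre(3)))  # A1 A2 A3 B3 B2 B1
print(" ".join(tail(3)))    # A1 B1 A2 B2 A3 B3
```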