    From left-regular to Greibach normal form grammars

    Each context-free grammar can be transformed to a context-free grammar in Greibach normal form, that is, a context-free grammar where each right-hand side of a production begins with a terminal symbol and the remainder of the right-hand side consists of nonterminal symbols. In this short paper we show that for a left-regular grammar G we can obtain a right-regular grammar G’ (which is by definition in Greibach normal form) which left-to-right covers G (in this case left parses of G’ can be mapped by a homomorphism onto right parses of G). Moreover, it is possible to obtain a context-free grammar G” in Greibach normal form which right covers the left-regular grammar G (in this case right parses of G” are mapped onto right parses of G).
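
    As a hedged illustration of the kind of transformation involved, the sketch below shows only the classic conversion of a left-regular grammar into a right-regular one generating the same language; it is not the paper's cover-preserving construction, and it assumes the start symbol never occurs on a right-hand side. The rule encoding and the fresh start symbol S' are inventions of the example.

        # Sketch: convert left-regular rules (A -> B a | a) into right-regular
        # rules (A -> a B | a) that generate the same language.  Rules are
        # (lhs, rhs) pairs with rhs a tuple of symbols; a fresh start symbol
        # S' is introduced.  Illustrative only, not the cover construction.
        def left_to_right_regular(rules, start, new_start="S'"):
            out = []
            for lhs, rhs in rules:
                if len(rhs) == 1:                     # A -> a
                    a = rhs[0]
                    out.append((new_start, (a,) if lhs == start else (a, lhs)))
                else:                                 # A -> B a
                    b, a = rhs
                    out.append((b, (a,) if lhs == start else (a, lhs)))
            return out, new_start

        # Example: S -> A b,  A -> A a | a   (generates a+ b)
        rules = [("S", ("A", "b")), ("A", ("A", "a")), ("A", ("a",))]
        print(left_to_right_regular(rules, "S"))
        # ([('A', ('b',)), ('A', ('a', 'A')), ("S'", ('a', 'A'))], "S'")

    Every right-hand side of the resulting rules (A -> b, A -> a A, S' -> a A) begins with a terminal, so the right-regular grammar is indeed in Greibach normal form.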

    D-Tree Grammars

    DTG are designed to share some of the advantages of TAG while overcoming some of its limitations. DTG involve two composition operations called subsertion and sister-adjunction. The most distinctive feature of DTG is that, unlike TAG, there is complete uniformity in the way that the two DTG operations relate lexical items: subsertion always corresponds to complementation and sister-adjunction to modification. Furthermore, DTG, unlike TAG, can provide a uniform analysis for wh-movement in English and Kashmiri, despite the fact that the wh element in Kashmiri appears in sentence-second position, and not sentence-initial position as in English. Comment: LaTeX source, needs aclap.sty, 8 pages, to appear in ACL-9
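
    As a toy illustration of the modification side only, the sketch below implements sister-adjunction in the loose sense of attaching a modifier tree as a new daughter (a sister to the existing daughters) of a host node; the Node type and the example sentence are invented for the example, and subsertion, the complementation operation, is not modelled.

        # Hedged sketch: a minimal tree type plus a sister-adjunction
        # operation that inserts a modifier tree among the daughters of the
        # first matching host node (pre-order search).  Not DTG's full
        # d-tree machinery.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Node:
            label: str
            children: List["Node"] = field(default_factory=list)

        def sister_adjoin(host, target_label, modifier, position=0):
            stack = [host]
            while stack:
                node = stack.pop()
                if node.label == target_label:
                    node.children.insert(position, modifier)
                    return
                stack.extend(reversed(node.children))
            raise ValueError("no node labelled " + target_label)

        # "Kim sleeps" with an AdvP modifier sister-adjoined inside the VP
        s = Node("S", [Node("NP", [Node("Kim")]),
                       Node("VP", [Node("V", [Node("sleeps")])])])
        sister_adjoin(s, "VP", Node("AdvP", [Node("soundly")]), position=1)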

    Automatic annotation of the Penn-treebank with LFG f-structure information

    Lexical-Functional Grammar f-structures are abstract syntactic representations approximating basic predicate-argument structure. Treebanks annotated with f-structure information are required as training resources for stochastic versions of unification and constraint-based grammars and for the automatic extraction of such resources. A number of papers (Frank, 2000; Sadler, van Genabith and Way, 2000) have developed methods for automatically annotating treebank resources with f-structure information. However, to date, these methods have only been applied to treebank fragments of the order of a few hundred trees. In the present paper we present a new method that scales and has been applied to a complete treebank, in our case the WSJ section of Penn-II (Marcus et al., 1994), with more than 1,000,000 words in about 50,000 sentences.
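
    As a hedged, toy-scale sketch of the general idea (not the authors' annotation algorithm), one can imagine annotation principles that map Penn-style configurations to skeletal f-structures, for instance an NP daughter of S to SUBJ and an NP daughter of VP to OBJ. The tree encoding and the two principles below are invented purely for illustration.

        # Toy sketch only: derive a skeletal f-structure from a flat
        # Penn-style tree with two made-up annotation principles
        # (NP under S -> SUBJ, NP under VP -> OBJ); real annotation
        # algorithms use far richer configurational information.
        def annotate(tree):
            label, children = tree[0], tree[1:]
            f = {}
            if label == "S":
                for child in children:
                    if child[0] == "NP":
                        f["SUBJ"] = annotate(child)
                    else:
                        f.update(annotate(child))   # head daughter shares its f-structure
            elif label == "VP":
                for child in children:
                    if child[0] == "NP":
                        f["OBJ"] = annotate(child)
                    else:
                        f.update(annotate(child))
            elif label in ("V", "VBZ"):             # verb supplies the PRED
                f["PRED"] = children[0]
            elif label == "NP":                     # flat NP: the leaf is the head
                f["PRED"] = children[-1]
            return f

        t = ("S", ("NP", "Mary"), ("VP", ("VBZ", "sees"), ("NP", "John")))
        print(annotate(t))
        # {'SUBJ': {'PRED': 'Mary'}, 'PRED': 'sees', 'OBJ': {'PRED': 'John'}}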

    Punctuation in Quoted Speech

    Quoted speech is often set off by punctuation marks, in particular quotation marks. Thus, it might seem that quotation marks would be extremely useful in identifying these structures in texts. Unfortunately, the situation is not quite so clear. In this work, I will argue that quotation marks are not adequate for either identifying or constraining the syntax of quoted speech. More useful information comes from the presence of a quoting verb, which is either a verb of saying or a punctual verb, and the presence of other punctuation marks, usually commas. Using a lexicalized grammar, we can license most quoting clauses as text adjuncts. A distinction will be made not between direct and indirect quoted speech, but rather between adjunct and non-adjunct quoting clauses. Comment: 11 pages, 11 ps figures, Proceedings of SIGPARSE 96 - Punctuation in Computational Linguistics

    12th International Workshop on Termination (WST 2012) : WST 2012, February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The 12th International Workshop on Termination welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination (for example in program transformation or theorem proving) were particularly welcome. We received 18 submissions, all of which were accepted. Each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosted three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.

    Grammars working on layered strings

    We consider first an operation with strings and languages suggested by superposed windows on the computer screen (as well as by cryptographic systems of the Richelieu type): we assume that the strings contain usual symbols as well as a transparent symbol. Superposing two strings (left-justified), we produce a new string consisting of the symbols observable from above. This operation is investigated as an abstract operation on strings, then it is used in building a variant of grammar systems with the component grammars working on the layers of an array of strings. Each grammar can rewrite only symbols in its layer which are observable from above. The language generated in this way consists of strings of the observable symbols, produced at the end of a derivation. The power of several variants of these generative mechanisms is investigated for the case of two layered strings. When a matrix-like control on the work of the component grammars is considered, a characterization of recursively enumerable languages is obtained.
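
    A minimal sketch of the superposition operation itself is given below; the choice of '*' as the transparent symbol and the padding of the shorter string with transparent symbols are assumptions made for this example.

        # Sketch: superpose two strings, left-justified.  At each position the
        # symbol observable from above is the upper symbol, unless that symbol
        # is transparent, in which case the lower one shows through.
        TRANSPARENT = "*"

        def superpose(upper, lower):
            n = max(len(upper), len(lower))
            out = []
            for i in range(n):
                u = upper[i] if i < len(upper) else TRANSPARENT
                l = lower[i] if i < len(lower) else TRANSPARENT
                out.append(l if u == TRANSPARENT else u)
            return "".join(out)

        print(superpose("a*c*", "xbyd"))   # -> abcd
        print(superpose("ab", "xyz"))      # -> abz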