11 research outputs found

    Treebank-based acquisition of wide-coverage, probabilistic LFG resources: project overview, results and evaluation

    This paper presents an overview of a project to acquire wide-coverage, probabilistic Lexical-Functional Grammar (LFG) resources from treebanks. Our approach is based on an automatic annotation algorithm that annotates "raw" treebank trees with LFG f-structure information approximating to basic predicate-argument/dependency structure. From the f-structure-annotated treebank we extract probabilistic unification grammar resources. We present the annotation algorithm, the extraction of lexical information and the acquisition of wide-coverage and robust PCFG-based LFG approximations including long-distance dependency resolution. We show how the methodology can be applied to multilingual, treebank-based unification grammar acquisition. Finally we show how simple (quasi-)logical forms can be derived automatically from the f-structures generated for the treebank trees.
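    The last step, reading a flat logical form off an f-structure, can be sketched as follows. This is only an illustrative sketch: the nested-dict encoding of f-structures, the `logical_form` helper and the example sentence are assumptions for exposition, not the paper's actual representation or algorithm.

```python
def logical_form(fs):
    """Derive a flat (quasi-)logical form from a toy f-structure.

    An f-structure is modelled here as a nested dict with a 'pred'
    value and optional grammatical-function attributes; only 'subj'
    and 'obj' are handled in this sketch.
    """
    args = [logical_form(fs[gf]) for gf in ("subj", "obj") if gf in fs]
    pred = fs["pred"]
    return f"{pred}({', '.join(args)})" if args else pred

# "John sees Mary" as a minimal f-structure
fs = {"pred": "see", "subj": {"pred": "john"}, "obj": {"pred": "mary"}}
print(logical_form(fs))  # see(john, mary)
```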

    Large-scale induction and evaluation of lexical resources from the Penn-II treebank

    In this paper we present a methodology for extracting subcategorisation frames based on an automatic LFG f-structure annotation algorithm for the Penn-II Treebank. We extract abstract syntactic function-based subcategorisation frames (LFG semantic forms), traditional CFG category-based subcategorisation frames as well as mixed function/category-based frames, with or without preposition information for obliques and particle information for particle verbs. Our approach does not predefine frames, associates probabilities with frames conditional on the lemma, distinguishes between active and passive frames, and fully reflects the effects of long-distance dependencies in the source data structures. We extract 3586 verb lemmas, 14348 semantic form types (an average of 4 per lemma) with 577 frame types. We present a large-scale evaluation of the complete set of forms extracted against the full COMLEX resource.
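    Associating frames with probabilities conditional on the lemma amounts to relative-frequency estimation over (lemma, frame) observations. The sketch below illustrates that idea only; the function name, the bracketed frame notation and the toy counts are assumptions for exposition, not the paper's data.

```python
from collections import Counter, defaultdict

def frame_probabilities(observations):
    """Estimate P(frame | lemma) by relative frequency.

    `observations` is an iterable of (lemma, frame) pairs, e.g. as
    read off f-structure-annotated trees.
    """
    counts = defaultdict(Counter)
    for lemma, frame in observations:
        counts[lemma][frame] += 1
    probs = {}
    for lemma, frames in counts.items():
        total = sum(frames.values())
        probs[lemma] = {f: n / total for f, n in frames.items()}
    return probs

obs = [
    ("give", "[subj,obj,obj2]"),
    ("give", "[subj,obj,obl:to]"),
    ("give", "[subj,obj,obj2]"),
    ("sleep", "[subj]"),
]
probs = frame_probabilities(obs)
print(probs["give"]["[subj,obj,obj2]"])  # 2/3 of the 'give' tokens
```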

    Treebank-Based Deep Grammar Acquisition for French Probabilistic Parsing Resources

    Motivated by the expense in time and other resources to produce hand-crafted grammars, there has been increased interest in wide-coverage grammars automatically obtained from treebanks. In particular, recent years have seen a move towards acquiring deep (LFG, HPSG and CCG) resources that can represent information absent from simple CFG-type structured treebanks and which are considered to produce more language-neutral linguistic representations, such as syntactic dependency trees. As is often the case in early pioneering work in natural language processing, English has been the focus of attention in the first efforts towards acquiring treebank-based deep-grammar resources, followed by treatments of, for example, German, Japanese, Chinese and Spanish. However, to date no comparable large-scale automatically acquired deep-grammar resources have been obtained for French. The goal of the research presented in this thesis is to develop, implement, and evaluate treebank-based deep-grammar acquisition techniques for French. Along the way towards achieving this goal, this thesis presents the derivation of a new treebank for French from the Paris 7 Treebank, the Modified French Treebank, a cleaner, more coherent treebank with several transformed structures and new linguistic analyses. Statistical parsers trained on this data outperform those trained on the original Paris 7 Treebank, which has five times the amount of data. The Modified French Treebank is the data source used for the development of treebank-based automatic deep-grammar acquisition for LFG parsing resources for French, based on an f-structure annotation algorithm for this treebank. LFG CFG-based parsing architectures are then extended and tested, achieving a competitive best f-score of 86.73% for all features. 
The CFG-based parsing architectures are then complemented with an alternative dependency-based statistical parsing approach, obviating the CFG-based parsing step, and instead directly parsing strings into f-structures.
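    F-scores like the 86.73% reported above are standardly computed as the harmonic mean of precision and recall over sets of dependency triples. A minimal sketch of that metric, with an invented two-triple example (the triple layout and toy data are assumptions, not the thesis's evaluation harness):

```python
def dependency_fscore(gold, predicted):
    """Precision, recall and F-score over sets of dependency
    triples (head, relation, dependent)."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                      # true positives
    p = tp / len(predicted) if predicted else 0.0   # precision
    r = tp / len(gold) if gold else 0.0             # recall
    f = 2 * p * r / (p + r) if p + r else 0.0       # harmonic mean
    return p, r, f

gold = {("saw", "subj", "John"), ("saw", "obj", "Mary")}
pred = {("saw", "subj", "John"), ("saw", "obj", "dog")}
print(dependency_fscore(gold, pred))  # (0.5, 0.5, 0.5)
```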

    Parsing with automatically acquired, wide-coverage, robust, probabilistic LFG approximations

    Traditionally, rich, constraint-based grammatical resources have been hand-coded. Scaling such resources beyond toy fragments to unrestricted, real text is knowledge-intensive, time-consuming and expensive. The work reported in this thesis is part of a larger project to automate as much as possible the construction of wide-coverage, deep, constraint-based grammatical resources from treebanks. The Penn-II treebank is a large collection of parse-annotated newspaper text. We have designed a Lexical-Functional Grammar (LFG) (Kaplan and Bresnan, 1982) f-structure annotation algorithm to automatically annotate this treebank with f-structure information approximating to basic predicate-argument or dependency structures (Cahill et al., 2002c, 2004a). We then use the f-structure-annotated treebank resource to automatically extract grammars and lexical resources for parsing new text into f-structures. We have designed and implemented the Treebank Tool Suite (TTS) to support the linguistic work that seeds the automatic f-structure annotation algorithm (Cahill and van Genabith, 2002) and the F-Structure Annotation Tool (FSAT) to validate and visualise the results of automatic f-structure annotation. We have designed and implemented two PCFG-based probabilistic parsing architectures for parsing unseen text into f-structures: the pipeline and the integrated model. Both architectures parse raw text into basic, but possibly incomplete, predicate-argument structures ("proto f-structures") with long distance dependencies (LDDs) unresolved (Cahill et al., 2002c). We have designed and implemented a method for automatically resolving LDDs at f-structure level based on a finite approximation of functional uncertainty equations (Kaplan and Zaenen, 1989) automatically acquired from the f-structure-annotated treebank resource (Cahill et al., 2004b). 
To date, the best result achieved by our own Penn-II induced grammars is a dependency f-score of 80.33% against the PARC 700, an improvement of 0.73% over the best hand-crafted grammar of Kaplan et al. (2004). The processing architecture developed in this thesis is highly flexible: using external, state-of-the-art parsing technologies (Charniak, 2000) in our pipeline model, we achieve a dependency f-score of 81.79% against the PARC 700, an improvement of 2.19% over the results reported in Kaplan et al. (2004). We have also ported our grammar induction methodology to German and the TIGER treebank resource (Cahill et al., 2003a). We have developed a method for treebank-based, wide-coverage, deep, constraint-based grammar acquisition. The resulting PCFG-based LFG approximations parse the Penn-II treebank with wider coverage (measured in terms of complete spanning parse) and parsing results comparable to or better than those achieved by the best hand-crafted grammars, with, we believe, considerably less grammar development effort. We believe that our approach successfully addresses the knowledge-acquisition bottleneck (familiar from rule-based approaches to AI and NLP) in wide-coverage, constraint-based grammar development. Our approach can provide an attractive, wide-coverage, multilingual, deep, constraint-based grammar acquisition paradigm.
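    The LDD-resolution idea, trying corpus-acquired grammatical-function paths in order until one lands on an open subcategorised slot, can be sketched as below. The nested-dict f-structure layout, the `resolve_ldd` helper and the example are hypothetical stand-ins for illustration, not the thesis's implementation.

```python
def resolve_ldd(fstruct, ranked_paths):
    """Link an unresolved 'topic' (e.g. a fronted wh-phrase) into the
    first open slot reached by one of the candidate paths.

    `fstruct` is a proto f-structure as a nested dict; `ranked_paths`
    is a list of grammatical-function paths (e.g. ["comp", "obj"]),
    assumed ordered by corpus-estimated probability.
    """
    topic = fstruct.get("topic")
    for path in ranked_paths:
        node, ok = fstruct, True
        for gf in path[:-1]:                 # walk down to the slot's parent
            node = node.get(gf)
            if not isinstance(node, dict):
                ok = False
                break
        if ok and path[-1] in node and node[path[-1]] is None:
            node[path[-1]] = topic           # resolve the dependency here
            return path
    return None

# "What did John say Mary bought __ ?" -- the object slot of the
# embedded clause is open and receives the topic.
fs = {"topic": {"pred": "what"}, "pred": "say",
      "comp": {"pred": "buy", "subj": {"pred": "mary"}, "obj": None}}
print(resolve_ldd(fs, [["obj"], ["comp", "obj"]]))  # ['comp', 'obj']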

    Automatic extraction of large-scale multilingual lexical resources

    In this thesis, I present a methodology for treebank- or parser-based acquisition of lexical resources, in particular subcategorisation frames. The method uses an automatic Lexical Functional Grammar (LFG) f-structure annotation algorithm (Cahill et al., 2002a, 2004a; Burke et al., 2004b) and has been applied to the Penn-II and Penn-III treebanks (Marcus et al., 1994) with a total of about 1.3 million words as well as to (a subset of) the British National Corpus (Burnard, 2002) with about 90 million words. I extract abstract syntactic function-based subcategorisation frames (LFG semantic forms), traditional CFG category-based subcategorisation frames as well as mixed function/category-based frames, with or without preposition information for obliques and particle information for subcategorised particles. The approach distinguishes between active and passive frames, and reflects the effects of long-distance dependencies (LDDs) in the source data structures. Frames are associated with conditional probabilities, facilitating the optimisation of the extracted lexicon for quality or coverage through filtering. In contrast to many other approaches, subcategorisation frame types are not predefined but acquired from the source data. I carried out large-scale evaluations of the complete set of forms extracted against the COMLEX and OALD resources. To my knowledge, this is the largest and most complete evaluation of subcategorisation frames for English. The parser-based system is also evaluated against Korhonen (2002) with a statistically significant improvement over the previous best score. The automatic annotation methodology, as well as the grammar and lexicon extraction techniques for English have been successfully migrated to Spanish, German and Chinese treebanks despite typological differences and variations in treebank encoding. I believe that this approach provides an attractive and efficient multilingual grammar and lexicon development paradigm.
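    Optimising the extracted lexicon "for quality or coverage through filtering" boils down to thresholding the conditional frame probabilities: a higher threshold discards more low-frequency (often noisy) frames. A minimal sketch, where the `lemma -> {frame: probability}` layout, the threshold value and the toy lexicon are all illustrative assumptions:

```python
def filter_lexicon(probs, threshold=0.05):
    """Keep only frames whose conditional probability P(frame | lemma)
    meets the threshold, trading coverage for quality."""
    return {
        lemma: {f: p for f, p in frames.items() if p >= threshold}
        for lemma, frames in probs.items()
    }

lex = {"give": {"[subj,obj,obj2]": 0.60, "[subj,obj]": 0.38, "[subj]": 0.02}}
filtered = filter_lexicon(lex)
print(filtered["give"])  # the rare '[subj]' frame is dropped
```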

    Automated mood boards - Ontology-based semantic image retrieval

    The main goal of this research is to support concept designers' search for inspirational and meaningful images in developing mood boards. Finding the right images has become a well-known challenge as the amount of images stored and shared on the Internet and elsewhere keeps increasing steadily and rapidly. The development of image retrieval technologies, which collect, store and pre-process image information to return relevant images instantly in response to users' needs, has achieved great progress in the last decade. However, the keyword-based content description and query processing techniques for Image Retrieval (IR) currently used have their limitations. Most of these techniques are adapted from the Information Retrieval research, and therefore provide limited capabilities to grasp and exploit conceptualisations due to their inability to handle ambiguity, synonymy, and semantic constraints. Conceptual search (i.e. searching by meaning rather than literal strings) aims to solve the limitations of the keyword-based models. Starting from this point, this thesis investigates the existing IR models, which are oriented to the exploitation of domain knowledge in support of semantic search capabilities, with a focus on the use of lexical ontologies to improve the semantic perspective. It introduces a technique for extracting semantic DNA (SDNA) from textual image annotations and constructing semantic image signatures. The semantic signatures are called semantic chromosomes; they contain semantic information related to the images. Central to the method of constructing semantic signatures is the concept disambiguation technique developed, which identifies the most relevant SDNA by measuring the semantic importance of each word/phrase in the image annotation. In addition, a conceptual model of an ontology-based system for generating visual mood boards is proposed. 
The proposed model, which is adapted from the Vector Space Model, exploits the use of semantic chromosomes in semantic indexing and assessing the semantic similarity of images within a collection.
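    In a Vector Space Model, similarity between two signatures is typically the cosine of the angle between their weight vectors. The sketch below illustrates that standard measure on sparse `{concept: weight}` dicts; the dict representation and the toy signatures are assumptions standing in for the thesis's semantic chromosomes.

```python
import math

def cosine_similarity(sig_a, sig_b):
    """Cosine similarity between two sparse semantic signatures,
    each represented as a {concept: weight} dict."""
    dot = sum(w * sig_b.get(c, 0.0) for c, w in sig_a.items())
    norm_a = math.sqrt(sum(w * w for w in sig_a.values()))
    norm_b = math.sqrt(sum(w * w for w in sig_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

a = {"calm": 0.8, "sea": 0.5}
b = {"calm": 0.6, "sky": 0.7}
print(round(cosine_similarity(a, b), 3))  # only 'calm' overlaps
```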


    The Comlex syntax project

    Developing more shareable resources to support natural language analysis will make it easier and cheaper to create new language processing applications and to support research in computational linguistics. One natural candidate for such a resource is a broad-coverage dictionary, since the work required to create such a dictionary is large but there is general agreement on at least some of the information to be recorded for each word. The Linguistic Data Consortium has begun an effort to create several such lexical resources, under the rubric "COMLEX" (COMmon LEXicon); one of these projects is the COMLEX Syntax Project. The goal of the COMLEX Syntax Project is to create a moderately-broad-coverage shareable dictionary containing the syntactic features of English words, intended for automatic language analysis. We are initially aiming for a dictionary of 35,000 to 40,000 base forms, although this of course may be enlarged if the initial effort is positively received. The dictionary should include detailed syntactic specifications, particularly for subcategorisation; our intent is to provide sufficient detail so that the information required by a number of major English analyzers can be automatically derived from the information we provide. As with other Linguistic Data Consortium resources, our intent is to provide a lexicon available without license constraint to all Consortium members. Finally, our goal is to provide an initial lexicon relatively quickly, within about a year, funding permitting. This implies a certain flexibility, where some of the features will probably be changed and refined as the coding is taking place.
    1. Some COMLEX History
    There is a long history of trying to design shareable or "polytheoretic" lexicons and interchange formats for lexicons. There has also been substantial work on adapting machine-readable versions of conventional dictionaries for automated language analysis using a number of systems. 
It is not our intent to review this work here, but only to indicate how our particular project -- COMLEX Syntax -- got started. The initial impetus was provided by Charles Wayne, the DARPA/SISTO program manager, in discussions at a meeting