17 research outputs found

    Distributed parsing with HPSG grammars

    Unification-based theories of grammar allow different levels of linguistic description to be integrated in the common framework of typed feature structures. Dependencies among the levels are expressed by coreferences. Though highly attractive theoretically, using such codescriptions for analysis creates efficiency problems. We present an approach to a modular use of codescriptions on the syntactic and semantic levels. Grammatical analysis is performed by tightly coupled parsers running in tandem, each using only designated parts of the grammatical description. In the paper we describe the partitioning of grammatical information for the parsers and present results about the performance…
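The unification of typed feature structures that this abstract builds on can be illustrated with a minimal sketch. The dict-based representation and the function name below are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of feature-structure unification: two structures merge
# feature by feature; atomic values must match exactly or unification fails.
# Representation as nested Python dicts is an assumption for illustration.

def unify(a, b):
    """Unify two feature structures; return the merged structure or None on clash."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for feat, val in b.items():
            if feat in out:
                u = unify(out[feat], val)
                if u is None:
                    return None          # incompatible values for a shared feature
                out[feat] = u
            else:
                out[feat] = val          # feature only constrained on one side
        return out
    return a if a == b else None         # atomic values

# Compatible agreement constraints from two descriptions merge:
both = unify({"AGR": {"NUM": "sg"}}, {"AGR": {"NUM": "sg", "PERS": "3"}})
# both == {"AGR": {"NUM": "sg", "PERS": "3"}}
```

A coreference in a grammar corresponds to two paths being forced to share one value, so a clash anywhere (e.g. `NUM: sg` vs. `NUM: pl`) makes the whole analysis fail, which is the efficiency concern the abstract raises.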

    Natural Language Dialogue Service for Appointment Scheduling Agents

    Appointment scheduling is a problem faced daily by many individuals and organizations. Cooperating agent systems have been developed to partially automate this task. In order to extend the circle of participants as far as possible, we advocate the use of natural language transmitted by e-mail. We describe COSMA, a fully implemented German-language server for existing appointment scheduling agent systems. COSMA can cope with multiple dialogues in parallel and accounts for differences in dialogue behaviour between human and machine agents. NL coverage of the sublanguage is achieved through both corpus-based grammar development and the use of message extraction techniques. Comment: 8 or 9 pages, LaTeX; uses aclap.sty, epsf.te

    Natural language semantics and compiler technology

    This paper recommends an approach to the implementation of semantic representation languages (SRLs) which exploits a parallelism between SRLs and programming languages (PLs). The design requirements of SRLs for natural language are similar in their goals to those of PLs. First, in both cases we seek modules in which both the surface representation (print form) and the underlying data structures are important. This requirement highlights the need for general tools allowing the printing and reading of expressions (data structures). Second, these modules need to cooperate with foreign modules, so that the importance of interface technology (compilation) is paramount; and third, both compilers and semantic modules need "inferential" facilities for transforming (simplifying) complex expressions in order to ease subsequent processing. But the most important parallel is the need in both fields for tools which are useful in combination with a variety of concrete languages -- general-purpose parsers, printers, simplifiers (transformation facilities) and compilers. This arises in PL technology from (among other things) the need for experimentation in language design, which is again parallel to the case of SRLs. Using a compiler-based approach, we have implemented NLL, a public-domain software package for computational natural language semantics. Several interfaces exist both for grammar modules and for applications, using a variety of interface technologies, including especially compilation. We review here a variety of NLL applications, focusing on COSMA, an NL interface to a distributed appointment manager.
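The read/print/simplify cycle the abstract describes for SRLs can be sketched in miniature. The s-expression syntax and the conjunction-flattening rule below are assumptions for illustration; NLL's actual representation language and transformations differ:

```python
# Sketch of the three general-purpose tools the abstract names for SRLs:
# a reader (surface form -> data structure), a printer (the inverse), and
# a simplifier (expression transformation). Illustrative, not NLL's API.

def read_sexpr(text):
    """Parse a Lisp-style expression into nested Python lists."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def parse(pos):
        if tokens[pos] == "(":
            out, pos = [], pos + 1
            while tokens[pos] != ")":
                node, pos = parse(pos)
                out.append(node)
            return out, pos + 1
        return tokens[pos], pos + 1
    expr, _ = parse(0)
    return expr

def print_sexpr(expr):
    """Render the data structure back into its surface (print) form."""
    if isinstance(expr, list):
        return "(" + " ".join(print_sexpr(e) for e in expr) + ")"
    return expr

def simplify(expr):
    """One 'inferential' transformation: flatten nested conjunctions,
    e.g. (and p (and q r)) -> (and p q r)."""
    if not isinstance(expr, list):
        return expr
    expr = [simplify(e) for e in expr]
    if expr and expr[0] == "and":
        flat = ["and"]
        for e in expr[1:]:
            if isinstance(e, list) and e and e[0] == "and":
                flat.extend(e[1:])
            else:
                flat.append(e)
        return flat
    return expr
```

Chaining the three, `print_sexpr(simplify(read_sexpr("(and p (and q r))")))` yields `"(and p q r)"`, mirroring how a compiler front end parses, normalizes, and re-emits expressions.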

    A diagnostic tool for German syntax

    In this paper we describe an effort to construct a catalogue of syntactic data exemplifying the major syntactic patterns of German. The purpose of the corpus is to support the diagnosis of errors in the syntactic components of natural language processing (NLP) systems. Two secondary aims are the evaluation of NLP system components and the support of theoretical and empirical work on German syntax. The data consist of artificially and systematically constructed expressions, including also negative (ungrammatical) examples. The data are organized into a relational database and annotated with some basic information about the phenomena illustrated and the internal structure of the sample sentences. The organization of the data supports selected systematic testing of specific areas of syntax, but also serves the purpose of a linguistic database. The paper first gives some general motivation for the necessity of syntactic precision in some areas of NLP and discusses the potential contribution of a syntactic database to the field of component evaluation. The second part of the paper describes the setup and control methods applied in the construction of the sentence suite and the annotations to the examples. We illustrate the approach with the example of verbal government. The section also contains a description of the abstract data model, the design of the database and the query language used to access the data. The final sections compare our work to existing approaches and sketch some future extensions. We invite other research groups to participate in our effort, so that the diagnostic tool can eventually become public domain.
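The relational organization of such a test suite can be sketched with SQLite. The schema, phenomenon labels, and sample sentences below are illustrative assumptions, not the paper's actual data model:

```python
# Sketch of a relational syntactic test suite: sentences annotated with the
# phenomenon they illustrate and a grammaticality flag, queried to pull out
# negative examples for diagnosing a parser. Schema is an assumption.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sentence (
    id INTEGER PRIMARY KEY,
    text TEXT,
    phenomenon TEXT,        -- e.g. 'verbal government'
    grammatical INTEGER     -- 1 = well-formed, 0 = negative example
)""")
con.executemany(
    "INSERT INTO sentence (text, phenomenon, grammatical) VALUES (?, ?, ?)",
    [("Er hilft dem Mann.",  "verbal government", 1),   # helfen governs dative
     ("Er hilft den Mann.",  "verbal government", 0),   # *accusative: negative example
     ("Sie sieht den Mann.", "verbal government", 1)])  # sehen governs accusative

# Diagnosis query: a correct parser should REJECT exactly these sentences.
rows = con.execute(
    "SELECT text FROM sentence WHERE phenomenon = ? AND grammatical = 0",
    ("verbal government",)).fetchall()
```

Running a parser over the selected rows and checking that it rejects each one is the "selected systematic testing of specific areas of syntax" the abstract describes.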

    Interfaces Area: Natural Language and Speech Understanding.

    Abstract: We consider communication between modules in an integrated architecture for Speech and Natural Language (NL), in particular the communication with the semantics module. In an integrated Speech/Language system several components—phonology (intonation), syntax, context model—may express meaning constraints, which the semantics module must flexibly manage and evaluate in order to enable semantic inference. This paper describes an implemented approach in the ASL Project in which nonsemantic modules provide feature-based constraints that are then translated into a meaning representation language. We realize these translator functions in the spirit of federated agents' architectures (Genesereth); this functionality is required in heterogeneous integrated architectures, and is implemented here using compiler technology.
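The translator functions the abstract mentions can be sketched as a mapping from feature-based constraints to meaning-representation clauses. The constraint format, feature names, and output syntax below are all illustrative assumptions, not the ASL Project's implementation:

```python
# Hedged sketch of a translator function: feature/value constraints from a
# nonsemantic module (syntax, intonation, context) are compiled into clauses
# of a meaning representation language. Formats are assumptions.

def translate(constraints):
    """Map (feature, value) pairs to a conjunction of MRL predicates."""
    clauses = []
    for feature, value in constraints:
        if feature == "TENSE":              # from syntax/morphology
            clauses.append(f"time({value})")
        elif feature == "FOCUS":            # from intonation (phonology)
            clauses.append(f"focus({value})")
        else:                               # generic fallback
            clauses.append(f"{feature.lower()}({value})")
    return " & ".join(clauses)
```

For example, `translate([("TENSE", "past"), ("FOCUS", "x1")])` yields `"time(past) & focus(x1)"`: each contributing module keeps its own feature vocabulary, and the translator gives the semantics module a uniform representation to evaluate.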

    A Diagnostic Tool for German Syntax

    In this paper we describe an ongoing effort to construct a catalogue of syntactic data exemplifying the major syntactic patterns of German. The purpose of the corpus is to support the diagnosis of errors in the syntactic components of natural language processing (NLP) systems. Secondary aims are the evaluation of NLP syntax components and support of theoretical and empirical work on German syntax. The data consist of artificially and systematically constructed expressions, including also negative (ungrammatical) examples. The data are organized into a relational database and annotated with some basic information about the phenomena illustrated and the internal structure of the sample sentences. The organization of the data supports selected systematic testing of specific areas of syntax, but also serves the purpose of a linguistic database. The paper first gives some general motivation for the necessity of syntactic precision in some areas of NLP and discusses the potential contribution of a syntactic database to the field of component evaluation. The second part of the paper describes the setup and control methods applied in the construction of the sentence suite and the annotations to the examples. We illustrate the approach with examples from verbal government and sentential coordination. This section also contains a description…