
    Acquiring Word-Meaning Mappings for Natural Language Interfaces

    This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted Examples), that acquires a semantic lexicon from a corpus of sentences paired with semantic representations. The lexicon learned consists of phrases paired with meaning representations. WOLFIE is part of an integrated system that learns to transform sentences into representations such as logical database queries. Experimental results are presented demonstrating WOLFIE's ability to learn useful lexicons for a database interface in four different natural languages. The usefulness of the lexicons learned by WOLFIE is compared to that of lexicons acquired by a similar system, with results favorable to WOLFIE. A second set of experiments demonstrates WOLFIE's ability to scale to larger and more difficult, albeit artificially generated, corpora. In natural language acquisition, it is difficult to gather the annotated data needed for supervised learning, whereas unannotated data is fairly plentiful. Active learning methods attempt to select for annotation and training only the most informative examples, and are therefore potentially very useful in natural language applications. However, most results to date for active learning have considered only standard classification tasks. To reduce annotation effort while maintaining accuracy, we apply active learning to semantic lexicon acquisition. We show that active learning can significantly reduce the number of annotated examples required to achieve a given level of performance.
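
    To make the active-learning idea concrete, the following is a minimal sketch of generic pool-based uncertainty sampling: select for annotation the unlabeled sentences the current model is least confident about. The `confidence` function, batch size, and selection criterion are assumptions for illustration, not WOLFIE's actual measure.

```python
# Illustrative sketch only: generic pool-based active learning with
# uncertainty sampling. The confidence scorer is a placeholder assumption,
# not the selection criterion used by WOLFIE.
from typing import Callable, List


def select_for_annotation(
    unlabeled_pool: List[str],
    confidence: Callable[[str], float],  # hypothetical model confidence in [0, 1]
    batch_size: int = 10,
) -> List[str]:
    """Pick the sentences the current model is least confident about."""
    scored = [(confidence(sentence), sentence) for sentence in unlabeled_pool]
    scored.sort(key=lambda pair: pair[0])  # least confident first
    return [sentence for _, sentence in scored[:batch_size]]

# Usage idea: annotate the selected sentences with semantic representations,
# retrain the lexicon learner on the enlarged training set, and repeat until
# performance plateaus.
```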

    Filling Knowledge Gaps in a Broad-Coverage Machine Translation System

    Knowledge-based machine translation (KBMT) techniques yield high quality in domains with detailed semantic models, limited vocabulary, and controlled input grammar. Scaling up along these dimensions means acquiring large knowledge resources. It also means behaving reasonably when definitive knowledge is not yet available. This paper describes how we can fill various KBMT knowledge gaps, often using robust statistical techniques. We describe quantitative and qualitative results from JAPANGLOSS, a broad-coverage Japanese-English MT system. (Comment: 7 pages, compressed and uuencoded PostScript. To appear: IJCAI-9)
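
    As a rough illustration of "behaving reasonably when definitive knowledge is not yet available", the sketch below backs off from a hand-built lexicon to a statistical resource. The data structures and the frequency-based tie-breaking are assumptions for illustration, not a description of how JAPANGLOSS itself fills knowledge gaps.

```python
# Illustrative sketch only: prefer a curated knowledge-base entry, otherwise
# back off to the statistically most frequent candidate translation.
from collections import Counter
from typing import Dict, Optional


def translate_word(
    word: str,
    kb_lexicon: Dict[str, str],            # curated source -> target entries (assumed)
    bilingual_counts: Dict[str, Counter],   # candidate translations with corpus counts (assumed)
) -> Optional[str]:
    """Return the knowledge-based translation if one exists; otherwise the
    most frequent statistical candidate; otherwise None."""
    if word in kb_lexicon:
        return kb_lexicon[word]
    candidates = bilingual_counts.get(word)
    if candidates:
        return candidates.most_common(1)[0][0]
    return None
```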

    WH-words are not ‘interrogative’ pronouns: the derivation of interrogative interpretations for constituent questions

    I discuss the status of WH-words for interrogative interpretations, and show that the derivation of constituent questions evolves from a specific interplay of syntactic and semantic representations with pragmatics. I argue that WH-pronouns are not ‘interrogative’. Rather, they are underspecified elements; due to this underspecification, WH-words can form a constitutive part not only of interrogative, but also of exclamative and declarative clauses. WH-words introduce a variable of a particular conceptual domain into the semantic representation. Accordingly, they have to be specified for interpretation. Different WH-contexts give rise to different interpretations. In a cross-linguistic overview, I discuss the characteristic elements contributing to the derivation of interrogatives. I argue that specific particles or their phonologically empty counterparts in the head of CP contribute the interrogative aspect. The speech act of ‘asking’ is then carried out via an intonational contour that identifies a question. By default, this intonational contour operates on interrogative sentences; however, other sentence formats – in particular, those of declarative sentences – are possible as well. The distinction between (a) grammatical (syntactic, semantic and phonological) sentence formats for interrogative and declarative sentences, and (b) intonational contours serving the discrimination of speech acts like questions and assertions, can be related to psychological and neurological evidence.
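
    The division of labour described in this abstract can be summarized in a toy formalization: the WH-word contributes only an underspecified variable over a conceptual domain, while clause type comes from a (possibly empty) C-head particle and the speech act from the intonational contour. The data structures and labels below are assumptions for illustration, not the author's formal apparatus.

```python
# Toy formalization only: WH-word = underspecified variable over a conceptual
# domain; interpretation is derived from the C-head, not from the WH-word.
from dataclasses import dataclass


@dataclass
class WHWord:
    form: str     # e.g. "who", "what", "where"
    domain: str   # conceptual domain of the variable: PERSON, THING, PLACE, ...


@dataclass
class Clause:
    wh: WHWord
    c_head: str   # "Q" particle (or its empty counterpart), "EXCL", or "" (declarative)
    contour: str  # e.g. "rising" intonation identifying a question speech act


def clause_type(clause: Clause) -> str:
    """Derive the clause type from the C-head rather than from the WH-word."""
    if clause.c_head == "Q":
        return "interrogative"
    if clause.c_head == "EXCL":
        return "exclamative"
    return "declarative"
```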

    Grammatical properties of pronouns and their representation: an exposition

    This volume brings together a cross-section of recent research on the grammar and representation of pronouns, centering around the typology of pronominal paradigms, the generation of syntactic and semantic representations for constructions containing pronouns, and the neurological underpinnings of linguistic distinctions that are relevant for the production and interpretation of these constructions. In this introductory chapter we first give an exposition of our topic (section 2). Taking the interpretation of pronouns as a starting point, we discuss the basic parameters of pronominal representations and draw a general picture of how morphological, semantic, discourse-pragmatic and syntactic aspects come together. In section 3, we sketch the different domains of research that are concerned with these phenomena and the particular questions they are interested in, and show how the papers in the present volume fit into the picture. Section 4 gives summaries of the individual papers and a short synopsis of their main points of convergence.

    Natural language processing

    Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems, such as text summarization, information extraction, and information retrieval, including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.