
    Semantics of nouns and nominal number

    In the present paper, I will discuss the semantic structure of nouns and nominal number markers. In particular, I will discuss the question of whether it is possible to account for the syntactic and semantic formation of nominals in a parallel way; that is, I will try to give a compositional account of nominal semantics. The framework that I will use is "two-level semantics". The semantic representations and their type-theoretical basis will account for general cross-linguistic characteristics of nouns and nominal number and will show interdependencies between noun classes, number marking and cardinal constructions. While the analysis will give a unified account of bare nouns (like dog / water), it will distinguish between the different kinds of nominal terms (like a dog / dogs / water). Following the proposal, the semantic operations underlying the formation of the semantic representation (SR) are basically the same for DPs as for CPs. Hence, from such an analysis, independent semantic arguments can be derived for a structural parallelism of nominals and sentences - that is, for the "sentential aspect" of noun phrases. I will first give a sketch of the theoretical background. I will then discuss the cross-linguistic combinatorial potential of nominal constructions, that is, the potential of nouns and number markers to combine with other elements and form complex expressions. This will lead to a general type-theoretical classification for the elements in question. In the next step, I will model the referential potential of nominal constructions. Together with the combinatorial potential, this will give us semantic representations for the basic elements involved in nominal constructions. In an overview, I will summarize our modeling of nouns and nominal number. I will then discuss in an outlook the "sentential aspect" of noun phrases.
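    The compositional treatment of bare nouns, plural marking, and cardinals sketched in this abstract can be illustrated with a toy model. The sketch below is not the author's two-level formalism; it is a minimal, Link-style illustration in which a singular count noun denotes a set of atoms, the plural marker closes that set under sum formation, and a cardinal restricts the resulting sums by size. All names (`DOGS`, `noun`, `plural`, `cardinal`) are invented for illustration.

```python
from itertools import combinations

# Atomic individuals in a toy model (hypothetical names).
DOGS = {"d1", "d2", "d3"}

def noun(atoms):
    """Denotation of a singular count noun: the set of its atoms,
    wrapped as singleton sums."""
    return {frozenset([a]) for a in atoms}

def plural(pred):
    """Number marker PL: close the noun denotation under sum formation,
    keeping only non-atomic sums (a Link-style simplification)."""
    atoms = {a for s in pred for a in s}
    return {frozenset(c) for n in range(2, len(atoms) + 1)
            for c in combinations(sorted(atoms), n)}

def cardinal(n, pred_pl):
    """Cardinal modifier: restrict plural sums to those of cardinality n."""
    return {s for s in pred_pl if len(s) == n}

dogs = plural(noun(DOGS))       # denotation of "dogs": all sums of 2+ dogs
two_dogs = cardinal(2, dogs)    # denotation of "two dogs"
print(len(dogs), len(two_dogs))
```

    With three atoms, "dogs" denotes the four possible non-atomic sums and "two dogs" the three pairs, showing how noun, number marker, and cardinal each contribute one compositional step.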

    Crowdsourcing Question-Answer Meaning Representations

    We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs. We also develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A detailed qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets (including PropBank, NomBank, QA-SRL, and AMR) along with many previously under-resourced ones, including implicit arguments and relations. The QAMR data and annotation code are made publicly available to enable future work on how best to model these complex phenomena. (8 pages, 6 figures, 2 tables.)
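    The core idea, representing predicate-argument structure as question-answer pairs, can be sketched with a small data structure. The sentence and pairs below are invented for illustration and are not drawn from the released dataset; they only show the shape of a QAMR-style annotation, where each pair picks out one argument of a predicate.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str

# Illustrative QAMR-style annotation (sentence and pairs are invented).
sentence = "The company sold its subsidiary to investors in 2010."
qamr = [
    QAPair("Who sold something?", "The company"),
    QAPair("What was sold?", "its subsidiary"),
    QAPair("Who was something sold to?", "investors"),
    QAPair("When was something sold?", "in 2010"),
]

# Each pair encodes one predicate-argument relationship of "sold",
# with the answer a span of the original sentence.
for qa in qamr:
    print(f"{qa.question} -> {qa.answer}")
```

    Note how the answers are all spans of the sentence, which is what lets untrained crowd workers produce them without learning a formal role inventory.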

    What's in a compound? Review article on Lieber and Štekauer (eds) 2009. 'The Oxford Handbook of Compounding'

    The Oxford Handbook of Compounding surveys a variety of theoretical and descriptive issues, presenting overviews of compounding in a number of frameworks and sketches of compounding in a number of languages. Much of the book deals with Germanic noun–noun compounding. I take up some of the theoretical questions raised by such constructions, in particular, the notion of attributive modification in noun-headed compounds. I focus on two issues. The first is the semantic relation between the head noun and its nominal modifier. Several authors repeat the argument that there is a small(-ish) fixed number of general semantic relations in noun–noun compounds (‘Lees's solution’), but I argue that the correct way to look at such compounds is what I call ‘Downing's solution’, in which we assume that the relation is specified pragmatically, and hence could be any relation at all. The second issue is the way that adjectives modify nouns inside compounds. Although there are languages in which compounded adjectives modify just as they do in phrases (Chukchee, Arleplog Swedish), in general the adjective has a classifier role and not that of a compositional attributive modifier. Thus, even if an English (or German) adjective–noun compound looks compositional, it isn't.

    A generic tool to generate a lexicon for NLP from Lexicon-Grammar tables

    Lexicon-Grammar tables constitute a large-coverage syntactic lexicon, but they cannot be directly used in Natural Language Processing (NLP) applications because they sometimes rely on implicit information. In this paper, we introduce LGExtract, a generic tool for generating a syntactic lexicon for NLP from the Lexicon-Grammar tables. It is based on a global table that contains the otherwise undefined information and on a single extraction script including all operations to be performed for all tables. We also present an experiment that has been conducted to generate a new lexicon of French verbs and predicative nouns.
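    The extraction step can be pictured with a schematic example. Lexicon-Grammar tables encode syntactic properties of lexical entries as +/- values per column; the sketch below (with hypothetical column names and values, not LGExtract's actual format or script language) shows how such a row can be turned into an explicit feature dictionary usable by an NLP lexicon.

```python
import csv
import io

# Schematic Lexicon-Grammar-style table. Column names and +/- values
# are invented for illustration; real tables are far larger.
TABLE = """\
entry;N0 =: Nhum;N1 =: Nhum;Ppv =: se
admirer;+;+;-
agir;+;-;+
"""

def extract(table_text):
    """Turn each +/- row into an explicit feature dictionary,
    roughly mimicking what an extraction script produces."""
    rows = list(csv.reader(io.StringIO(table_text), delimiter=";"))
    header = rows[0][1:]
    lexicon = {}
    for row in rows[1:]:
        lexicon[row[0]] = {prop: val == "+" for prop, val in zip(header, row[1:])}
    return lexicon

lex = extract(TABLE)
print(lex["admirer"])
```

    The point of a single generic script plus a global table is that per-table quirks live in data, not in per-table code.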

    A Type-coherent, Expressive Representation as an Initial Step to Language Understanding

    A growing interest in tasks involving language understanding by the NLP community has led to the need for effective semantic parsing and inference. Modern NLP systems use semantic representations that do not quite fulfill the nuanced needs for language understanding: adequately modeling language semantics, enabling general inferences, and being accurately recoverable. This document describes underspecified logical forms (ULF) for Episodic Logic (EL), which is an initial form for a semantic representation that balances these needs. ULFs fully resolve the semantic type structure while leaving issues such as quantifier scope, word sense, and anaphora unresolved; they provide a starting point for further resolution into EL, and enable certain structural inferences without further resolution. This document also presents preliminary results of creating a hand-annotated corpus of ULFs for the purpose of training a precise ULF parser, showing a three-person pairwise interannotator agreement of 0.88 on confident annotations. We hypothesize that a divide-and-conquer approach to semantic parsing starting with derivation of ULFs will lead to semantic analyses that do justice to subtle aspects of linguistic meaning, and will enable construction of more accurate semantic parsers. (Accepted for publication at the 13th International Conference on Computational Semantics, IWCS 2019.)
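    The reported three-person pairwise interannotator agreement can be understood as the mean observed agreement over all annotator pairs. The sketch below uses invented labels on five items, not the paper's ULF annotations or its chance-corrected metric, just to make the computation concrete.

```python
from itertools import combinations

# Hypothetical annotations: one label per item from each of three annotators.
annotations = {
    "ann1": ["A", "B", "A", "C", "A"],
    "ann2": ["A", "B", "B", "C", "A"],
    "ann3": ["A", "B", "A", "C", "B"],
}

def pairwise_agreement(anns):
    """Mean proportion of items on which each annotator pair agrees."""
    scores = []
    for a, b in combinations(anns.values(), 2):
        scores.append(sum(x == y for x, y in zip(a, b)) / len(a))
    return sum(scores) / len(scores)

# Pairs agree on 4/5, 4/5, and 3/5 items, so the mean is 11/15.
print(round(pairwise_agreement(annotations), 3))
```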

    Token-based typology and word order entropy: A study based on Universal Dependencies

    The present paper discusses the benefits and challenges of token-based typology, which takes into account the frequencies of words and constructions in language use. This approach makes it possible to introduce new criteria for language classification, which would be difficult or impossible to achieve with the traditional, type-based approach. This point is illustrated by several quantitative studies of word order variation, which can be measured as entropy at different levels of granularity. I argue that this variation can be explained by general functional mechanisms and pressures, which manifest themselves in language use, such as optimization of processing (including avoidance of ambiguity) and grammaticalization of predictable units occurring in chunks. The case studies are based on multilingual corpora, which have been parsed using the Universal Dependencies annotation scheme.
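    Measuring word order variation as entropy can be sketched in a few lines. The toy observations below (relation label plus whether the dependent precedes or follows its head, with invented counts) stand in for what one would count in a Universal Dependencies treebank; Shannon entropy over the two placements is 0 for a rigid order and 1 bit for maximal variability.

```python
from collections import Counter
from math import log2

# Toy dependency observations: (relation, position of dependent
# relative to its head). Counts are invented for illustration.
observations = [
    ("obj", "after"), ("obj", "after"), ("obj", "before"), ("obj", "after"),
    ("nsubj", "before"), ("nsubj", "before"), ("nsubj", "before"),
]

def order_entropy(obs, relation):
    """Shannon entropy (bits) of dependent placement for one relation."""
    counts = Counter(pos for rel, pos in obs if rel == relation)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(round(order_entropy(observations, "obj"), 3))    # variable order
print(round(order_entropy(observations, "nsubj"), 3))  # rigid order
```

    Aggregating such per-relation entropies across corpora is what allows a token-based classification that a type-based survey of dominant orders would miss.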