2 research outputs found

    Linguists Who Use Probabilistic Models Love Them: Quantification in Functional Distributional Semantics

    Functional Distributional Semantics provides a computationally tractable framework for learning truth-conditional semantics from a corpus. Previous work in this framework has provided a probabilistic version of first-order logic, recasting quantification as Bayesian inference. In this paper, I show how the previous formulation gives trivial truth values when a precise quantifier is used with vague predicates. I propose an improved account, avoiding this problem by treating a vague predicate as a distribution over precise predicates. I connect this account to recent work in the Rational Speech Acts framework on modelling generic quantification, and I extend this to modelling donkey sentences. Finally, I explain how the generic quantifier can be both pragmatically complex and yet computationally simpler than precise quantifiers.
    Comment: To be published in Proceedings of Probability and Meaning 202
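    As a rough illustration of the idea described above (not the paper's actual Functional Distributional Semantics model), the sketch below treats a vague predicate ("tall") as a distribution over precise predicates, here thresholds on a scale, and evaluates a precise quantifier ("most") by marginalising over those thresholds rather than committing to a single cut-off. The scale, prior, and numbers are all assumptions made for the example.

```python
import numpy as np

# Toy illustration only: a vague predicate is modelled as a distribution over
# precise predicates (thresholds), and the quantified statement is evaluated
# under each precise sense, then averaged under the prior.

rng = np.random.default_rng(0)

heights = rng.normal(loc=170.0, scale=10.0, size=1000)   # hypothetical entity heights
thresholds = np.linspace(160.0, 185.0, 50)               # candidate precise senses of "tall"
prior = np.ones_like(thresholds) / len(thresholds)       # uniform prior over precise senses

# For each precise sense of "tall": is "most entities are tall" true?
# ("most" is read here as "more than half", purely for illustration.)
truth_per_threshold = (heights[None, :] > thresholds[:, None]).mean(axis=1) > 0.5

# Probability that the quantified statement is true, marginalising over the
# vague predicate's precise senses; the result is graded rather than trivial.
p_true = float(np.dot(prior, truth_per_threshold))
print(f"P('most entities are tall') = {p_true:.2f}")
```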

    Formalising and specifying underquantification

    This paper argues that all subject noun phrases can be given a quantified formalisation in terms of the intersection between their denotation set and the denotation set of their verbal predicate. The majority of subject noun phrases, however, are only implicitly quantified, and the task of retrieving the most plausible quantifier for a given NP is non-trivial. We propose a formalisation which captures the underspecification of the quantifier in subject NPs, and we show that this formalisation is widely applicable, including in statements involving kinds. We then present a baseline for a quantification resolution system using syntactic features as the basis for classification. Although the syntactic baseline provides a respectable 78% precision, our error analysis shows that obtaining true performance on the task requires information beyond syntax.
    1 Quantification resolution
    Most subject noun phrases in English are not explicitly quantified. Still, humans are able to give them quantificational interpretations in context:
    1. Cats are mammals = All cats...
    2. Cats have four legs = Most cats...
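    A minimal sketch of what a classification baseline over syntactic features of the subject NP might look like is given below. The feature names, training examples, labels, and the scikit-learn pipeline are assumptions for illustration, not the authors' actual feature set or system.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: syntactic features of a subject NP paired with
# the quantifier a human reader would supply ("all", "most", "some", ...).
train_features = [
    {"determiner": "none", "head_number": "plural", "copular_predicate": True},   # "Cats are mammals"
    {"determiner": "none", "head_number": "plural", "copular_predicate": False},  # "Cats have four legs"
    {"determiner": "some", "head_number": "plural", "copular_predicate": False},  # "Some cats purr"
]
train_labels = ["all", "most", "some"]

# A simple classifier over one-hot encoded syntactic features, standing in for
# a syntactic baseline of the kind described in the abstract.
model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression(max_iter=1000))
model.fit(train_features, train_labels)

# Predict a quantifier for a new, implicitly quantified subject NP.
test_np = {"determiner": "none", "head_number": "plural", "copular_predicate": True}
print(model.predict([test_np])[0])
```

    Because examples 1 and 2 in the abstract share essentially the same syntax, a purely syntactic classifier of this kind cannot reliably separate them, which is consistent with the error analysis's conclusion that information beyond syntax is needed.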