
    Bayesian Inference Semantics: A Modelling System and A Test Suite

    We present BIS, a Bayesian Inference Semantics, for probabilistic reasoning in natural language. The current system is based on the framework of Bernardy et al. (2018), but departs from it in important respects. BIS makes use of Bayesian learning for inferring a hypothesis from premises. This involves estimating the probability of the hypothesis, given the data supplied by the premises of an argument. It uses a syntactic parser to generate typed syntactic structures that serve as input to a model generation system. Sentences are interpreted compositionally as probabilistic programs, and the corresponding truth values are estimated using sampling methods. BIS successfully deals with various probabilistic semantic phenomena, including frequency adverbs, generalised quantifiers, generics, and vague predicates. It performs well on a number of interesting probabilistic reasoning tasks. It also sustains most classically valid inferences (instantiation, de Morgan's laws, etc.). To test BIS we have built an experimental test suite with examples of a range of probabilistic and classical inference patterns.
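
    To make the sampling step concrete, the following is a minimal Python sketch of this kind of estimation, under an assumed toy one-variable world model; the prior, the premise, and the hypothesis predicates are hypothetical illustrations, not the system's actual representations. It conditions on the premise by rejection sampling and estimates the probability of the hypothesis from the retained samples.

    import random

    # Minimal sketch (not the authors' code): estimate P(hypothesis | premise)
    # by rejection sampling over a toy prior. The world model and both
    # predicates are hypothetical examples.

    def sample_world():
        # Prior over a single individual's height in centimetres.
        return {"height": random.gauss(170, 10)}

    def premise(w):      # e.g. "the person is at least 175 cm tall"
        return w["height"] >= 175

    def hypothesis(w):   # e.g. "the person is tall", with a fixed 180 cm cutoff
        return w["height"] >= 180

    def estimate(n=100_000):
        # Keep only the sampled worlds that satisfy the premise, then
        # measure how often the hypothesis holds among them.
        kept = [w for w in (sample_world() for _ in range(n)) if premise(w)]
        return sum(hypothesis(w) for w in kept) / len(kept)

    print(f"P(hypothesis | premise) ~ {estimate():.3f}")

    In BIS itself the probabilistic programs are generated compositionally from parsed syntactic structures; the sketch only shows the inference pattern of estimating the probability of a hypothesis given premises by sampling.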

    Bayesian Inference Semantics for Natural Language

    In this chapter we present Bayesian Inference Semantics (BIS). This system assigns probability conditions to inferences, and it defines functions for the typed constituents of sentences that generate these conditions compositionally. This framework permits us to capture vagueness through probability distributions for predicates, and for the sentential assertions constructed from them. Vagueness is, then, a core property of expressions in our account. This allows us to provide natural representations of scalar adjectives and vague classifier terms, which are problematic for classical semantic theories. Using probability distributions over the definitions of predicates also permits us to handle the sorites paradox in a straightforward way. We sustain the fuzzy boundaries of classifiers through these distributions, without invoking sharp borders between objects to which classifier terms apply and those to which they do not.
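
    As an informal illustration of this idea (not the chapter's formalism), the Python sketch below models a vague scalar predicate by drawing its cutoff from a probability distribution; the normal threshold and its parameters are assumptions chosen for the example. The predicate's probability of applying then changes gradually with the underlying scale, so no sharp border separates the objects it applies to from those it does not.

    import random

    # Minimal sketch (not the chapter's formalism): "tall(x)" is true in a
    # sampled interpretation iff x's height exceeds a threshold drawn from a
    # normal distribution, so its probability of truth varies smoothly with
    # height. The mean and spread of the threshold are assumed values.

    def p_tall(height_cm, n=50_000, mu=180, sigma=6):
        hits = sum(height_cm > random.gauss(mu, sigma) for _ in range(n))
        return hits / n

    for h in (170, 175, 180, 185, 190):
        print(f"P(tall | height={h} cm) ~ {p_tall(h):.2f}")

    Stepping through heights one centimetre at a time never crosses a sharp boundary; each step changes the probability of "tall" only slightly, which is the behaviour that defuses sorites-style reasoning.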