Semantic transfer in Verbmobil
This paper is a detailed discussion of semantic transfer in the context of the Verbmobil Machine Translation project. The use of semantic transfer as a translation mechanism is introduced and justified by comparison with alternative approaches. Some criteria for the evaluation of transfer frameworks are discussed, and three different approaches to the representation of translation rules or equivalences are compared. This is followed by a discussion of the control of transfer-rule application and of interaction with a domain description and inference component.
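The core idea of semantic transfer is that translation rules map source-language semantic predicates to target-language ones, rather than operating on surface strings. A minimal sketch, with invented predicates and a toy rule table (the actual Verbmobil rule formalisms are richer):

```python
# Toy semantic transfer: rules map (predicate, arity) pairs in the
# source-language semantic representation to target-language predicates.
# All predicate names and the rule format are invented for this sketch.
TRANSFER_RULES = {
    ("gefallen", 2): ("like", 2),        # German 'gefallen' -> English 'like'
    ("termin", 1): ("appointment", 1),   # 'Termin' -> 'appointment'
}

def transfer(predicates):
    """Apply transfer rules to a list of (name, arity, args) predicates."""
    output = []
    for name, arity, args in predicates:
        if (name, arity) in TRANSFER_RULES:
            new_name, new_arity = TRANSFER_RULES[(name, arity)]
            # 'gefallen' realises its experiencer/stimulus arguments in the
            # opposite order to 'like', so the rule also swaps arguments.
            new_args = args[::-1] if name == "gefallen" else args
            output.append((new_name, new_arity, new_args))
        else:
            output.append((name, arity, args))  # pass through unchanged
    return output

source = [("gefallen", 2, ("x", "y")), ("termin", 1, ("x",))]
print(transfer(source))  # -> [('like', 2, ('y', 'x')), ('appointment', 1, ('x',))]
```

The argument swap illustrates why transfer works at the semantic level: the equivalence between 'gefallen' and 'like' is a structural one that string-level rules cannot state cleanly.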
Pragmatics and word meaning
In this paper, we explore the interaction between lexical semantics and pragmatics. We argue that linguistic processing is informationally encapsulated and utilizes relatively simple 'taxonomic' lexical semantic knowledge. On this basis, defeasible lexical generalisations deliver defeasible parts of logical form. In contrast, pragmatic inference is open-ended and involves arbitrary real-world knowledge. Two axioms specify when pragmatic defaults override lexical ones. We demonstrate that modelling this interaction allows us to achieve a more refined interpretation of words in a discourse context than either the lexicon or pragmatics could do on their own.
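The override behaviour the abstract describes can be caricatured as a priority ordering between two sets of defaults. A minimal sketch, with invented sense labels and a deliberately simplified one-word context:

```python
# Sketch (invented names and senses) of defeasible defaults: a lexical
# default supplies an interpretation unless a conflicting pragmatic
# inference, drawn from the discourse context, overrides it.
LEXICAL_DEFAULTS = {"bake": "create-by-baking"}  # e.g. 'bake a cake'

def interpret(word, pragmatic_defaults):
    """Pragmatic defaults (from context) take priority over lexical ones."""
    if word in pragmatic_defaults:
        return pragmatic_defaults[word]
    return LEXICAL_DEFAULTS.get(word)

# Out of context, 'bake' gets its lexical default (creation sense):
print(interpret("bake", {}))  # -> create-by-baking
# In a context about reheating food, a pragmatic default overrides it:
print(interpret("bake", {"bake": "heat-existing-object"}))  # -> heat-existing-object
```

The real account states the override conditions as logical axioms over defeasible rules; the dictionary lookup above only illustrates the resulting priority ordering.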
Variational Inference for Logical Inference
Functional Distributional Semantics is a framework that aims to learn, from text, semantic representations which can be interpreted in terms of truth. Here we make two contributions to this framework. The first is to show how a type of logical inference can be performed by evaluating conditional probabilities. The second is to make these calculations tractable by means of a variational approximation. This approximation also enables faster convergence during training, allowing us to close the gap with state-of-the-art vector space models when evaluating on semantic similarity. We demonstrate promising performance on two tasks.
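The first contribution, logical inference via conditional probabilities, can be illustrated with a toy joint distribution over the truth values of two predicates (the numbers below are invented; the framework learns such distributions from text):

```python
# Toy illustration: the degree to which 'animal' follows from 'dog' is
# P(animal = true | dog = true) under a joint distribution over the
# truth values of the two predicates. All probabilities are invented.
joint = {
    (1, 1): 0.2,   # dog and animal
    (1, 0): 0.0,   # dog but not animal (impossible)
    (0, 1): 0.3,   # animal but not dog
    (0, 0): 0.5,   # neither
}

def entailment_score(joint):
    """P(hypothesis = 1 | premise = 1) under the joint distribution."""
    p_premise = sum(p for (prem, _), p in joint.items() if prem == 1)
    return joint[(1, 1)] / p_premise

print(entailment_score(joint))  # -> 1.0: 'dog' probabilistically entails 'animal'
```

A score of 1 recovers classical entailment as a limiting case; intermediate scores grade weaker inferences, which is what makes the variational approximation worthwhile at scale.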
Generating Referring Expressions in Open Domains
We present an algorithm for generating referring expressions in open domains. Existing algorithms work at the semantic level and assume the availability of a classification for attributes, which is only feasible for restricted domains. Our alternative works at the realisation level, relies on WordNet synonym and antonym sets, and gives equivalent results on the examples cited in the literature and improved results for examples that prior approaches cannot handle. We believe that ours is also the first algorithm that allows for the incremental incorporation of relations. We present a novel corpus evaluation using referring expressions from the Penn Wall Street Journal Treebank.
Semantic Composition via Probabilistic Model Theory
Semantic composition remains an open problem for vector space models of semantics. In this paper, we explain how the probabilistic graphical model used in the framework of Functional Distributional Semantics can be interpreted as a probabilistic version of model theory. Building on this, we explain how various semantic phenomena can be recast in terms of conditional probabilities in the graphical model. This connection between formal semantics and machine learning is helpful in both directions: it gives us an explicit mechanism for modelling context-dependent meanings (a challenge for formal semantics), and also gives us well-motivated techniques for composing distributed representations (a challenge for distributional semantics). We present results on two datasets that go beyond word similarity, showing how these semantically-motivated techniques improve on the performance of vector models.
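The "conditional probabilities in the graphical model" idea can be illustrated with a toy probabilistic model: entities have a prior, each predicate has a probability of being true of each entity, and composition conditions the entity distribution on all predicates observed in context. All entities and numbers below are invented:

```python
# Toy probabilistic model theory: conditioning on context predicates
# shifts the distribution over entities, modelling context-dependent
# meaning. Entities, predicates, and probabilities are all invented.
prior = {"e_river": 0.5, "e_firm": 0.5}

# P(predicate is true | entity)
truth = {
    "bank":  {"e_river": 0.8, "e_firm": 0.9},
    "steep": {"e_river": 0.7, "e_firm": 0.1},
}

def posterior(predicates):
    """P(entity | all predicates true), by Bayes' rule."""
    scores = {}
    for entity, p in prior.items():
        for pred in predicates:
            p *= truth[pred][entity]
        scores[entity] = p
    z = sum(scores.values())
    return {e: s / z for e, s in scores.items()}

print(posterior(["bank"]))           # roughly even: 'bank' alone is ambiguous
print(posterior(["bank", "steep"]))  # conditioning on 'steep' favours the riverbank
```

This is the mechanism for context-dependent meaning in miniature: the "meaning" of 'bank' in context is not a fixed vector but a conditional distribution that other predicates reshape.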
- …