Treating Coordination with Datalog Grammars
In previous work we studied a new type of DCGs, Datalog grammars, which are
inspired by database theory. Their efficiency was shown to be better than that
of their DCG counterparts under (terminating) OLDT-resolution. In this article
we motivate a variant of Datalog grammars which allows a meta-grammatical
treatment of coordination. This treatment improves in some respects over
previous work on coordination in logic grammars, although more research is
needed to test it in other respects.
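The Datalog-grammar idea the abstract alludes to can be sketched concretely: grammar rules become Datalog clauses over string positions, and recognition is a least-fixpoint (bottom-up) computation, which is what makes tabled (OLDT) resolution terminate. The toy grammar, lexicon, and coordination rule below are invented for illustration and are not the paper's actual system.

```python
def recognize(words, rules, lexicon, start="s"):
    """Bottom-up fixpoint over Datalog-style facts (cat, i, j):
    category `cat` spans words[i:j]."""
    n = len(words)
    # Lexical facts: each word contributes its categories over (i, i+1).
    facts = {(c, i, i + 1) for i, w in enumerate(words) for c in lexicon[w]}
    changed = True
    while changed:                        # iterate to the least fixpoint
        changed = False
        for head, (b, c) in rules:        # binary rule: head -> b c
            for (x, i, j) in list(facts):
                if x != b:
                    continue
                for (y, j2, k) in list(facts):
                    if y == c and j2 == j and (head, i, k) not in facts:
                        facts.add((head, i, k))
                        changed = True
    return (start, 0, n) in facts

# Toy grammar with NP coordination handled by an ordinary rule:
#   s -> np vp,  np -> np conjnp,  conjnp -> conj np
rules = [("s", ("np", "vp")), ("np", ("np", "conjnp")), ("conjnp", ("conj", "np"))]
lexicon = {"alice": {"np"}, "bob": {"np"}, "and": {"conj"}, "sleep": {"vp"}}
print(recognize("alice and bob sleep".split(), rules, lexicon))  # True
```

Because the fact space is finite (categories times string positions), the fixpoint loop always terminates, mirroring the termination guarantee that tabling gives Datalog grammars.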
Context Update for Lambdas and Vectors
Vector models of language are based on the contextual aspects of words
and how they co-occur in text. Truth conditional models focus on the
logical aspects of language, the denotations of phrases, and their
compositional properties. In the latter approach the denotation of a
sentence determines its truth conditions and can be taken to be a
truth value, a set of possible worlds, a context change
potential, or similar. In this short paper, we develop a vector
semantics for language based on the simply typed lambda calculus. Our
semantics uses techniques familiar from the truth conditional tradition
and is based on a form of dynamic interpretation inspired by
Heim's context updates.
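One standard way to interpret simply typed lambda terms in vector spaces, in the spirit of this abstract (though not its specific dynamic, context-update construction), is to let base-type words denote vectors and function-type words denote linear maps, with beta-reduction realized as matrix-vector multiplication. A minimal sketch, with made-up two-dimensional vectors:

```python
# Sketch only: base-type words are vectors, function-type words are
# matrices, and lambda application is matrix-vector multiplication.
# The vectors and dimensions below are invented for illustration.

def matvec(m, v):
    """Apply a linear map (list of rows) to a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# Hypothetical 2-d "context" space.
dogs = [1.0, 0.2]              # type n: a noun vector
bark = [[0.9, 0.1],            # type n -> s: a matrix, one row per
        [0.0, 1.0]]            # dimension of the sentence space

# Denotation of "dogs bark": apply the verb's linear map to its subject.
sentence = matvec(bark, dogs)
print(sentence)  # approximately [0.92, 0.2]
```

The point of the typed setup is that the composition operation is read off the lambda term, rather than chosen ad hoc for each construction.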
From compositional to systematic semantics
We prove a theorem stating that any semantics can be encoded as a
compositional semantics, which means that, essentially, the standard definition
of compositionality is formally vacuous. We then show that when compositional
semantics is required to be "systematic" (that is, the meaning function cannot
be arbitrary, but must belong to some class), it is possible to distinguish
between compositional and non-compositional semantics. As a result, we believe
that the paper clarifies the concept of compositionality and opens the
possibility of making systematic formal comparisons of different systems of
grammars.
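The encoding trick behind such vacuity results can be made concrete with a simple variant: take the "semantic value" of every subexpression to be the expression itself. Each syntax rule then has a trivial semantic operation (rebuild the whole from the values of the parts), so the semantics is homomorphic, hence compositional by the standard definition, while any arbitrary meaning function is recovered by one final lookup. This is only an illustrative sketch of the idea, not the paper's proof:

```python
# An arbitrary meaning function with no structure at all: meanings are
# random values memoized per expression.
import random

random.seed(0)
_m = {}
def meaning(expr):
    if expr not in _m:
        _m[expr] = random.randint(0, 9)
    return _m[expr]

# "Compositional" value: the expression itself, computed rule-by-rule.
def value(expr):
    if isinstance(expr, tuple):          # combine the parts into the whole
        return tuple(value(c) for c in expr)
    return expr                          # a lexical item denotes itself

expr = ("likes", ("alice",), ("bob",))
assert value(expr) == expr               # homomorphic, hence "compositional"
print(meaning(value(expr)) == meaning(expr))  # True: meaning fully recovered
```

The paper's "systematicity" requirement blocks exactly this move, by restricting which functions may count as semantic values.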
The Grail theorem prover: Type theory for syntax and semantics
As the name suggests, type-logical grammars are a grammar formalism based on
logic and type theory. From the perspective of grammar design, type-logical
grammars develop the syntactic and semantic aspects of linguistic phenomena
hand-in-hand, letting the desired semantics of an expression inform the
syntactic type and vice versa. Prototypical examples of the successful
application of type-logical grammars to the syntax-semantics interface include
coordination, quantifier scope and extraction. This chapter describes the Grail
theorem prover, a series of tools for designing and testing grammars in various
modern type-logical formalisms. All tools described in this chapter are freely
available.
Type-driven semantic interpretation and feature dependencies in R-LFG
Once one has enriched LFG's formal machinery with the linear logic mechanisms
needed for semantic interpretation as proposed by Dalrymple et al., it is
natural to ask whether these make any existing components of LFG redundant. As
Dalrymple and her colleagues note, LFG's f-structure completeness and coherence
constraints fall out as a by-product of the linear logic machinery they propose
for semantic interpretation, thus making those f-structure mechanisms
redundant. Given that linear logic machinery or something like it is
independently needed for semantic interpretation, it seems reasonable to
explore the extent to which it is capable of handling feature structure
constraints as well.
R-LFG represents the extreme position that all linguistically required
feature structure dependencies can be captured by the resource-accounting
machinery of a linear or similar logic independently needed for semantic
interpretation, making LFG's unification machinery redundant. The goal is to
show that LFG linguistic analyses can be expressed as clearly and perspicuously
using the smaller set of mechanisms of R-LFG as they can using the much larger
set of unification-based mechanisms in LFG: if this is the case then we will
have shown that positing these extra f-structure mechanisms is not
linguistically warranted. (30 pages; to appear in the ``Glue Language'' volume
edited by Dalrymple.)
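The resource-accounting intuition behind this abstract, that each argument meaning is a linear resource which the predicate must consume exactly once, can be sketched with simple multiset bookkeeping. Completeness fails when a required resource is missing; coherence fails when a resource is left unconsumed. This is only an illustration of the bookkeeping, not the actual linear-logic proof theory, and the grammatical-function names are just examples:

```python
from collections import Counter

def check(predicate_needs, available):
    """Linear check: required and available resources must match exactly,
    each consumed exactly once."""
    need, have = Counter(predicate_needs), Counter(available)
    if need - have:
        return "incomplete: missing " + str(sorted((need - have).elements()))
    if have - need:
        return "incoherent: unused " + str(sorted((have - need).elements()))
    return "well-formed"

# A transitive verb needs a SUBJ and an OBJ resource.
print(check(["SUBJ", "OBJ"], ["SUBJ", "OBJ"]))         # well-formed
print(check(["SUBJ", "OBJ"], ["SUBJ"]))                # incomplete: missing ['OBJ']
print(check(["SUBJ", "OBJ"], ["SUBJ", "OBJ", "OBL"]))  # incoherent: unused ['OBL']
```

On this view, the completeness and coherence constraints do not need separate f-structure stipulations: they fall out of the requirement that the semantic derivation use every premise exactly once.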