Coping with Uncertainty: Noun Phrase Interpretation and Early Semantic Analysis
A computer program which can "understand" natural language texts must
have both syntactic knowledge about the language concerned and
semantic knowledge of how what is written relates to its internal
representation of the world. It has been a matter of some controversy
how these sources of information can best be integrated to translate
from an input text to a formal meaning representation. The
controversy has largely concerned how much syntactic analysis must be
performed before any semantic analysis can
take place. An extreme position in this debate is that a syntactic
parse tree for a complete sentence must be produced before any
investigation of that sentence's meaning is appropriate. This
position has been criticised by those who see understanding as a
process that takes place gradually as the text is read, rather than
in sudden bursts of activity at the ends of sentences. These people
advocate a model where semantic analysis can operate on fragments of
text before the global syntactic structure is determined - a strategy
which we will call early semantic analysis.
In this thesis, we investigate the implications of early semantic
analysis in the interpretation of noun phrases. One possible approach
is to say that a noun phrase is a self-contained unit and can be
fully interpreted by the time it has been read. Thus it can always be
determined what objects a noun phrase refers to without consulting
much more than the structure of the phrase itself. This approach was
taken in part by Winograd [Winograd 72], who saw the constraint that
a noun phrase have a referent as a valuable aid in resolving local
syntactic ambiguity. Unfortunately, Winograd's work has been
criticised by Ritchie, because it is not always possible to determine
what a noun phrase refers to purely on the basis of local
information. In this thesis, we will go further than this and claim
that, because the meaning of a noun phrase can be affected by so many
factors outside the phrase itself, it makes no sense to talk about
"the referent" as a function of a noun phrase. Instead, the notion
of "referent" is something defined by global issues of structure and
consistency.
Having rejected one approach to the early semantic analysis of noun
phrases, we go on to develop an alternative, which we call
incremental evaluation. The basic idea is that a noun phrase does
provide some information about what it refers to. It should be
possible to represent this partial information and gradually refine it as relevant implications of the context are followed up. Moreover,
the partial information should be available to an inference system,
which, amongst other things, can detect the absence of a referent and
provide the advantages of Winograd's system. In our system, noun
phrase interpretation does take place locally, but the point is that it does not finish there. Instead, the determination of the meaning
of a noun phrase is spread over the subsequent analysis of how it
contributes to the meaning of the text as a whole.
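The incremental-evaluation idea can be sketched in a few lines of Python. This is our own illustration, not the thesis's actual representation: a noun phrase yields a candidate set (the partial information), contextual constraints gradually refine it, and an inference step can detect an empty set, i.e. the absence of a referent.

```python
# Hypothetical sketch of incremental evaluation; all names and the
# toy world model are illustrative, not taken from the thesis.

class PartialReferent:
    def __init__(self, candidates):
        # Partial information: the objects the phrase could so far
        # refer to.
        self.candidates = list(candidates)

    def refine(self, constraint):
        # Narrow the candidates as contextual implications arrive.
        self.candidates = [c for c in self.candidates if constraint(c)]
        return self

    def has_referent(self):
        # An inference system can detect the absence of a referent.
        return bool(self.candidates)

# A toy world model.
world = [
    {"type": "block", "colour": "red", "on": "table"},
    {"type": "block", "colour": "green", "on": "table"},
    {"type": "pyramid", "colour": "red", "on": "block"},
]

# "the red block": local interpretation gives partial information...
ref = PartialReferent(world)
ref.refine(lambda o: o["type"] == "block")
ref.refine(lambda o: o["colour"] == "red")

# ...which later context ("... on the table") refines further.
ref.refine(lambda o: o["on"] == "table")
print(len(ref.candidates), ref.has_referent())  # → 1 True
```

Because each refinement only narrows the candidate list, interpretation can be spread over the subsequent analysis exactly as the abstract describes: it starts locally but does not finish there.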
Incremental Interpretation: Applications, Theory, and Relationship to Dynamic Semantics
Why should computers interpret language incrementally? In recent years
psycholinguistic evidence for incremental interpretation has become more and
more compelling, suggesting that humans perform semantic interpretation before
constituent boundaries, possibly word by word. However, possible computational
applications have received less attention. In this paper we consider various
potential applications, in particular graphical interaction and dialogue. We
then review the theoretical and computational tools available for mapping from
fragments of sentences to fully scoped semantic representations. Finally, we
tease apart the relationship between dynamic semantics and incremental
interpretation.
Comment: Procs. of COLING 94, LaTeX (2.09 preferred), 8 pages
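The fragment-to-meaning idea can be made concrete with a miniature semantics of our own (not the paper's machinery): each word denotes an entity or a function, and a running partial meaning is combined with every incoming word, so some interpretation is available word by word rather than only at the constituent boundary.

```python
# Toy word-by-word interpretation; lexicon and combination rule
# are ours, purely for illustration.

LEX = {
    "john": "john",
    "mary": "mary",
    "likes": lambda subj: lambda obj: f"likes({subj},{obj})",
}

def interpret_incrementally(words):
    # Combine the running partial meaning with each incoming word:
    # whichever side is a function is applied to the other.
    meaning = None
    for w in words:
        den = LEX[w]
        if meaning is None:
            meaning = den
        elif callable(meaning):
            meaning = meaning(den)
        else:
            meaning = den(meaning)
    return meaning

print(interpret_incrementally(["john", "likes", "mary"]))
# → likes(john,mary)
```

After "john likes" the partial meaning is already a function awaiting an object, which is the kind of intermediate representation an incremental application (e.g. graphical interaction) could act on.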
Anaphora and Discourse Structure
We argue in this paper that many common adverbial phrases generally taken to
signal a discourse relation between syntactically connected units within
discourse structure, instead work anaphorically to contribute relational
meaning, with only indirect dependence on discourse structure. This allows a
simpler discourse structure to provide scaffolding for compositional semantics,
and reveals multiple ways in which the relational meaning conveyed by adverbial
connectives can interact with that associated with discourse structure. We
conclude by sketching out a lexicalised grammar for discourse that facilitates
discourse interpretation as a product of compositional rules, anaphor
resolution and inference.
Comment: 45 pages, 17 figures. Revised resubmission to Computational Linguistics
SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks
In this paper, we describe a so-called screening approach for learning robust
processing of spontaneously spoken language. A screening approach is a flat
analysis which uses shallow sequences of category representations for analyzing
an utterance at various syntactic, semantic and dialog levels. Rather than
using a deeply structured symbolic analysis, we use a flat connectionist
analysis. This screening approach aims at supporting speech and language
processing by using (1) data-driven learning and (2) robustness of
connectionist networks. In order to test this approach, we have developed the
SCREEN system which is based on this new robust, learned and flat analysis.
In this paper, we focus on a detailed description of SCREEN's architecture,
the flat syntactic and semantic analysis, the interaction with a speech
recognizer, and a detailed evaluation analysis of the robustness under the
influence of noisy or incomplete input. The main result of this paper is that
flat representations allow more robust processing of spontaneous spoken
language than deeply structured representations. In particular, we show how the
fault-tolerance and learning capability of connectionist networks can support a
flat analysis for providing more robust spoken-language processing within an
overall hybrid symbolic/connectionist framework.
Comment: 51 pages, Postscript. To be published in Journal of Artificial Intelligence Research 6(1), 199
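The contrast between flat and deeply structured analyses can be sketched minimally. The categories and lexicon below are ours, not SCREEN's, and the learned connectionist taggers are replaced by a plain lookup; the point is only that a flat analysis assigns each word a category independently, so one noisy or unknown token degrades gracefully instead of invalidating a whole parse tree.

```python
# Toy flat syntactic tagging; labels are illustrative, not SCREEN's.

SYNTAX = {"i": "PRON", "wanna": "VERB", "book": "VERB",
          "a": "DET", "flight": "NOUN"}

def flat_tag(words, lexicon, unknown="UNK"):
    # Unknown or noisy tokens get a placeholder category instead of
    # causing the analysis of the whole utterance to fail.
    return [lexicon.get(w, unknown) for w in words]

print(flat_tag(["i", "wanna", "uh", "book", "a", "flight"], SYNTAX))
# → ['PRON', 'VERB', 'UNK', 'VERB', 'DET', 'NOUN']
```

In the real system several such category sequences (syntactic, semantic, dialog) run in parallel, and the robustness comes from trained networks rather than a lexicon lookup.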
Papers on predicative constructions: Proceedings of the workshop on secondary predication, October 16-17, 2000, Berlin
This volume presents a collection of papers touching on various issues concerning the syntax and semantics of predicative constructions.
A hot topic in the study of predicative copula constructions, with direct implications for the treatment of be (how many be's do we need?), and wider implications for the theories of predication, event-based semantics and aspect, is the nature and source of the situation argument. Closer examination of copula-less predications is becoming increasingly relevant to all these issues, as is clearly illustrated by the present collection.
Graph Interpolation Grammars: a Rule-based Approach to the Incremental Parsing of Natural Languages
Graph Interpolation Grammars are a declarative formalism with an operational
semantics. Their goal is to emulate salient features of the human parser, and
notably incrementality. The parsing process defined by GIGs incrementally
builds a syntactic representation of a sentence as each successive lexeme is
read. A GIG rule specifies a set of parse configurations that trigger its
application and an operation to perform on a matching configuration. Rules are
partly context-sensitive; furthermore, they are reversible, meaning that their
operations can be undone, which allows the parsing process to be
nondeterministic. These two factors confer enough expressive power to the
formalism for parsing natural languages.
Comment: 41 pages, Postscript only
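The nondeterministic, reversible flavour of such rule application can be sketched as a depth-first search in which "undoing" an operation amounts to discarding a freshly built configuration. The toy grammar, lexicon and configuration test below are ours, not the GIG formalism's; they only illustrate how reversibility enables backtracking over an ambiguous lexeme.

```python
# Toy incremental, backtracking parser; not the actual GIG machinery.

LEXICON = {
    "the": ["D"],
    "dog": ["N"],
    "saw": ["N", "V"],   # ambiguous lexeme: two candidate rules
    "a":   ["D"],
    "cat": ["N"],
}

GOAL = ["D", "N", "V", "D", "N"]   # the configuration we must reach

def parse(cats, lexemes):
    # cats: categories assigned so far (the incremental representation)
    if not lexemes:
        return cats if cats == GOAL else None
    word, rest = lexemes[0], lexemes[1:]
    for cat in LEXICON[word]:                   # each entry acts as a rule
        if GOAL[:len(cats) + 1] == cats + [cat]:  # rule must match the
            result = parse(cats + [cat], rest)    # current configuration
            if result is not None:
                return result
            # cats + [cat] was a fresh list, so "undoing" the operation
            # is simply discarding it and trying the next rule.
    return None

print(parse([], ["the", "dog", "saw", "a", "cat"]))
# → ['D', 'N', 'V', 'D', 'N']
```

On "saw" the parser first tries the noun reading, fails downstream, reverses, and succeeds with the verb reading, all while extending its representation lexeme by lexeme.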
A Transition-Based Directed Acyclic Graph Parser for UCCA
We present the first parser for UCCA, a cross-linguistically applicable
framework for semantic representation, which builds on extensive typological
work and supports rapid annotation. UCCA poses a challenge for existing parsing
techniques, as it exhibits reentrancy (resulting in DAG structures),
discontinuous structures and non-terminal nodes corresponding to complex
semantic units. To our knowledge, the conjunction of these formal properties is
not supported by any existing parser. Our transition-based parser, which uses a
novel transition set and features based on bidirectional LSTMs, has value not
just for UCCA parsing: its ability to handle more general graph structures can
inform the development of parsers for other semantic DAG structures, and in
languages that frequently use discontinuous structures.
Comment: 16 pages; Accepted as long paper at ACL201
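The key formal point, reentrancy, can be sketched with a minimal transition system of our own devising (not the paper's actual transition set, and with the bidirectional-LSTM classifier replaced by a scripted oracle): because an edge transition leaves the child on the stack, a node can later acquire a second parent, so the output is a DAG rather than a tree.

```python
# Toy transition-based DAG parser; transitions and example are ours.

def parse(tokens, oracle):
    stack, buffer, edges = [], list(tokens), set()
    while buffer or len(stack) > 1:
        action = oracle(stack, buffer)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "RIGHT-EDGE":
            # parent below, child on top; the child STAYS on the
            # stack, so it may later receive further parents
            edges.add((stack[-2], stack[-1]))
        elif action == "LEFT-EDGE":
            # parent on top, child below
            edges.add((stack[-1], stack[-2]))
        elif action == "REDUCE":
            stack.pop()
    return edges

def scripted(actions):
    it = iter(actions)
    return lambda stack, buffer: next(it)

# "John" ends up with two parents (reentrancy), so the result is a DAG.
edges = parse(
    ["root", "sleeps", "John", "snores"],
    scripted(["SHIFT", "SHIFT", "SHIFT", "RIGHT-EDGE",
              "SHIFT", "LEFT-EDGE", "REDUCE", "REDUCE", "REDUCE"]),
)
print(sorted(edges))  # → [('sleeps', 'John'), ('snores', 'John')]
```

In a real parser the oracle is a trained classifier over stack/buffer features; discontinuity and non-terminal nodes need further transitions beyond this sketch.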