Transition and Parsing State and Incrementality in Dynamic Syntax
PACLIC 21 / Seoul National University, Seoul, Korea / November 1-3, 2007
A Syntactic-Semantic Approach to Incremental Verification
Software verification of evolving systems is challenging mainstream
methodologies and tools. Formal verification techniques often conflict with the
time constraints imposed by change management practices for evolving systems.
Since changes in these systems are often local to restricted parts, an
incremental verification approach could be beneficial.
This paper introduces SiDECAR, a general framework for the definition of
verification procedures, which are made incremental by the framework itself.
Verification procedures are driven by the syntactic structure (defined by a
grammar) of the system and encoded as semantic attributes associated with the
grammar. Incrementality is achieved by coupling the evaluation of semantic
attributes with an incremental parsing technique.
We show the application of SiDECAR to the definition of two verification
procedures: probabilistic verification of reliability requirements and
verification of safety properties.
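The coupling of semantic-attribute evaluation with incremental parsing described above can be sketched in a few lines: each syntax-tree node caches a synthesized attribute, and after a local change only the ancestors of the edited node are re-evaluated. The class and attribute names below are illustrative, not SiDECAR's actual API, and the attribute (a count of "unsafe" leaves) is a stand-in for a real verification property.

```python
# Minimal sketch of incremental synthesized-attribute evaluation:
# after a local edit, only the path from the edited leaf to the
# root is recomputed; all sibling subtrees keep their cached values.
# Node/attr names are illustrative, not SiDECAR's actual API.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.parent = None
        for c in self.children:
            c.parent = self
        self.attr = None  # cached synthesized attribute

    def evaluate(self):
        # toy attribute: number of leaves labelled "unsafe" below this node
        if not self.children:
            self.attr = 1 if self.label == "unsafe" else 0
        else:
            self.attr = sum(c.attr for c in self.children)
        return self.attr

def full_evaluation(root):
    # initial bottom-up pass over the whole tree
    for c in root.children:
        full_evaluation(c)
    root.evaluate()

def incremental_update(leaf, new_label):
    # local change: relabel a leaf, then re-evaluate only its ancestors
    leaf.label = new_label
    node, touched = leaf, 0
    while node is not None:
        node.evaluate()
        touched += 1
        node = node.parent
    return touched  # nodes re-evaluated, not the size of the whole tree

a, b, c = Node("safe"), Node("safe"), Node("safe")
root = Node("S", [Node("P", [a, b]), c])
full_evaluation(root)
assert root.attr == 0
touched = incremental_update(a, "unsafe")
assert root.attr == 1
assert touched == 3  # the leaf plus its two ancestors, out of 5 nodes
```

The point of the sketch is the cost model: an edit local to one subtree costs work proportional to the tree's depth rather than its size, which is what makes re-verification after a change cheap.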
Lexicalized semi-incremental dependency parsing
Even leaving aside concerns of cognitive plausibility,
incremental parsing is appealing for applications such
as speech recognition and machine translation because
it could allow for incorporating syntactic features into
the decoding process without blowing up the search
space. Yet, incremental parsing is often associated
with greedy parsing decisions and intolerable loss of
accuracy. Would the use of lexicalized grammars provide
a new perspective on incremental parsing? In this paper we explore incremental left-to-right dependency parsing using a lexicalized grammatical formalism that works with lexical categories (supertags) and a small set of combinatory operators. A strictly incremental parser would conduct only a single pass over the input, use no lookahead and make only local decisions at every word. We show that such a parser suffers heavy loss of accuracy. Instead, we explore
the utility of a two-pass approach that incrementally
builds a dependency structure by first assigning a supertag
to every input word and then selecting an incremental
operator that allows assembling every supertag with the dependency structure built so far to its left. We instantiate this idea in different models that allow
a trade-off between aspects of full incrementality
and performance, and explore the differences between
these models empirically. Our exploration shows that
a semi-incremental (two-pass), linear-time parser that
employs fixed and limited look-ahead exhibits an appealing
balance between the efficiency advantages of incrementality and the achieved accuracy. Surprisingly, taking local or global decisions matters very little for the accuracy of this linear-time parser. Such a parser fits seamlessly with the currently dominant finite-state decoders for machine translation.
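The two-pass idea above can be sketched roughly as follows: pass one tags every word with a supertag, and pass two moves strictly left to right, choosing for each word an operator that attaches it to the partial structure. The lexicon, supertag inventory, and operators below are toy inventions for illustration and do not reflect the authors' actual models or category set.

```python
# Rough sketch of a two-pass semi-incremental dependency parser:
# pass 1 assigns a supertag to each word; pass 2 walks left-to-right
# and picks an operator attaching each word to the structure so far.
# Lexicon, supertags, and operators are toy inventions.

LEXICON = {            # word -> supertag: where the word's head lies
    "the":   "DEP_RIGHT",   # depends on a word to its right
    "dog":   "HEAD",        # a potential head
    "barks": "ROOT",        # sentence root, takes a left dependent
}

def supertag(words):                      # pass 1
    return [LEXICON[w] for w in words]

def assemble(words, tags):                # pass 2, strictly left-to-right
    heads = [None] * len(words)           # heads[i] = index of word i's head
    pending = []                          # words still awaiting a right head
    for i, tag in enumerate(tags):
        if tag == "DEP_RIGHT":
            pending.append(i)             # operator: defer, head is ahead
        else:
            while pending:                # operator: attach deferred deps
                heads[pending.pop()] = i
            if tag == "ROOT":
                # operator: take the nearest unattached word as left dependent
                for j in range(i - 1, -1, -1):
                    if heads[j] is None:
                        heads[j] = i
                        break
    return heads

words = ["the", "dog", "barks"]
heads = assemble(words, supertag(words))
assert heads == [1, 2, None]   # the -> dog, dog -> barks, barks is root
```

The split matters because pass two only chooses among a small set of attachment operators per word, so its work is linear in sentence length; the combinatorics live in the supertagging step, which is a per-word classification problem.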
Incrementality and the Dynamics of Routines in Dialogue
We propose a novel dual processing model of linguistic routinisation, specifically formulaic expressions (from relatively fixed idioms, all the way through to looser collocational phenomena). This model is formalised using the Dynamic Syntax (DS) formal account of language processing, whereby we make a specific extension to the core DS lexical architecture to capture the dynamics of linguistic routinisation. This extension is inspired by work within cognitive science more broadly. DS has a range of attractive modelling features, such as full incrementality, as well as recent accounts of using resources of the core grammar for modelling a range of dialogue phenomena, all of which we deploy in our account. This leads not only to a fully incremental model of formulaic language, but further, this straightforwardly extends to routinised dialogue phenomena. We consider this approach to be a proof of concept of how interdisciplinary work within cognitive science holds out the promise of meeting challenges faced by modellers of dialogue and discourse.
A syntactic language model based on incremental CCG parsing
Syntactically-enriched language models (parsers) constitute a promising component in applications such as machine translation and speech recognition. To maintain a useful level of accuracy, existing parsers are non-incremental and must span a combinatorially growing space of possible structures as every input word is processed. This prohibits their incorporation into standard linear-time decoders. In this paper, we present an incremental, linear-time dependency parser based on Combinatory Categorial Grammar (CCG) and classification techniques. We devise a deterministic transform of CCGbank canonical derivations into incremental ones, and train our parser on this data. We discover that a cascaded, incremental version provides an appealing balance between efficiency and accuracy.
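The strictly left-to-right regime described in the abstracts above can be illustrated with a toy CCG fragment: the parser keeps a single category for the prefix seen so far and combines each incoming supertag with it, so the state never grows with sentence length. The lexicon and the restriction to forward/backward application are simplifications for illustration; real CCG parsers use a larger combinator inventory.

```python
# Toy sketch of strictly left-to-right CCG combination: one category
# summarises the prefix, and each new word's supertag is combined
# with it. Lexicon and notation are illustrative only.

def parse_cat(s):
    # parse "S\\NP" etc. into (result, slash, argument), atoms as strings
    depth = 0
    for i in range(len(s) - 1, -1, -1):
        ch = s[i]
        if ch == ')':
            depth += 1
        elif ch == '(':
            depth -= 1
        elif ch in '/\\' and depth == 0:
            strip = lambda t: t[1:-1] if t.startswith('(') else t
            return (parse_cat(strip(s[:i])), ch, parse_cat(strip(s[i+1:])))
    return s

def combine(left, right):
    # forward application:  X/Y  Y   -> X
    if isinstance(left, tuple) and left[1] == '/' and left[2] == right:
        return left[0]
    # backward application: Y    X\Y -> X
    if isinstance(right, tuple) and right[1] == '\\' and right[2] == left:
        return right[0]
    return None  # this toy fragment has no other combinators

LEXICON = {"the": "NP/N", "dog": "N", "barks": "S\\NP"}

def incremental_parse(words):
    prefix = None
    for w in words:
        cat = parse_cat(LEXICON[w])
        prefix = cat if prefix is None else combine(prefix, cat)
    return prefix

result = incremental_parse(["the", "dog", "barks"])
assert result == "S"   # NP/N + N -> NP, then NP + S\NP -> S
```

Because the parser carries only one prefix category forward, its per-word cost is constant, which is exactly the property that lets such a model sit inside a linear-time decoder.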
Arc-Standard Spinal Parsing with Stack-LSTMs
We present a neural transition-based parser for spinal trees, a dependency
representation of constituent trees. The parser uses Stack-LSTMs that compose
constituent nodes with dependency-based derivations. In experiments, we show
that this model adapts to different styles of dependency relations, but this
choice has little effect for predicting constituent structure, suggesting that
LSTMs induce useful states by themselves. (IWPT 2017)