A Computational Cognitive Model of Syntactic Priming
The psycholinguistic literature has identified two syntactic adaptation effects in language production: rapidly decaying short-term priming and long-lasting adaptation. To explain both effects, we present an ACT-R model of syntactic priming based on a wide-coverage, lexicalized syntactic theory that explains priming as facilitation of lexical access. In this model, two well-established ACT-R mechanisms, base-level learning and spreading activation, account for long-term adaptation and short-term priming, respectively. Our model simulates incremental language production, and in a series of modeling studies we show that it accounts for (a) the inverse frequency interaction; (b) the absence of a decay in long-term priming; and (c) the cumulativity of long-term adaptation. The model also explains the lexical boost effect and the fact that it only applies to short-term priming. We also present corpus data that verify a prediction of the model, i.e., that the lexical boost affects all lexical material, rather than just heads.
Keywords: syntactic priming, adaptation, cognitive architectures, ACT-R, categorial grammar, incrementality
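The division of labor the abstract describes can be illustrated with the standard ACT-R activation equations. The sketch below is a simplification of my own (the parameter values and the single-source context are assumptions, not the paper's settings): base-level learning sums power-law-decayed traces of past presentations and so never fully decays (long-term adaptation), while spreading activation is a context-dependent boost that vanishes as soon as the priming source leaves the context (short-term priming).

```python
import math

def base_level_activation(presentation_times, now, decay=0.5):
    """Standard ACT-R base-level learning equation:
    B_i = ln( sum_j (now - t_j)^(-d) ).
    Older presentations contribute less, but the sum never reaches
    zero, which models long-lasting, non-decaying adaptation."""
    return math.log(sum((now - t) ** -decay for t in presentation_times))

def total_activation(base, source_activations, weights):
    """Base-level activation plus spreading activation from items
    currently in the context (e.g., just-processed lexical material).
    This extra term disappears once the sources leave the context,
    modeling rapidly decaying short-term priming."""
    return base + sum(w * s for w, s in zip(weights, source_activations))

# A construction presented at t = 0 and t = 10, retrieved at t = 20,
# with one active context source (illustrative numbers):
b = base_level_activation([0.0, 10.0], now=20.0)
a = total_activation(b, source_activations=[1.0], weights=[0.3])
```

Removing the context source drops `a` back to `b` immediately, while `b` itself only shrinks slowly with time, mirroring the two timescales of the effects described above.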
The Computational Analysis of the Syntax and Interpretation of Free Word Order in Turkish
In this dissertation, I examine a language with “free” word order, specifically Turkish, in order to develop a formalism that can capture the syntax and the context-dependent interpretation of “free” word order within a computational framework. In “free” word order languages, word order is used to convey distinctions in meaning that are not captured by traditional truth-conditional semantics. The word order indicates the “information structure”, e.g., what the “topic” and the “focus” of the sentence are. The context-appropriate use of “free” word order is of considerable importance in developing practical applications in natural language interpretation, generation, and machine translation.
I develop a formalism called Multiset-CCG, an extension of Combinatory Categorial Grammars (CCGs; Ades/Steedman 1982, Steedman 1985), and demonstrate its advantages in an implementation of a database query system that interprets Turkish questions and generates answers with contextually appropriate word orders. Multiset-CCG is a context-sensitive and polynomially parsable grammar that captures the formal and descriptive properties of “free” word order and restrictions on word order in simple and complex sentences (with discontinuous constituents and long-distance dependencies). Multiset-CCG captures the context-dependent meaning of word order in Turkish by compositionally deriving the predicate-argument structure and the information structure of a sentence in parallel. The advantages of using such a formalism are that it is computationally attractive and that it provides a compositional and flexible surface structure that allows syntactic constituents to correspond to information structure constituents. A formalism that integrates information structure and syntax, such as Multiset-CCG, is essential to the computational tasks of interpreting and generating sentences with contextually appropriate word orders in “free” word order languages.
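The core idea of treating a functor's arguments as a multiset, so that they can be consumed in any surface order, can be sketched in a few lines. This is a toy simplification of my own, not Hoffman's actual formalism; the category names and the example verb are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class MultisetCategory:
    """A toy functor category whose arguments form a multiset:
    unlike a standard CCG slash category, no argument order is
    imposed, which is how free word order is licensed."""
    result: str
    args: list  # multiset of argument categories; order is irrelevant

    def apply(self, arg):
        """Consume one argument from the multiset, in any order."""
        if arg not in self.args:
            raise ValueError(f"{arg} is not an expected argument")
        remaining = self.args.copy()
        remaining.remove(arg)
        return MultisetCategory(self.result, remaining) if remaining else self.result

# A transitive verb expecting a nominative and an accusative NP:
verb = MultisetCategory("S", ["NP[nom]", "NP[acc]"])

# SOV-style derivation: consume the accusative first, then the nominative.
step = verb.apply("NP[acc]")
assert step.apply("NP[nom]") == "S"

# OSV-style derivation: the same category accepts the reverse order.
step2 = MultisetCategory("S", ["NP[nom]", "NP[acc]"]).apply("NP[nom]")
assert step2.apply("NP[acc]") == "S"
```

In the full formalism the same derivation would also build the information structure in parallel, so that the chosen argument order is tied to a topic/focus interpretation rather than being free variation.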
The Surface Compositional Semantics of English Intonation
This paper proposes a syntax and a semantics for intonation in English and some related languages. The semantics is “surface-compositional”, in the sense that syntactic derivation constructs information-structural logical form monotonically, without rules of structural revision, and without autonomous rules of “focus projection”. This is made possible by the generalized notion of syntactic constituency afforded by Combinatory Categorial Grammar (CCG), in particular the fact that its rules are restricted to string-adjacent type-driven combination. In this way, the grammar unites intonation structure and information structure with surface-syntactic derivational structure and Montague-style compositional semantics, even when they deviate radically from traditional surface structure. The paper revises and extends earlier CCG-based accounts of intonational semantics, grounding hitherto informal notions like “theme” and “rheme” (a.k.a. “topic” and “comment”, “presupposition” and “focus”, etc.) and “background” and “contrast” (a.k.a. “given” and “new”, “focus”, etc.) in a logic of speaker/hearer supposition and update, using a version of Rooth’s Alternative Semantics. A CCG grammar fragment is defined which constrains language-specific intonation and its interpretation more narrowly than previous attempts.
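The restriction to string-adjacent, type-driven combination that the abstract appeals to can be made concrete with CCG's two application rules. The sketch below uses toy category strings of my own choosing (it is not the paper's grammar fragment): forward and backward application only ever combine two adjacent constituents, and which rule fires is driven entirely by the categories' types.

```python
def _strip(cat):
    # Drop one layer of outer parentheses, e.g. "(S\\NP)" -> "S\\NP".
    return cat[1:-1] if cat.startswith("(") and cat.endswith(")") else cat

def forward_apply(left, right):
    """X/Y  Y  =>  X  (functor consumes the adjacent argument to its right)."""
    if "/" in left:
        x, y = left.rsplit("/", 1)
        if y == right:
            return _strip(x)
    return None

def backward_apply(left, right):
    """Y  X\\Y  =>  X  (functor consumes the adjacent argument to its left)."""
    if "\\" in right:
        x, y = right.rsplit("\\", 1)
        if y == left:
            return _strip(x)
    return None

# A transitive sentence:  NP  (S\NP)/NP  NP
vp = forward_apply("(S\\NP)/NP", "NP")   # verb + object -> verb phrase
s = backward_apply("NP", vp)             # subject + verb phrase -> sentence
```

Because every rule is of this adjacent, type-driven form, the grammar can also assign categories to "non-traditional" constituents (e.g., a subject plus verb grouped by an intonational phrase), which is what lets derivational structure track intonation structure directly.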