7 research outputs found

    A generation-oriented workbench for performance grammar: Capturing linear order variability in German and Dutch

    We describe a generation-oriented workbench for the Performance Grammar (PG) formalism, highlighting the treatment of certain word order and movement constraints in Dutch and German. PG enables a simple and uniform treatment of a heterogeneous collection of linear order phenomena in the domain of verb constructions (variously known as Cross-serial Dependencies, Verb Raising, Clause Union, Extraposition, Third Construction, Particle Hopping, etc.). The central data structures enabling this feature are clausal "topologies": one-dimensional arrays associated with clauses, whose cells ("slots") provide landing sites for the constituents of the clause. Movement operations are enabled by unification of lateral slots of topologies at adjacent levels of the clause hierarchy. The PGW generator assists the grammar developer in testing whether the implemented syntactic knowledge allows all and only the well-formed permutations of constituents.
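    The slot-and-unification idea in this abstract can be illustrated with a small sketch. Everything below (the slot names, the Dutch example words, the sharing mechanism) is a hypothetical illustration of the idea, not the actual PGW implementation:

```python
class Slot:
    """A topology cell; constituents 'land' here."""
    def __init__(self, name):
        self.name = name
        self.fillers = []  # constituents placed in this slot, in arrival order

class Topology:
    # An illustrative left-to-right slot layout for a Dutch/German-like clause.
    SLOT_NAMES = ["forefield", "midfield", "verb_cluster", "endfield"]

    def __init__(self):
        self.slots = {n: Slot(n) for n in self.SLOT_NAMES}

    def place(self, slot_name, constituent):
        self.slots[slot_name].fillers.append(constituent)

    def unify_slot(self, slot_name, other):
        # "Movement" without movement: a lateral slot of the subordinate
        # clause is identified with the same slot of the matrix clause,
        # so a constituent placed in one surfaces in the other.
        other.slots[slot_name] = self.slots[slot_name]

    def linearize(self):
        return [f for n in self.SLOT_NAMES for f in self.slots[n].fillers]

# Verb clustering: matrix and embedded clause share one verb-cluster slot.
matrix, sub = Topology(), Topology()
matrix.unify_slot("verb_cluster", sub)
matrix.place("verb_cluster", "wil")       # matrix verb
sub.place("verb_cluster", "zwemmen")      # infinitive from the embedded clause
print(matrix.linearize())                 # → ['wil', 'zwemmen']
```

    Placing the embedded infinitive into the shared slot is what makes it surface inside the matrix clause's linearization, without any destructive movement operation.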

    The Unification Space implemented as a localist neural net: predictions and error-tolerance in a constraint-based parser

    We introduce a novel computer implementation of the Unification-Space parser (Vosse and Kempen in Cognition 75:105–143, 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen and Harbusch in Verb constructions in German and Dutch. Benjamins, Amsterdam, 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least qualitatively and rudimentarily, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
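    A minimal sketch of the kind of interactive activation and inhibition dynamics such a localist network runs on. The node names, weights, and parameter values are illustrative assumptions, not those of the actual parser:

```python
# Standard interactive-activation update: positive net input drives a node
# toward a ceiling, negative net input toward a floor, and decay pulls it
# back to a resting level.
REST, MIN_A, MAX_A, DECAY = -0.1, -0.2, 1.0, 0.1

def step(activations, weights, external):
    """One synchronous update of all nodes.

    activations: dict node -> current activation
    weights:     dict (src, dst) -> signed weight (+ excites, - inhibits)
    external:    dict node -> external (e.g. lexical) input
    """
    new = {}
    for node, a in activations.items():
        # Net input: weighted sum over positively active senders plus input.
        net = external.get(node, 0.0) + sum(
            w * max(activations[src], 0.0)
            for (src, dst), w in weights.items() if dst == node
        )
        delta = net * (MAX_A - a) if net > 0 else net * (a - MIN_A)
        new[node] = min(MAX_A, max(MIN_A, a + delta - DECAY * (a - REST)))
    return new

# Two mutually inhibitory attachment hypotheses competing for one word:
acts = {"NP-attach": 0.0, "VP-attach": 0.0}
w = {("NP-attach", "VP-attach"): -0.5, ("VP-attach", "NP-attach"): -0.5}
for _ in range(30):
    acts = step(acts, w, {"VP-attach": 0.2, "NP-attach": 0.1})
# The hypothesis with stronger support ends up more active.
```

    Lateral inhibition between incompatible analyses is what yields winner-take-all behavior over time, the basic ingredient behind garden-path and reanalysis effects in this style of model.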

    Prolegomena to a neurocomputational architecture for human grammatical encoding and decoding

    The study develops a neurocomputational architecture for grammatical processing in language production and language comprehension (grammatical encoding and decoding, respectively). It seeks to answer two questions. First, how is online syntactic structure formation of the complexity required by natural-language grammars possible in a fixed, preexisting neural network without the need for online creation of new connections or associations? Second, is it realistic to assume that the seemingly disparate instantiations of syntactic structure formation in grammatical encoding and grammatical decoding can run on the same neural infrastructure? This issue is prompted by accumulating experimental evidence for the hypothesis that the mechanisms for grammatical decoding overlap with those for grammatical encoding to a considerable extent, thus inviting the hypothesis of a single "grammatical coder." The paper answers both questions by providing the blueprint for a syntactic structure formation mechanism that is entirely based on prewired circuitry (except for referential processing, which relies on the rapid learning capacity of the hippocampal complex), and can subserve decoding as well as encoding tasks. The model builds on the "Unification Space" model of syntactic parsing developed by Vosse & Kempen (2000, 2008, 2009). The design includes a neurocomputational mechanism for the treatment of an important class of grammatical movement phenomena.

    A quantitative model of word order and movement in English, Dutch and German complement constructions

    We present a quantitative model of word order and movement constraints that enables a simple and uniform treatment of a seemingly heterogeneous collection of linear order phenomena in English, Dutch and German complement constructions (Wh-extraction, clause union, extraposition, verb clustering, particle movement, etc.). Underlying the scheme are central assumptions of the psycholinguistically motivated Performance Grammar (PG). Here we describe this formalism in declarative terms based on typed feature unification. PG allows a homogeneous treatment of both the within- and between-language variations of the ordering phenomena under discussion, which reduce to different settings of a small number of quantitative parameters.
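    Feature unification, the binding operation this declarative formulation is based on, can be sketched as follows; the toy feature structures are illustrative assumptions, not PG's actual feature system:

```python
def unify(a, b):
    """Unify two feature structures given as nested dicts.

    Compatible features are merged recursively; a value clash makes the
    whole unification fail (returns None).
    """
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else None   # atomic values must match exactly
    out = dict(a)
    for key, bval in b.items():
        if key in out:
            sub = unify(out[key], bval)
            if sub is None:
                return None            # feature clash propagates upward
            out[key] = sub
        else:
            out[key] = bval            # feature only in b: just adopt it
    return out

print(unify({"cat": "NP", "agr": {"num": "sg"}}, {"agr": {"pers": 3}}))
# → {'cat': 'NP', 'agr': {'num': 'sg', 'pers': 3}}
print(unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}}))
# → None (number clash)
```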

    Syntactic co-activation in bilinguals

    Each human language possesses a distinctive set of syntactic rules, and early, balanced bilinguals must learn two syntactic systems. The organisation of these systems in the bilingual brain is not yet clear; do they remain autonomous, or do they interact? This thesis examines the extent to which bilinguals' knowledge of syntactic rules is co-active during monolingual sentence processing. Thus, the primary objective is to assess (a) whether bilinguals co-activate idiosyncratic syntactic rules, (b) how syntactic co-activation occurs, and (c) when syntactic co-activation occurs, focusing on contextual constraints. To this end, I manipulated English sentences according to the Welsh rules of soft mutation (a morphosyntactic process that alters the initial consonant of words), such that English sentences included 'mutated' (e.g. prince → brince) or 'aberrant' (e.g. prince → grince) nonwords, presented either explicitly or implicitly. In Chapters 3 and 4, syntactic co-activation led to the modulation of the phonological mismatch negativity (PMN), but only in sentences that would elicit a mutation in Welsh. Crucially, processing of explicitly processed nonwords was not influenced by lexical overlap between languages, indicating that bilinguals co-activate abstract syntactic rules during sentence processing. In Chapter 5, eye-movements were measured to determine the extent to which syntactic co-activation occurs in natural sentence reading (in which manipulated target words were implicitly processed). Syntactic co-activation manifested on later processing measures, reflected in longer reading times. Interestingly, this effect was restricted to trials in which there was lexical overlap between languages, suggesting that co-activation is sensitive to a lexical boost effect.
Based on these findings, I propose a model of syntactic co-activation that is constrained by contextual demands: syntactic co-activation can occur via abstraction of syntactic rules, but may also rely on cross-language lexico-syntactic associations in certain contexts.

    Diagnostic CALL tool for Arabic learners
