Implicit learning of recursive context-free grammars
Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning
experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have
not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing
features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured
the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both
distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes
even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between
individual words. The structural distinctions drawn from linguistics also proved important as performance was greater for
tail-embedding than centre-embedding structures. The results suggest the plausibility of implicit learning of complex
context-free structures, which model some features of natural languages. They support the relevance of artificial grammar
learning for probing mechanisms of language learning and challenge existing theories and computational models of
implicit learning.
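The structural contrast at issue can be made concrete with a toy sketch (these are illustrative grammars, not the ones used in the study): centre-embedding nests dependencies inside one another, so the first opened dependency is the last to close, while tail-embedding (right-branching) closes each dependency before the next one opens.

```python
# Toy recursive CFGs illustrating the two embedding types (illustrative only).
# Centre-embedding: S -> a S b | empty, yielding a^n b^n, where the
# dependencies nest (a1 a2 ... b2 b1) and must be held open in memory.
def centre_embed(depth):
    if depth == 0:
        return []
    return ["a"] + centre_embed(depth - 1) + ["b"]

# Tail-embedding (right-branching): S -> a b S | empty, yielding (a b)^n,
# where each a-b dependency is resolved before the next one begins.
def tail_embed(depth):
    if depth == 0:
        return []
    return ["a", "b"] + tail_embed(depth - 1)

print(centre_embed(3))  # ['a', 'a', 'a', 'b', 'b', 'b']
print(tail_embed(3))    # ['a', 'b', 'a', 'b', 'a', 'b']
```

The nested version forces the longest-distance dependency (the outermost a-b pair), which is one reason centre-embedding is the harder case both for human learners and for simple n-gram models.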
The Varying Roles of Morphosyntax in Memory and Sentence Processing: Retrieval and Encoding Interference in Brazilian Portuguese
Cue-based retrieval models have largely been adopted as a description of how linguistic content is retrieved from memory. Under this framework, a retrieval cue is projected at the site of a dependency and matched with its target using a parallel matching procedure (e.g., Van Dyke and Lewis, 2003). Although this is a highly efficient mechanism, retrieval difficulties occur when there are multiple items stored in memory that serve as potential matches for the retrieval cue(s), which is known as similarity-based interference (SBI). Several studies have demonstrated that a wide variety of linguistic information can generate SBI effects, but it is still relatively unclear what can serve as a retrieval cue (Van Dyke and Johns, 2012). Moreover, recent empirical evidence suggests that similarity-based interference can arise from another source: the encoding mechanism (e.g., Villata et al., 2018). Three hypotheses are addressed regarding three potential retrieval mechanisms: (1) a retrieval mechanism that only relies on cues relevant to the dependency being resolved, (2) one that is sensitive to all of the features overlapping between a target and distractor(s), or (3) a mechanism that is primarily sensitive to relevant features but produces additive interference effects for irrelevant features. Moreover, a fourth hypothesis investigates whether similarity-based interference also arises from the encoding mechanism. In an attempt to disentangle whether sentence processing disruptions occur as a result of retrieval mechanism (1) plus encoding interference or due to one of the other mechanisms, seven self-paced reading experiments were conducted in Brazilian Portuguese. In all of the studies, number was a relevant feature for the resolution of the grammatical dependency (subject-verb dependency in relative clauses or wh-remnant-correlate pairing in sluices) and gender features varied in their relevance.
The rationale behind using these dependencies and features was to test whether syntactically relevant features produced stronger interference effects than irrelevant features and to propose why these results differed. Any findings that showed that irrelevant feature (gender) matches caused reading time slowdowns or decreased comprehension question accuracy before the retrieval site were interpreted as encoding interference. Although results vary across studies, the findings in this thesis provide the most support for a combination of retrieval (mechanism 1) and encoding interference. Although the other two retrieval mechanisms cannot be completely ruled out at this time, the evidence that gender produced earlier and weaker effects reminiscent of encoding interference and that number produced interference reflective of retrieval interference is novel.
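The three candidate retrieval mechanisms can be sketched as cue-match scoring functions over feature bundles. This is a hypothetical formalization for illustration; the function names, feature encoding, and the 0.3 weight are assumptions, not taken from the thesis.

```python
# Hypothetical sketch: the three retrieval mechanisms as scoring functions.
# Items and cues are dicts of feature -> value; all names are illustrative.

def match_relevant_only(cues, item):
    # (1) score only the cues relevant to the dependency being resolved
    return sum(1 for f, v in cues.items() if item.get(f) == v)

def match_all_overlap(cues, item, context_features):
    # (2) every overlapping feature counts, relevant or not
    relevant = sum(1 for f, v in cues.items() if item.get(f) == v)
    irrelevant = sum(1 for f, v in context_features.items()
                     if f not in cues and item.get(f) == v)
    return relevant + irrelevant

def match_weighted(cues, item, context_features, w_irrelevant=0.3):
    # (3) relevant cues dominate; irrelevant matches add smaller, additive boosts
    relevant = sum(1 for f, v in cues.items() if item.get(f) == v)
    irrelevant = sum(1 for f, v in context_features.items()
                     if f not in cues and item.get(f) == v)
    return relevant + w_irrelevant * irrelevant

# A number-matching, gender-matching distractor for a subject-verb dependency:
distractor = {"number": "sg", "gender": "fem", "subject": False}
cues = {"number": "sg", "subject": True}      # syntactically relevant cues
context = {"gender": "fem"}                   # irrelevant but overlapping

print(match_relevant_only(cues, distractor))             # 1
print(match_all_overlap(cues, distractor, context))      # 2
```

Under mechanism (1) the distractor interferes only through the shared number cue; under (2) and (3) the matching gender feature adds further interference, which is what the gender manipulations in the experiments were designed to detect.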
Statistical deep parsing for Spanish
This document presents the development of a statistical HPSG parser for Spanish. HPSG is a deep linguistic formalism that combines syntactic and semantic information in the same representation, and is capable of elegantly modeling many linguistic phenomena. Our research consists of the following steps: design of the HPSG grammar, construction of the corpus, implementation of the parsing algorithms, and evaluation of the parsers' performance. We created a simple yet powerful HPSG grammar for Spanish that models morphosyntactic information of words, syntactic combinatorial valence, and semantic argument structures in its lexical entries. The grammar uses thirteen very broad rules for attaching specifiers, complements, modifiers, clitics, relative clauses and punctuation symbols, and for modeling coordination. In a simplification from standard HPSG, the only type of long-range dependency we model is the relative clause that modifies a noun phrase, and we use semantic role labeling as our semantic representation. We transformed the Spanish AnCora corpus using a semi-automatic process and analyzed it using our grammar implementation, creating a Spanish HPSG corpus of 517,237 words in 17,328 sentences (all of AnCora). We implemented several statistical parsing algorithms and trained them over this corpus. The implemented strategies are: a bottom-up baseline using bi-lexical comparisons or a multilayer perceptron; a CKY approach that uses the results of a supertagger; and a top-down approach that encodes word sequences using an LSTM network. We evaluated the performance of the implemented parsers and compared them with each other and against other existing Spanish parsers.
Our LSTM top-down approach seems to be the best-performing parser over our test data, obtaining the highest scores (compared to our strategies and also to external parsers) according to constituency metrics (87.57 unlabeled F1, 82.06 labeled F1), dependency metrics (91.32 UAS, 88.96 LAS), and SRL (87.68 unlabeled, 80.66 labeled), but we must take into consideration that the comparison against the external parsers might be noisy due to the post-processing we needed to do in order to adapt them to our format. We also defined a set of metrics to evaluate the identification of some particular language phenomena, and the LSTM top-down parser outperformed the baselines in almost all of these metrics as well.
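The CKY strategy mentioned above can be sketched in miniature: categories proposed for each word (in the thesis, by a supertagger) are combined bottom-up over increasingly long spans using binary rules. This is an illustrative toy, with a made-up lexicon and rule set rather than the thesis's HPSG grammar.

```python
# Minimal CKY recognizer sketch (toy grammar; names are illustrative).
def cky(words, lexicon, rules):
    n = len(words)
    # chart[i][j] holds the set of categories spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] |= lexicon[w]   # a supertagger would supply these
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):            # split point
                for parent, left, right in rules:
                    if left in chart[i][k] and right in chart[k][j]:
                        chart[i][j].add(parent)
    return chart[0][n]                            # categories for the whole string

lexicon = {"el": {"Det"}, "perro": {"N"}, "ladra": {"V"}}
rules = [("NP", "Det", "N"), ("S", "NP", "V")]
print(cky(["el", "perro", "ladra"], lexicon, rules))  # {'S'}
```

A real HPSG implementation would replace the atomic categories with typed feature structures and rule applications with unification, but the chart-filling loop has the same shape.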
Cue-based reflexive reference resolution: Evidence from Korean reflexive caki
This dissertation aims to reveal the cognitive mechanisms and factors that underlie reflexive dependency formation. In recent years, much attention has been paid to the question of how our mind works in building linguistic dependencies (including an antecedent-reflexive dependency), because this line of research has proved promising and illuminating with regard to the properties (e.g., system architecture, computational algorithms, etc.) of the human language processor and its close connection with other cognitive functions such as memory (Lewis & Vasishth, 2005; Lewis, Vasishth, & Van Dyke, 2006; McElree, 2000; McElree, Foraker, & Dyer, 2003; Van Dyke & Johns, 2012; Wagers, Lau, & Phillips, 2009). Building upon this research, the present dissertation provides empirical evidence to show that the parser can directly access potential antecedents (stored in memory) in forming an antecedent-reflexive dependency, using various linguistic cues and contextual knowledge available at the reflexive.
In order to make this claim, this dissertation examines the Korean mono-morphemic reflexive caki ‘self’ (also known as a long-distance anaphor), using acceptability judgment and self-paced reading methodologies, and asks (i) what linguistic factors guide its reference resolution and (ii) how they are applied to cognitive processes for memory retrieval and phrase structure building.
A series of acceptability judgment experiments (Experiments 1 through 5) show that caki has a very robust referential bias: it strongly prefers a subject antecedent. Moreover, it is established that syntactic constraints (e.g., binding constraints) are not the only available source of information during caki's reference resolution. Indeed, various non-syntactic sources of information (or cues) can also determine caki's reference resolution. Three self-paced reading experiments (Experiments 6 through 8) provide evidence compatible with the direct-access content-addressable memory retrieval model (Lewis & Vasishth, 2005; Lewis et al., 2006; McElree, 2000; Van Dyke & McElree, 2011).
Based on these experimental findings, I present an explanation of why caki preferentially forms a dependency with a subject antecedent. I argue that caki's subject antecedent bias is driven both externally (i.e., syntactic prominence of a grammatical subject and first-mention advantage) and internally (i.e., frequency-based prediction on the caki-subject dependency relation). Finally, I showcase how a referential dependency between caki and a potential antecedent can be constructed by the cue-based retrieval parser (Lewis et al., 2006; Van Dyke & Lewis, 2003).
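Direct-access, content-addressable retrieval of the kind argued for here can be sketched as parallel scoring of all memory chunks against the cues projected at the reflexive, with the best match retrieved in one step rather than by serial search. The feature names, weights, and example antecedents below are hypothetical illustrations, not materials or parameters from the dissertation.

```python
# Hypothetical sketch of cue-based, direct-access antecedent retrieval.
def retrieve(candidates, cues, weights):
    def activation(item):
        # each matching cue contributes its weight; all items scored in parallel
        return sum(weights[f] for f, v in cues.items() if item.get(f) == v)
    # direct access: the best-matching chunk is retrieved in a single step
    return max(candidates, key=activation)

memory = [
    {"name": "Yuna",  "subject": True,  "animate": True, "first_mention": True},
    {"name": "Minho", "subject": False, "animate": True, "first_mention": False},
]
# Cues projected at caki; the weights encode the subject bias (externally
# driven prominence plus an internally driven frequency-based expectation).
cues = {"subject": True, "animate": True, "first_mention": True}
weights = {"subject": 2.0, "animate": 1.0, "first_mention": 0.5}

print(retrieve(memory, cues, weights)["name"])  # Yuna
```

On this sketch, a non-subject competitor can still interfere when it matches enough cues, while the weighting reproduces the robust subject preference observed in the judgment experiments.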
Grammatical theory: From transformational grammar to constraint-based approaches. Second revised and extended edition.
This book is superseded by the third edition, available at http://langsci-press.org/catalog/book/255.
This book introduces formal grammar theories that play a role in current linguistic theorizing (Phrase Structure Grammar, Transformational Grammar/Government & Binding, Generalized Phrase Structure Grammar, Lexical Functional Grammar, Categorial Grammar, Head-Driven Phrase Structure Grammar, Construction Grammar, Tree Adjoining Grammar). The key assumptions are explained and it is shown how the respective theory treats arguments and adjuncts, the active/passive alternation, local reorderings, verb placement, and fronting of constituents over long distances. The analyses are explained with German as the object language.
The second part of the book compares these approaches with respect to their predictions regarding language acquisition and psycholinguistic plausibility. The nativism hypothesis, which assumes that humans possess genetically determined innate language-specific knowledge, is critically examined and alternative models of language acquisition are discussed. The second part then addresses controversial issues of current theory building, such as whether flat or binary-branching structures are more appropriate, whether constructions should be treated at the phrasal or the lexical level, and whether abstract, non-visible entities should play a role in syntactic analyses. It is shown that the analyses suggested in the respective frameworks are often translatable into each other. The book closes with a chapter showing how properties common to all languages or to certain classes of languages can be captured.
The book is a translation of the German book Grammatiktheorie, which was published by Stauffenburg in 2010. The following quotes are taken from reviews:
With this critical yet fair reflection on various grammatical theories, Müller fills what was a major gap in the literature. Karen Lehmann, Zeitschrift für Rezensionen zur germanistischen Sprachwissenschaft, 2012
Stefan Müller's recent introductory textbook, Grammatiktheorie, is an astonishingly comprehensive and insightful survey for beginning students of the present state of syntactic theory. Wolfgang Sternefeld and Frank Richter, Zeitschrift für Sprachwissenschaft, 2012
This is the kind of work that has been sought after for a while [...] The impartial and objective discussion offered by the author is particularly refreshing. Werner Abraham, Germanistik, 2012
This book is a new edition of http://langsci-press.org/catalog/book/25