971 research outputs found

    Wh-copying, phases, and successive cyclicity


    Geometric representations for minimalist grammars

    We reformulate minimalist grammars as partial functions on term algebras for strings and trees. Using filler/role bindings and tensor product representations, we construct homomorphisms from these data structures into geometric vector spaces. We prove that the structure-building functions, as well as simple processors for minimalist languages, can be realized by piecewise linear operators in representation space. We also propose harmony, i.e. the distance of an intermediate processing step from the final well-formed state in representation space, as a measure of processing complexity. Finally, we illustrate our findings by means of two particular arithmetic and fractal representations. Comment: 43 pages, 4 figures
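The filler/role binding at the heart of such constructions can be illustrated with a minimal sketch: fillers and roles are vectors, a binding is their tensor (outer) product, and a structure is the superposition of its bindings. The vectors and helper functions below are invented for illustration and are far simpler than the paper's actual construction over term algebras.

```python
# Minimal sketch of filler/role binding via tensor (outer) products.
# Vectors are plain Python lists; this illustrates the general idea only,
# not the paper's construction.

def outer(f, r):
    """Tensor (outer) product of a filler vector f and a role vector r."""
    return [[fi * rj for rj in r] for fi in f]

def add(a, b):
    """Element-wise sum of two matrices (superposition of bindings)."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def unbind(t, r):
    """Recover the filler bound to role r (roles assumed orthonormal)."""
    return [sum(t[i][j] * r[j] for j in range(len(r))) for i in range(len(t))]

# Two fillers ("the", "cat") bound to two orthonormal roles (positions 0, 1).
the, cat = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
r0, r1 = [1.0, 0.0], [0.0, 1.0]

structure = add(outer(the, r0), outer(cat, r1))
assert unbind(structure, r0) == the
assert unbind(structure, r1) == cat
```

Because the roles are orthonormal, unbinding with a role vector exactly recovers the filler stored at that position, even though the whole structure lives in a single superposed tensor.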

    Relative clauses as a benchmark for Minimalist parsing


    Implicit learning of recursive context-free grammars

    Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored the learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important, as performance was greater for tail-embedding than for centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning, and challenge existing theories and computational models of implicit learning.
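The centre- versus tail-embedding contrast can be made concrete with a toy generator: centre-embedding nests every dependency inside the others, so each opener waits for its closer across the whole middle of the string, while tail-embedding closes each dependency locally. The category labels (a1, b1, ...) are invented placeholders, not the study's actual stimuli.

```python
# Toy generator contrasting centre-embedding with tail-embedding.
# Each "ai ... bi" pair represents one grammatical dependency.

def centre_embed(depth):
    """Centre-embedding, e.g. a3 a2 a1 b1 b2 b3; every dependency is
    long-distance and nested inside the others."""
    if depth == 0:
        return []
    return [f"a{depth}"] + centre_embed(depth - 1) + [f"b{depth}"]

def tail_embed(depth):
    """Tail-embedding (right-branching), e.g. a1 b1 a2 b2 a3 b3; each
    dependency closes immediately, so none is long-distance."""
    out = []
    for i in range(1, depth + 1):
        out += [f"a{i}", f"b{i}"]
    return out

print(centre_embed(3))  # ['a3', 'a2', 'a1', 'b1', 'b2', 'b3']
print(tail_embed(3))    # ['a1', 'b1', 'a2', 'b2', 'a3', 'b3']
```

Both functions generate the same multiset of dependencies at each depth; only the linear arrangement differs, which is exactly the variable the study manipulates.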

    The Radical Unacceptability Hypothesis: Accounting for Unacceptability without Universal Constraints

    The Radical Unacceptability Hypothesis (RUH) has been proposed as a way of explaining the unacceptability of extraction from islands and frozen structures. This hypothesis explicitly assumes a distinction between unacceptability due to violations of local well-formedness conditions (conditions on constituency, constituent order, and morphological form) and unacceptability due to extra-grammatical factors. We explore the RUH with respect to classical islands, and extend it to a broader range of phenomena, including freezing, A′ chain interactions, zero-relative clauses, topic islands, weak crossover, extraction from subjects and parasitic gaps, and sensitivity to information structure. The picture that emerges is consistent with the RUH, and suggests more generally that the unacceptability of extraction from otherwise well-formed configurations reflects non-syntactic factors, not principles of grammar. Peer reviewed.

    On Folding and Twisting (and whatknot): towards a characterization of workspaces in syntax

    Syntactic theory has traditionally adopted a constructivist approach, in which a set of atomic elements is manipulated by combinatory operations to yield derived, complex elements. Syntactic structure is thus seen as the result of discrete recursive combinatorics over lexical items, which get assembled into phrases, which are themselves combined to form sentences. This view is common to European and American structuralism (e.g., Benveniste, 1971; Hockett, 1958) and to different incarnations of generative grammar, transformational and non-transformational (Chomsky, 1956, 1995; Kaplan & Bresnan, 1982; Gazdar, 1982). Since at least Uriagereka (2002), some attention has been paid to the fact that syntactic operations must apply somewhere, particularly when copying and movement operations are considered. Contemporary syntactic theory has thus somewhat acknowledged the importance of formalizing aspects of the spaces in which elements are manipulated, but this remains a vastly underexplored area. In this paper we explore the consequences of conceptualizing syntax as a set of topological operations applying over spaces rather than over discrete elements. We argue that there are empirical advantages to such a view in the treatment of long-distance dependencies and cross-derivational dependencies: constraints on possible configurations emerge from the dynamics of the system. Comment: Manuscript. Do not cite without permission. Comments welcome.

    Natural Language Syntax Complies with the Free-Energy Principle

    Natural language syntax yields an unbounded array of hierarchically structured expressions. We claim that these are used in the service of active inference, in accord with the free-energy principle (FEP). While conceptual advances alongside modelling and simulation work have attempted to connect speech segmentation and linguistic communication with the FEP, we extend this program to the underlying computations responsible for generating syntactic objects. We argue that recently proposed principles of economy in language design, such as "minimal search" criteria from theoretical syntax, adhere to the FEP. This affords a greater degree of explanatory power to the FEP with respect to higher language functions, and offers linguistics a grounding in first principles with respect to computability. We show how both tree-geometric depth and a Kolmogorov complexity estimate (recruiting a Lempel-Ziv compression algorithm) can be used to accurately predict legal operations on syntactic workspaces, directly in line with formulations of variational free energy minimization. This is used to motivate a general principle of language design that we term Turing-Chomsky Compression (TCC). We use TCC to align the concerns of linguists with the normative account of self-organization furnished by the FEP, marshalling evidence from theoretical linguistics and psycholinguistics to ground core principles of efficient syntactic computation within active inference.
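As one concrete reading of the compression idea, a Lempel-Ziv-style phrase count gives a crude upper-bound proxy for Kolmogorov complexity: repetitive (compressible) symbol sequences parse into few phrases, incompressible ones into many. The sketch below uses a simple LZ78-style greedy parse and is not necessarily the exact algorithm the paper recruits.

```python
def lz_phrases(s):
    """Greedily parse s left to right into distinct phrases (LZ78-style);
    the phrase count is a rough proxy for Kolmogorov complexity."""
    phrases, cur = set(), ""
    for ch in s:
        cur += ch
        if cur not in phrases:   # shortest prefix not yet seen ends a phrase
            phrases.add(cur)
            cur = ""
    return len(phrases) + (1 if cur else 0)  # count a trailing partial phrase

# A repetitive (compressible) sequence parses into fewer phrases than an
# incompressible sequence of the same length.
print(lz_phrases("abababababab"))
print(lz_phrases("syntaxworkspc"))
```

Applied to serialized syntactic workspaces, the intuition would be that derivational steps keeping the workspace compressible score as cheaper, in line with free-energy minimization.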

    Syntactic Competence and Processing: Constraints on Long-distance A-bar Dependencies in Bilinguals.

    This dissertation investigates the syntactic competence and processing of A-bar dependencies by Sinhala native speakers in their L2 English. The specific focus is on wh-dependencies (wh-questions and relative clauses) and topicalization, given that these phenomena are syntactically distinct across the two languages. Presenting novel results from a series of psycholinguistic experiments, the study re-evaluates the predictive and explanatory power of two recent hypotheses in generative SLA, the Feature Interpretability Hypothesis (FIH) and the Shallow Structure Hypothesis (SSH), which concern the kind of ultimate attainment possible in post-childhood L2 acquisition with regard to syntactic competence and real-time processing. The first part of the dissertation is a re-evaluation of the FIH, in particular the claim that post-childhood L2 learners fail to develop native-like underlying mental representations for the target-language syntax because their access to UG is restricted in the domain of uninterpretable syntactic features. Two experiments (Grammaticality Judgment and Truth-Value Judgment tasks) were conducted with thirty-eight Sinhala L1/English L2 speakers and a control group of thirty-one English monolinguals. Our results are consistent with the hypothesis that highly proficient L2 speakers are capable of acquiring native-like syntactic competence even in domains where L2 acquisition involves the mastery of a new uninterpretable feature. The fact that these L2 speakers have been able to overcome a poverty-of-the-stimulus problem, imposed by both their L1 syntax and the L2 input, implies that full access to UG is available in post-childhood L2 acquisition, against the predictions of the FIH.
The second part of the dissertation re-evaluates a tenet of the Shallow Structure Hypothesis: that in real-time processing of the target language, L2 speakers fail to build full-fledged syntactic representations and instead over-rely on non-syntactic information (lexical semantics and contextual cues), unlike native speakers of the target language. Our results from two Self-paced Reading experiments with thirty-six bilinguals and thirty-nine monolinguals support the conclusion that advanced L2 learners are capable of building complex, native-like syntactic representations during real-time comprehension of the target language. Thus, the study concludes that neither the FIH nor the SSH can be maintained in the experimental L2 acquisition domain investigated in this dissertation.
PhD, Linguistics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116655/1/sujeewa_1.pd

    Parameters of Cross-linguistic Variation in Expectation-based Minimalist Grammars (e-MGs)

    The fact that Parsing and Generation share the same grammatical knowledge is often considered the null hypothesis (Momma and Phillips 2018), but very few algorithms can take advantage of a cognitively plausible incremental procedure that operates roughly in the way words are produced and understood in real time. This is especially difficult once we consider cross-linguistic variation that has a clear impact on word order. In this paper, I present one such formalism, dubbed Expectation-based Minimalist Grammar (e-MG), which qualifies as a simplified version of (Conflated) Minimalist Grammars, (C)MGs (Stabler 1997, 2011, 2013), and Phase-based Minimalist Grammars, PMGs (Chesi 2005, 2007; Stabler 2011). The crucial simplification consists in driving structure building using only lexically encoded categorial top-down expectations. The commitment to the top-down procedure (in e-MGs and PMGs, as opposed to (C)MGs) will be crucial for capturing a relevant set of empirical asymmetries in a parameterized cross-linguistic perspective, which represents the least common denominator of structure building in both Parsing and Generation.
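The core idea that lexically encoded categorial expectations alone drive top-down structure building can be sketched with a toy incremental recognizer: each word names its category and the categories it expects next, and recognition succeeds when every pending expectation is discharged in left-to-right order. The lexicon and category labels below are invented, and this is a drastic simplification of the e-MG formalism, not Chesi's actual algorithm.

```python
# Toy illustration of lexically encoded top-down expectations. Each lexical
# entry is (category, list of expected categories). Invented mini-lexicon:
LEXICON = {
    "the":    ("D", ["N"]),   # a determiner opens an expectation for a noun
    "cat":    ("N", []),
    "sleeps": ("V", []),
    "sees":   ("V", ["D"]),   # a transitive verb expects an object DP
}

def parse(words, start=("S", ["D", "V"])):
    """Consume words left to right, each word satisfying the leftmost
    pending expectation; return True iff all expectations are discharged."""
    _, expected = start
    stack = list(reversed(expected))       # leftmost expectation on top
    for w in words:
        if not stack:
            return False                   # a word with nothing to attach to
        cat, expects = LEXICON[w]
        if stack.pop() != cat:
            return False                   # category mismatch
        stack.extend(reversed(expects))    # open the word's own expectations
    return not stack                       # success iff nothing is pending
```

For example, `parse(["the", "cat", "sleeps"])` succeeds, while `parse(["the", "sleeps"])` fails because the determiner's noun expectation is still the topmost one when the verb arrives; cross-linguistic word-order variation would be modeled by parameterizing the order of the expectation lists.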