
    Some word order biases from limited brain resources: A mathematical approach

    In this paper, we propose a mathematical framework for studying word order optimization. The framework relies on the well-known positive correlation between cognitive cost and the Euclidean distance between the elements (e.g. words) involved in a syntactic link. We study the conditions under which a certain word order is more economical than an alternative word order. We apply our methodology to two different cases: (a) the ordering of subject (S), verb (V) and object (O), and (b) the covering of a root word by a syntactic link. For the former, we find that SVO and its symmetric, OVS, are more economical than SOV, OSV, VOS and VSO at least 2/3 of the time. For the latter, we find that uncovering the root word is more economical than covering it at least 1/2 of the time. With the help of our framework, one can explain some Greenbergian universals. Our findings provide further theoretical support for the hypothesis that the limited resources of the brain introduce biases toward certain word orders. Our theoretical findings could inspire or illuminate future psycholinguistic or corpus linguistics studies.
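    The economy argument can be illustrated with a toy calculation. The following is a minimal sketch, not the paper's full framework: it assumes only two syntactic links, S-V and V-O, and uses linear distance between linked words as a stand-in for the paper's cognitive-cost measure. Under these assumptions, the two verb-medial orders come out cheapest:

```python
from itertools import permutations

def total_link_distance(order):
    # Assumed dependency links: S-V and V-O; the cost is the sum of
    # linear distances between linked elements (a simple proxy for
    # the correlation between cognitive cost and distance).
    pos = {w: i for i, w in enumerate(order)}
    return abs(pos["S"] - pos["V"]) + abs(pos["V"] - pos["O"])

for order in permutations("SVO"):
    # SVO and OVS (verb in the middle) score 2; the other four score 3.
    print("".join(order), total_link_distance(order))
```

    This matches the abstract's claim only in spirit: the paper's result concerns probabilities over positions, not a single fixed configuration.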

    Cognitive processing, language typology, and variation

    Linguistic typological preferences have often been linked to cognitive processing preferences, but often without recourse to typologically relevant experiments on cognitive processing. This article reviews experimental work on the possible parallels between preferences in cognitive processing and language typology. I summarize the main theoretical accounts of the processing‐typology connection and show that typological distributions arise diachronically from preferred paths of language change, which may be affected by the degree to which alternative structures are preferred (e.g., easier) in acquisition or usage. The surveyed experimental evidence shows that considerable support exists for the view that many linguistic universals reflect preferences in cognitive processing. Artificial language learning experiments emerge as a promising method for researching the processing‐typology connection, as long as its limitations are taken into account. I further show that social and cultural differences in cognition may have an effect on typological distributions, and that to account for this variation a multidisciplinary approach to the processing‐typology connection has to be developed. Lastly, since the body of experimental research does not adequately represent the linguistic diversity of the world's languages, it remains an urgent task for the field to better account for this diversity in future work.

    Universal linguistic inductive biases via meta-learning

    How do learners acquire languages from the limited data available to them? This process must involve some inductive biases - factors that affect how a learner generalizes - but it is unclear which inductive biases can explain observed patterns in language acquisition. To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases. This framework disentangles universal inductive biases, which are encoded in the initial values of a neural network's parameters, from non-universal factors, which the neural network must learn from data in a given language. The initial state that encodes the inductive biases is found with meta-learning, a technique through which a model discovers how to acquire new languages more easily via exposure to many possible languages. By controlling the properties of the languages that are used during meta-learning, we can control the inductive biases that meta-learning imparts. We demonstrate this framework with a case study based on syllable structure. First, we specify the inductive biases that we intend to give our model, and then we translate those inductive biases into a space of languages from which a model can meta-learn. Finally, using existing analysis techniques, we verify that our approach has imparted the linguistic inductive biases that it was intended to impart. (To appear in the Proceedings of the 42nd Annual Conference of the Cognitive Science Society.)
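    The core idea, an initialization that encodes biases shared across a family of languages, can be sketched with a toy first-order (Reptile-style) meta-learning loop. This is an illustrative stand-in, not the paper's setup with neural language models: here each "language" is a one-parameter regression task, and the meta-learned initial weight encodes what the task family has in common.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapt(w, xs, ys, lr=0.05, steps=20):
    # Inner loop: gradient descent on squared error for one task
    # (one "language" the learner is exposed to).
    for _ in range(steps):
        grad = 2 * np.mean((xs * w - ys) * xs)
        w = w - lr * grad
    return w

# Outer loop: Reptile-style meta-update. Nudging the shared
# initialization toward each task's adapted weights makes the
# initial state encode the structure shared across tasks, i.e.
# an inductive bias over the task family.
w_init = 0.0
for _ in range(200):
    a = rng.uniform(1.0, 3.0)          # task-specific parameter
    xs = rng.normal(size=50)
    ys = a * xs
    w_task = adapt(w_init, xs, ys)
    w_init += 0.1 * (w_task - w_init)  # move the initialization

# w_init settles near the centre of the task family, so a new task
# drawn from the same family can be learned in fewer inner steps.
```

    Controlling which tasks appear in the outer loop controls the bias imparted, which is the manipulation the framework relies on.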

    Two Case Studies in Phonological Universals: A View from Artificial Grammars

    This article summarizes the results of two experiments that use artificial grammar learning in order to test proposed phonological universals. The first universal involves limits on precedence-modification in phonological representations, drawn from a typology of ludlings (language games). It is found that certain precedence-modifying operations that are unattested in ludlings are also dispreferred by learners in experimental studies, suggesting that the typological gap reflects a principled and universal aspect of language structure. The second universal involves differences between vowels and consonants, and in particular the fact that phonological typology finds vowel repetition and harmony to be widespread, while consonants are more likely to dissimilate. An artificial grammar task replicates this bias in the laboratory, suggesting that its presence in natural languages is not due to historical accident but to cognitive constraints on the form of linguistic grammars.

    Innovation of word order harmony across development

    The tendency for languages to use harmonic word order patterns—orders that place heads in a consistent position with respect to modifiers or other dependents—has been noted since the 1960s. As with many other statistical typological tendencies, there has been debate regarding whether harmony reflects properties of human cognition or forces external to it. Recent research using laboratory language learning has shown that children and adults find harmonic patterns easier to learn than nonharmonic patterns (Culbertson & Newport, 2015; Culbertson, Smolensky, & Legendre, 2012). This supports a link between learning and typological frequency: if harmonic patterns are easier to learn, while nonharmonic patterns are more likely to be targets of change, then, all things equal, harmonic patterns will be more frequent in the world’s languages. However, these previous studies relied on variation in the input as a mechanism for change in the lab; learners were exposed to variable word order, allowing them to shift the frequencies of different orders so that harmonic patterns became more frequent. Here we teach adult and child learners languages that are consistently nonharmonic, with no variation. While adults perfectly maintain these consistently nonharmonic patterns, young child learners innovate novel orders, changing nonharmonic patterns into harmonic ones.

    Explanation in typology

    This volume provides an up-to-date discussion of a foundational issue that has recently taken centre stage in linguistic typology and which is relevant to the language sciences more generally: To what extent can cross-linguistic generalizations, i.e. statistical universals of linguistic structure, be explained by the diachronic sources of these structures? Everyone agrees that typological distributions are the result of complex histories, as “languages evolve into the variation states to which synchronic universals pertain” (Hawkins 1988). However, an increasingly popular line of argumentation holds that many, perhaps most, typological regularities are long-term reflections of their diachronic sources, rather than being ‘target-driven’ by overarching functional-adaptive motivations.

    The language faculty that wasn't: a usage-based account of natural language recursion

    In the generative tradition, the language faculty has been shrinking—perhaps to include only the mechanism of recursion. This paper argues that even this view of the language faculty is too expansive. We first argue that a language faculty is difficult to reconcile with evolutionary considerations. We then focus on recursion as a detailed case study, arguing that our ability to process recursive structure does not rely on recursion as a property of the grammar, but instead emerges gradually by piggybacking on domain-general sequence learning abilities. Evidence from genetics, comparative work on non-human primates, and cognitive neuroscience suggests that humans have evolved complex sequence learning skills, which were subsequently pressed into service to accommodate language. Constraints on sequence learning therefore have played an important role in shaping the cultural evolution of linguistic structure, including our limited abilities for processing recursive structure. Finally, we re-evaluate some of the key considerations that have often been taken to require the postulation of a language faculty.

    Complementing quantitative typology with behavioral approaches: Evidence for typological universals

    Two main classes of theory have been advanced to explain correlations between linguistic features like those observed by Greenberg (1963). Arbitrary constraint theories argue that certain sets of features pattern together because they have a single underlying cause in the innate language faculty (e.g., the Principles and Parameters program; see Chomsky & Lasnik 1993). Functional theories argue that languages are less likely to have certain combinations of properties because, although possible in principle, they are harder to learn or to process, or less suitable for efficient communication (Hockett 1960, Bates & MacWhinney 1989, Hawkins 2004, Dryer 2007, Christiansen & Chater 2008; for further discussion see Hawkins 2007 and Jaeger & Tily 2011). The failure of Dunn, Greenhill, Levinson & Gray (2011) to find systematic feature correlations using their novel computational phylogenetic methods calls into question both of these classes of theory.

    Dependencies in language: On the causal ontology of linguistic systems

    Dependency is a fundamental concept in the analysis of linguistic systems. The many if-then statements offered in typology and grammar-writing imply a causally real notion of dependency that is central to the claim being made—usually with reference to widely varying timescales and types of processes. But despite the importance of the concept of dependency in our work, its nature is seldom defined or made explicit. This book brings together experts on language, representing descriptive linguistics, language typology, functional/cognitive linguistics, cognitive science, research on gesture and other semiotic systems, developmental psychology, psycholinguistics, and linguistic anthropology to address the following question: What kinds of dependencies exist among language-related systems, and how do we define and explain them in natural, causal terms?