4 research outputs found

    Speakers' cognitive representations of gender and number morphology shape cross-linguistic tendencies in morpheme order

    Languages exhibit a tremendous amount of variation in how they organise and order morphemes within words; however, regularities are also found. For example, gender and number inflectional morphology tend to appear together within a single affix, and when they appear in two separate affixes, gender marking tends to be placed closer to the stem than number. Formal theories of gender and number have been designed (in part) to explain these tendencies. However, determining whether the abstract representations hypothesised by these theories indeed drive the patterns we find cross-linguistically is difficult, if not impossible, based on the natural language data alone. In this study we use an artificial language learning paradigm to test whether the inferences learners make about the order of gender and number affixes—in the absence of any explicit information in the input—accord with formal theories of how they are represented. We test two different populations, English and Italian speakers, with substantially different gender systems in their first language. Our results suggest a clear preference for placing gender closest to the noun across these populations, across different types of gender systems, and across prefixing and suffixing morphology. These results expand the range of behavioural evidence for the role of cognitive representations in determining morpheme order.

    Rational After All: Changes in Probability Matching Behaviour Across Time in Humans and Monkeys

    Probability matching—where subjects given probabilistic input respond in a way that is proportional to those input probabilities—has long been thought to be characteristic of primate performance in probability learning tasks in a variety of contexts, from decision making to the learning of linguistic variation in humans. However, such behaviour is puzzling because it is not optimal in a decision theoretic sense; the optimal strategy is to always select the alternative with the highest positive-outcome probability, known as maximising (in decision making) or regularising (in linguistic tasks). While the tendency to probability match seems to depend somewhat on the participants and the task (i.e., infants are less likely to probability match than adults, monkeys probability match less than humans, and probability matching is less likely in linguistic tasks), existing studies suffer from a range of deficiencies which make it difficult to robustly assess these differences. In this project we present a series of experiments which systematically test the development of probability matching behaviour over time in simple decision making tasks, across species (humans and Guinea baboons), task complexity, and task domain (linguistic vs non-linguistic).
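    The decision-theoretic point above can be made concrete with a small simulation. This is an illustrative sketch, not the authors' experimental code: it assumes a two-alternative task where option A pays off with probability p, and compares the expected accuracy of probability matching (choose A on a fraction p of trials) against maximising (always choose A). With p = 0.7, matching earns a reward on roughly p² + (1−p)² = 58% of trials, while maximising earns one on roughly 70%.

```python
import random

def run_trials(strategy, p=0.7, n=100_000, seed=0):
    """Simulate a two-alternative probability learning task.

    Option A pays off with probability p, option B with 1 - p.
    Returns the fraction of rewarded trials under the given strategy.
    """
    rng = random.Random(seed)
    rewarded = 0
    for _ in range(n):
        if strategy == "match":
            # probability matching: choose A proportionally to p
            choose_a = rng.random() < p
        else:
            # maximising: always pick the higher-probability option
            choose_a = True
        a_pays = rng.random() < p  # which option pays off this trial
        rewarded += a_pays if choose_a else (not a_pays)
    return rewarded / n

match_acc = run_trials("match")      # ~0.58 for p = 0.7
maximise_acc = run_trials("maximise")  # ~0.70 for p = 0.7
```

    The gap between the two strategies is what makes matching behaviour "puzzling": any learner who has inferred p correctly would do strictly better by maximising.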

    A Naturalness Gradient Shapes the Learnability and Cross-Linguistic Distribution of Morphological Paradigms

    As efficient systems of communication, languages are usually expected to map meanings to forms in a one-to-one way, using for example the same affix form (e.g., -s in English) every time a particular meaning is intended (e.g., plural number), and placing affixes with the same meaning consistently in the same position (e.g., always suffixal). Forms and positional rules extending over contexts with a common meaning (e.g., plural in 1PL, 2PL, 3PL) are thus considered natural, and those extending over contexts with no consistent common meaning (e.g., 1PL and 3SG) are considered unnatural. Natural patterns are most common cross-linguistically, and most learnable in experiments; however, little is yet known about differences between unnatural classes. In this study we explore syncretism (i.e., use of the same form in different functions) and affix position in the domain of person and number agreement in verbs, both cross-linguistically and in artificial language learning experiments. Results from the two approaches and both phenomena converge in finding a gradient of (un)naturalness. Rather than a dichotomous natural/unnatural distinction, we found that both cross-linguistic frequency and learnability are proportional to the amount of shared feature values among the contexts requiring the same form or position. We argue that a cognitive bias towards similarity-based structure explains our experimental results and could be driving the patterns observed in natural languages.