    Statistically based chunking of nonadjacent dependencies.

    How individuals learn complex regularities in the environment and generalize them to new instances is a key question in cognitive science. Although previous investigations have advocated the idea that learning and generalizing depend on separate processes, the same basic learning mechanisms may account for both. In language learning experiments, these mechanisms have typically been studied in isolation from broader cognitive phenomena such as memory, perception, and attention. Here, we show how learning and generalization in language are embedded in these broader theories by testing learners on their ability to chunk nonadjacent dependencies: a key structure in language, but a challenge to theories that posit learning through the memorization of structure. In two studies, adult participants were trained and tested on an artificial language containing nonadjacent syllable dependencies, using a novel chunking-based serial recall task involving verbal repetition of target sequences (formed from learned strings) and scrambled foils. Participants recalled significantly more syllables, bigrams, trigrams, and nonadjacent dependencies from sequences conforming to the language's statistics (both learned and generalized sequences). They also encoded and generalized specific nonadjacent chunk information. These results suggest that participants chunk remote dependencies and rapidly generalize this information to novel structures. The results thus provide further support for learning-based approaches to language acquisition, and link statistical learning to broader cognitive mechanisms of memory.
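    The statistics in question are nonadjacent transitional probabilities: the probability of a final syllable given an initial syllable, with a variable element in between (A-X-B frames). As a rough illustration of the kind of computation involved, the sketch below estimates such probabilities from a toy corpus; the syllables and strings are invented for illustration and are not the study's actual stimuli.

        from collections import Counter

        # Toy artificial language: each string is an A-X-B frame in which
        # the first and third syllables co-occur reliably while the middle
        # syllable varies. Invented for illustration only.
        strings = [
            ("pel", "wadim", "rud"), ("pel", "kicey", "rud"),
            ("vot", "wadim", "jic"), ("vot", "kicey", "jic"),
        ]

        # Count A...B co-occurrences and occurrences of each A syllable.
        pair_counts = Counter((a, b) for a, _, b in strings)
        a_counts = Counter(a for a, _, _ in strings)

        def nonadjacent_tp(a, b):
            """Nonadjacent transitional probability P(B | A), skipping X."""
            return pair_counts[(a, b)] / a_counts[a] if a_counts[a] else 0.0

        print(nonadjacent_tp("pel", "rud"))  # 1.0: 'rud' always completes 'pel'
        print(nonadjacent_tp("pel", "jic"))  # 0.0: 'jic' never completes 'pel'

    A learner sensitive to these probabilities can distinguish conforming sequences from scrambled foils even when the intervening material is novel.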

    Statistical learning as chunking: Domain general computations in language acquisition

    Understanding the computations involved in language acquisition is a central topic in cognitive science. This dissertation presents four empirical papers that investigate the role of domain-general cognitive processes in the learning of linguistic structure. The first paper describes the contribution of chunking, a basic memory process, to the phenomenon known as statistical learning: learners' ability to leverage the regularities present in the environment to form concrete representations of the input, such as finding the words in speech. The second paper extends these findings by showing how chunking can also account for the statistical learning and generalization of nonadjacent dependencies, a key feature of many linguistic systems. The third paper demonstrates that individual differences in statistically based chunking of artificial language statistics significantly predict sensitivity to comparable statistical structures in natural language. The final paper presents a meta-analysis of nearly 500 peer-reviewed studies on statistical learning in infants, children, and adults, tests its utility across different language properties, and proposes several methodological considerations that may benefit future experimentation. Together, these studies highlight the fundamental contribution of basic, domain-general computations to language, and how they may even shape the evolution of linguistic structure over time.
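    The chunking process invoked in the first two papers can be caricatured as the incremental fusion of frequently co-occurring units into larger units. The sketch below is a minimal, greedy version of that idea applied to word segmentation (finding words in an unsegmented syllable stream); it illustrates the general computation rather than the dissertation's actual model, and the syllable stream is invented.

        from collections import Counter

        def chunk_stream(syllables, passes=3):
            """Greedy chunking sketch: repeatedly fuse the most frequent
            recurring adjacent pair of units into a single chunk."""
            units = list(syllables)
            for _ in range(passes):
                pairs = Counter(zip(units, units[1:]))
                if not pairs:
                    break
                (a, b), count = pairs.most_common(1)[0]
                if count < 2:  # stop once no adjacent pair recurs
                    break
                merged, i = [], 0
                while i < len(units):
                    if i + 1 < len(units) and (units[i], units[i + 1]) == (a, b):
                        merged.append(a + b)  # fuse the pair into one chunk
                        i += 2
                    else:
                        merged.append(units[i])
                        i += 1
                units = merged
            return units

        # Unsegmented toy stream in which "tibu" and "gola" recur as words.
        stream = "ti bu go la ti bu da ro go la ti bu".split()
        print(chunk_stream(stream))  # recurring syllable pairs fuse into chunks

    Models in this family differ in how chunks compete, decay, and recombine; the single greedy merge rule here is only the simplest possible variant.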

    Stimuli

    Prediction of Probabilistic Sequences (POPS)
