6,793 research outputs found

    Non-adjacent dependency learning in infancy, and its link to language development

    To acquire language, infants must learn to identify words and linguistic structure in speech. Statistical learning has been suggested to assist both of these tasks. However, infants’ capacity to use statistics to discover words and structure together remains unclear. Further, it is not yet known how infants’ statistical learning ability relates to their language development. We trained 17-month-old infants on an artificial language comprising non-adjacent dependencies, and examined their looking times on tasks assessing sensitivity to words and structure using an eye-tracked head-turn-preference paradigm. We measured infants’ vocabulary size using a Communicative Development Inventory (CDI) concurrently and at 19, 21, 24, 25, 27, and 30 months to relate performance to language development. Infants could segment the words from speech, demonstrated by a significant difference in looking times to words versus part-words. Infants’ segmentation performance was significantly related to their vocabulary size (receptive and expressive) both concurrently and over time (receptive until 24 months, expressive until 30 months), but was not related to the rate of vocabulary growth. The data also suggest infants may have developed sensitivity to generalised structure, indicating that similar statistical learning mechanisms may contribute to the discovery of words and structure in speech, but this was not related to vocabulary size.

    Domain-specificity in the Acquisition of Non-adjacent Dependencies

    At the forefront of investigations into the cognitive underpinnings of language acquisition is the question of domain-specificity, i.e. whether the processes involved in learning language are unique to language. Recent investigations suggest that the mechanisms employed in language learning are also involved in sequential learning of non-linguistic stimuli and are therefore domain-general. Non-adjacent dependencies are an important feature of natural languages. They describe relationships between two elements separated by an arbitrary number of intervening items, and thus potentially pose a challenge for learners. As a hallmark of natural languages they are ubiquitous, an example from English being subject-verb agreement: The socks on the floor are red. Here, learners must track the dependency between the two underlined elements across an intervening prepositional phrase. Importantly, it has been shown that non-adjacent dependencies can be learned in the linguistic (Gómez, 2002) and non-linguistic (Creel, Newport & Aslin, 2004) domain. The majority of work presented in this thesis is based on Gómez’s (2002) artificial language learning experiment involving non-adjacent dependencies, adapted to directly compare adults’ learning in the linguistic and non-linguistic domain, in order to build a comprehensive map showing factors and conditions that enhance/inhibit the learnability of non-adjacencies. Experiment 1 shows that the Gestalt Principle of Similarity is not a requirement for the detection of non-adjacent dependencies in the linguistic domain. Experiment 2 aims to explore the robustness of the ability to track non-adjacent regularities between linguistic elements by removing cues that indicate the correct level of analysis (i.e. inter-word breaks).
Experiments 3 and 4 study domain-specificity in the acquisition of non-adjacencies, and show that non-adjacent dependencies are learnable in the linguistic and non-linguistic domain, provided that the non-linguistic materials are simple and lack internal structure. However, language is rich in internal structure: it is combinatorial on the phonemic/orthographic level in that it recombines elements (phonemes/graphemes) to form larger units. When exposed to non-linguistic stimuli which capture this componential character of language, adult participants fail to detect the non-adjacencies. However, when exposed to non-componential non-linguistic materials, adult participants succeed in learning the non-adjacent dependencies. Experiment 5 looks at modality effects in the acquisition of non-adjacent dependencies across the linguistic and non-linguistic domain. Experiment 6 provides evidence that high familiarity with componential non-linguistic patterns does not result in the correct extraction of non-adjacencies in sequence learning tasks involving these patterns. Overall, the work presented here demonstrates that the acquisition of non-adjacent dependencies is a domain-general ability, which is guided by stimulus simplicity.
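The aXb design behind these experiments can be sketched in a few lines of Python. This is a minimal illustration only: the frame and middle pseudo-words below are hypothetical placeholders, not Gómez’s actual stimuli, and real conditions manipulate the size of the middle-element set.

```python
import random

# Hypothetical frame/dependent pairs: each frame word a_i predicts its
# dependent b_i across a variable middle element X, yielding a_i X b_i strings.
FRAMES = {"pel": "rud", "vot": "jic", "dak": "tood"}
MIDDLES = ["wadim", "kicey", "puser", "fengle"]  # set size varies across conditions

def grammatical_string(rng=random):
    """Sample one grammatical a_i X b_i training string."""
    a = rng.choice(list(FRAMES))
    return (a, rng.choice(MIDDLES), FRAMES[a])

def is_grammatical(triplet):
    """True if the final element honours the non-adjacent dependency."""
    a, _, b = triplet
    return FRAMES.get(a) == b
```

A test phase then contrasts grammatical strings with foils such as `("pel", "wadim", "jic")`, where the final element violates the dependency while every adjacent transition remains familiar.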

    Non-adjacent dependency learning: development, domain differences, and memory

    Children learn their first language simply by listening to the linguistic utterances provided by their caregivers and other speakers around them. In order to extract meaning and grammatical rules from these utterances, children must track regularities in the input, which are omnipresent in language. The ability to discover and adapt to these statistical regularities in the input is termed statistical learning and has been suggested to be one of the key mechanisms underlying language acquisition. In this thesis, I investigated a special case of statistical learning, non-adjacent dependency (NAD) learning. NADs are grammatical dependencies between distant elements in an utterance, such as is and -ing in the sentence Mary is walking. I examined which factors play a role in the development of NAD learning by illuminating this process from different standpoints: the first study compares NAD learning in the linguistic and the non-linguistic domain during the earliest stages of development, at 4 months of age. This study suggests that at this age, NAD learning seems to be domain-specific to language. The second study puts a spotlight on the development of NAD learning in the linguistic domain and proposes that there may be a sensitive period for linguistic NAD learning during early childhood. Finally, the third study shows that children can not only recall newly learned NADs in a test immediately following familiarization, but also recall them after a retention period, which is critical to show more long-term learning. Overall, the findings in this thesis further illuminate how NADs, as a window into language acquisition, are learned, stored in memory, and recalled.

    Simultaneous online tracking of adjacent and nonadjacent dependencies in statistical learning


    Word learning in the first year of life

    In the first part of this thesis, we ask whether 4-month-old infants can represent objects and movements after a short exposure in such a way that they recognize either a repeated object or a repeated movement when they are presented simultaneously with a new object or a new movement. If they do, we ask whether the way they observe the visual input is modified when auditory input is presented. We investigate whether infants react to the familiarization labels and to novel labels in the same manner. If the labels as well as the referents are matched for saliency, any difference should be due to processes that are not limited to sensory perception. We hypothesize that infants will, if they map words to the objects or movements, change their looking behavior whenever they hear a familiar label, a novel label, or no label at all. In the second part of this thesis, we approach the problem of word learning from a different perspective. If infants reason about possible label-referent pairs and are able to make inferences about novel pairs, are the same processes involved in all intermodal learning? We compared the task of learning to associate auditory regularities to visual stimuli (reinforcers), and the word-learning task. We hypothesized that even if infants succeed in learning more than one label during one single event, learning the intermodal connection between auditory and visual regularities might present a more demanding task for them. The third part of this thesis addresses the role of associative learning in word learning. In recent decades, it has repeatedly been suggested that co-occurrence probabilities can play an important role in word segmentation.
However, the vast majority of studies test infants with artificial streams that do not resemble a natural input: most studies use words of equal length and with unambiguous syllable sequences within words, where the only point of variability is at the word boundaries (Aslin et al., 1998; Saffran, Johnson, Aslin, & Newport, 1999; Saffran et al., 1996; Thiessen et al., 2005; Thiessen & Saffran, 2003). Even if the input is modified to resemble the natural input more faithfully, the words with which infants are tested are always unambiguous – within words, each syllable predicts its adjacent syllable with a probability of 1.0 (Pelucchi, Hay, & Saffran, 2009; Thiessen et al., 2005). We therefore tested 6-month-old infants with such statistically ambiguous words. Before doing that, we also verified on a large sample of languages whether statistical information in the natural input, where the majority of the words are statistically ambiguous, is indeed useful for segmenting words. Our motivation was partly due to the fact that studies that modeled the segmentation process with a natural language input often yielded mixed results about the usefulness of such computation (Batchelder, 2002; Gambell & Yang, 2006; Swingley, 2005). We conclude this introduction with a small remark about the term word. It will be used throughout this thesis without questioning its descriptive value: the common-sense meaning of the term word is unambiguous enough, since all people know what we are referring to when we say or think of the term word. However, the term word is not unambiguous at all (Di Sciullo & Williams, 1987). To mention only some of the classical examples: (1) Do jump and jumped, or go and went, count as one word or as two? This example might seem all too trivial, especially in languages with weak overt morphology such as English, but in some languages, each basic form of the word has tens of inflected variants.
(2) A similar question arises with all the words that are morphological derivations of other words, such as evict and eviction, examine and reexamine, unhappy and happily, and so on. (3) And finally, each language contains many phrases and idioms: Do air conditioner and give up count as one word, or two? Statistical word segmentation studies in general neglect the issue of the definition of words, assuming that phrases and idioms have strong internal statistics and will therefore be selected as one word (Cutler, 2012). But because compounds or phrases are usually composed of smaller meaningful chunks, it is unclear how infants would extract these smaller units of speech if they were using predominantly statistical information. We will address the problem of over-segmentation briefly in the third part of the thesis.
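The co-occurrence computation these segmentation studies rely on can be made concrete with a short sketch: estimate forward transitional probabilities over syllable bigrams, then posit a word boundary wherever the probability dips. The toy syllabified corpus and the boundary threshold below are illustrative assumptions, not any study’s actual materials.

```python
from collections import defaultdict

def transitional_probabilities(utterances):
    """Estimate forward transitional probabilities P(next | current)
    over syllable bigrams in a list of syllabified utterances."""
    bigram_counts = defaultdict(int)
    unigram_counts = defaultdict(int)
    for utterance in utterances:
        for a, b in zip(utterance, utterance[1:]):
            bigram_counts[(a, b)] += 1
            unigram_counts[a] += 1
    return {(a, b): n / unigram_counts[a] for (a, b), n in bigram_counts.items()}

def segment(utterance, tps, threshold=0.5):
    """Place a word boundary wherever the transitional probability
    dips below the threshold."""
    words, current = [], [utterance[0]]
    for a, b in zip(utterance, utterance[1:]):
        if tps.get((a, b), 0.0) < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words
```

In a corpus where "pre" is always followed by "ty" but "ty" is followed by several different syllables, the within-word transition gets a probability of 1.0 and the word boundary falls after "ty" — the unambiguous case the abstract describes; statistically ambiguous words are exactly those whose within-word transitions fall below 1.0.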

    Auditory and visual sequence learning in humans and monkeys using an artificial grammar learning paradigm

    Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans.

    Linguistic and non-linguistic non-adjacent dependency learning in early development

    Non-adjacent dependencies (NADs) are important building blocks for language and extracting them from the input is a fundamental part of language acquisition. Prior event-related potential (ERP) studies revealed changes in the neural signature of NAD learning between infancy and adulthood, suggesting a developmental shift in the learning route for NADs. The present study aimed to specify which brain regions are involved in this developmental shift and whether this shift extends to NAD learning in the non-linguistic domain. In two experiments, 2- and 3-year-old German-learning children were familiarized with either Italian sentences or tone sequences containing NADs and subsequently tested with NAD violations, while functional near-infrared spectroscopy (fNIRS) data were recorded. Results showed increased hemodynamic responses related to the detection of linguistic NAD violations in the left temporal, inferior frontal, and parietal regions in 2-year-old children, but not in 3-year-old children. A different developmental trajectory was found for non-linguistic NADs, where 3-year-old, but not 2-year-old children showed evidence for the detection of non-linguistic NAD violations. These results confirm a developmental shift in the NAD learning route and point to distinct mechanisms underlying NAD learning in the linguistic and the non-linguistic domain.

    Brain‑correlates of processing local dependencies within a statistical learning paradigm

    Statistical learning refers to the implicit mechanism of extracting regularities in our environment. Numerous studies have investigated the neural basis of statistical learning. However, how the brain responds to violations of auditory regularities based on prior (implicit) learning requires further investigation. Here, we used functional magnetic resonance imaging (fMRI) to investigate the neural correlates of processing events that are irregular based on learned local dependencies. A stream of consecutive sound triplets was presented. Unbeknownst to the subjects, triplets were either (a) standard, namely triplets ending with a high-probability sound, or (b) statistical deviants, namely triplets ending with a low-probability sound. Participants (n = 33) underwent a learning phase outside the scanner followed by an fMRI session. Processing of statistical deviants activated a set of regions encompassing the superior temporal gyrus bilaterally, the right deep frontal operculum including lateral orbitofrontal cortex, and the right premotor cortex. Our results demonstrate that the violation of local dependencies within a statistical learning paradigm engages not only sensory processes but is reminiscent of the activation pattern during the processing of local syntactic structures in music and language, reflecting the online adaptations required for predictive coding in the context of statistical learning.
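The standard/deviant triplet logic of such paradigms can be illustrated with a minimal sketch. The element names, the deviant rate, and the function below are hypothetical illustrations, not the study’s actual stimulus parameters.

```python
import random

def make_stream(n_triplets, p_deviant=0.1, rng=random):
    """Build a stream of A-B-? triplets: the final element is the
    high-probability ending ("C", a standard) most of the time and a
    low-probability ending ("X", a statistical deviant) otherwise."""
    stream, labels = [], []
    for _ in range(n_triplets):
        if rng.random() < p_deviant:
            stream += ["A", "B", "X"]   # statistical deviant
            labels.append("deviant")
        else:
            stream += ["A", "B", "C"]   # standard triplet
            labels.append("standard")
    return stream, labels
```

Because the transitions A→B and B→C are learned during familiarization, a rare B→X transition violates the learned local dependency even though X itself is an otherwise familiar sound.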