
    Infants segment words from songs - an EEG study

    Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.

    Hippocampal sclerosis affects fMR-adaptation of lyrics and melodies in songs

    Songs constitute a natural combination of lyrics and melodies, but it is unclear whether and how these two song components are integrated during the emergence of a memory trace. Network theories of memory suggest a prominent role of the hippocampus, together with unimodal sensory areas, in the build-up of conjunctive representations. The present study tested the modulatory influence of the hippocampus on neural adaptation to songs in lateral temporal areas. Patients with unilateral hippocampal sclerosis and healthy matched controls were presented with blocks of short songs in which lyrics and/or melodies were varied or repeated in a crossed factorial design. Neural adaptation effects were taken as correlates of incidentally emerging memory traces. We hypothesized that hippocampal lesions, particularly in the left hemisphere, would weaken adaptation effects, especially the integration of lyrics and melodies. Results revealed that, in patients with left hippocampal sclerosis, lateral temporal lobe regions showed weaker adaptation to repeated lyrics as well as a reduced interaction of the adaptation effects for lyrics and melodies. This suggests a deficient build-up of a sensory memory trace for lyrics and a reduced integration of lyrics with melodies, compared to healthy controls. Patients with right hippocampal sclerosis showed a similar profile of results, although the effects did not reach significance in this population. We highlight the finding that the integrated representation of lyrics and melodies typically shown in healthy participants is likely tied to the integrity of the left medial temporal lobe. This novel finding provides the first neuroimaging evidence for the role of the hippocampus during repetitive exposure to lyrics and melodies and their integration into a song.

    The Effect of musical mnemonics on learning and recall in preschool-aged children with developmental disabilities

    Spring 2015. Includes bibliographical references. The purpose of this study was to assess whether musical mnemonics rehearsal is more effective than verbal rehearsal for immediate and delayed recall of novel information in preschool-aged children with developmental delays. Forty 3- to 5-year-old children in a special education program were selected as participants through a prescreening process. Participants were randomly divided into two groups by a computerized randomizer: Group 1 received all input in spoken format and Group 2 received all input in sung format. All participants listened to a random, non-repetitive seven-digit number. Sung numbers matched the opening phrase of "Old MacDonald." For each trial, the researcher played the pre-recorded number five times. The number of correct consecutive digits was recorded at the end of each hearing, after a one-minute distraction, and following a five-minute delay. Because there was evidence of skew in the serial order recall results, serial scores were compared within and across groups using non-parametric statistical analysis. Results showed no significant difference between the music and non-music groups. Overall serial order recall scores were low, suggesting that the digit span was beyond the developmental capabilities of many of the participants. There were, however, significant effects of time and age. Paired comparisons showed significantly greater recall in Trial 4 versus Trial 1, and in Trial 5 versus delayed recall, suggesting both an increase in recall due to learning and a decrease in recall after the five-minute delay and distraction activity. Five-year-olds also performed significantly better than 3- to 4-year-olds on delayed absolute recall and immediate serial order recall. Future research suggestions are discussed.

    Incidental Vocabulary Learning through Listening to Songs

    Use of songs as a vehicle for language teaching and learning has become common practice (Medina, 1993). Nevertheless, there are no experimental studies examining the potential learning gains from songs. The present study investigates incidental learning of three vocabulary knowledge dimensions (spoken-form recognition, form-meaning connection, and collocation recognition) through listening to two songs. The effects of repeated listening to a single song (1, 3, or 5 times) and the relationship between frequency of exposure to the targeted vocabulary items and learning gains were also explored. Two multiple-choice tests (one for each song), each measuring the different dimensions of vocabulary knowledge, were used to evaluate learning. The results indicated that (a) listening to songs contributes to vocabulary learning, (b) repeated listening had a positive effect on vocabulary gains, and (c) frequency of exposure positively affected learning gains. The pedagogical implications are discussed in detail.

    “The Song of Words”: teaching multi-word units with songs

    The need to integrate songs into English Language Teaching (ELT) has been recognized on numerous occasions. Song lyrics host multi-word units which learners can reuse as building blocks in their English, thereby reducing language processing time and effort and improving their fluency as well as idiomaticity, thus bringing them closer to the native-speaker norm. We report on two studies of the effectiveness of using songs to teach multi-word units to high-school Polish learners of English. The same items were taught to two groups of EFL learners, but only one of the groups heard them in a song. Learners’ vocabulary recall was measured at three points in time relative to the teaching: before, immediately after, and a week after. The group taught with songs showed a significant recall advantage over the other group, especially when tested a week after teaching. The results suggest that songs can be an effective vehicle for teaching English multi-word units.

    Commentary on Schotanus, "Singing and Accompaniment Support the Processing of Song Lyrics and Change the Lyrics' Meaning"

    In this commentary, a number of problematic aspects of the studies presented in the target article are discussed. Suggestions are made for further analyses of some of the data and for additional experimental investigations.

    End-to-End Lyrics Recognition with Self-supervised Learning

    Lyrics recognition is an important task in music processing. Although traditional algorithms such as the hybrid HMM-TDNN model achieve good performance, studies applying end-to-end models and self-supervised learning (SSL) are limited. In this paper, we first establish an end-to-end baseline for lyrics recognition and then explore the performance of SSL models on the lyrics recognition task. We evaluate a variety of upstream SSL models trained with different methods (masked reconstruction, masked prediction, autoregressive reconstruction, and contrastive learning). Our end-to-end self-supervised models, evaluated on the DAMP music dataset, outperform the previous state-of-the-art (SOTA) system by 5.23% on the dev set and 2.4% on the test set, even without a language model trained on a large corpus. Moreover, we investigate the effect of background music on the performance of self-supervised learning models and conclude that the SSL models cannot extract features efficiently in the presence of background music. Finally, we study the out-of-domain generalization ability of the SSL features, given that those models were not trained on music datasets.
    Comment: 4 pages, 2 figures, 3 tables
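    The abstract lists contrastive learning among the SSL pre-training objectives compared. As a rough illustration only (not the paper's actual objective, and with all names hypothetical), a minimal NumPy sketch of an InfoNCE-style contrastive loss, where each anchor embedding should be closest to its own positive view, might look like:

    ```python
    import numpy as np

    def info_nce_loss(anchors, positives, temperature=0.1):
        """InfoNCE-style contrastive loss: anchor i should match
        positives[i] among all positives in the batch."""
        # L2-normalize so dot products are cosine similarities
        a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
        p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
        logits = a @ p.T / temperature               # (N, N) similarity matrix
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        # the "correct" positive for anchor i sits on the diagonal
        return -np.mean(np.diag(log_probs))

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(8, 16))
    noise = 0.05 * rng.normal(size=(8, 16))
    # matched pairs (two "views" of the same frame) give a low loss
    loss_matched = info_nce_loss(feats, feats + noise)
    # rolling the positives misaligns every pair and raises the loss
    loss_mismatched = info_nce_loss(feats, np.roll(feats + noise, 1, axis=0))
    print(loss_matched < loss_mismatched)  # expected: True
    ```

    Minimizing this loss pulls embeddings of two views of the same audio frame together while pushing apart embeddings of different frames, which is the general idea behind contrastive SSL pre-training.
    
    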

    Infants' perception of sound patterns in oral language play
