
    Enhanced Syllable Discrimination Thresholds in Musicians

    Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, and carry implications for the translation of musical training into long-term linguistic abilities. This research was funded by the Grammy Foundation and the William F. Milton Fund.

    Musical Expertise and Statistical Learning of Musical and Linguistic Structures

    Adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. Here we present a review of a series of electrophysiological studies investigating (1) speech segmentation resulting from exposure to spoken and sung sequences; (2) the extraction of linguistic versus musical information from a sung sequence; and (3) differences between musicians and non-musicians in both linguistic and musical dimensions. The results show that segmentation is better after exposure to sung compared with spoken material and, moreover, that the linguistic structure is better learned than the musical structure when using sung material. In addition, musical expertise facilitates the learning of both linguistic and musical structures. Finally, an electrophysiological approach, which directly measures brain activity, appears to be more sensitive than a behavioral one.

    Music contact and language contact: A proposal for comparative research

    The concept of convergence, from the study of language contact, provides a model for better understanding interactions between cognitive systems of the same type (for example, in bilingualism, subsystem instantiations of the same kind of knowledge representation and its associated processing mechanisms). For a number of reasons, musical ability is the domain that allows for the most interesting comparisons and contrasts with language in this area of research. Both cross-language and cross-musical-idiom interactions show a vast array of different kinds of mutual influence, all of which are highly productive, ranging from so-called transfer effects to total replacement (attrition of the replaced subsystem). The study of music contact should also help investigators conceptualize potential structural parallels between separate mental faculties, most importantly, it would seem, between those that appear to share component competence and processing modules. The first part of the proposal is to determine whether the comparison between the two kinds of convergence (in language and in music) is a useful way of thinking about how properties of each system are similar, analogous, different, and so forth. This leads to a more general discussion about the design features of mental faculties and what might define them "narrowly," for example.

    Neurophysiological Influence of Musical Training on Speech Perception

    Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one's ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss (HL), who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skills acquired through musical training for specific acoustical processes may transfer to, and thereby improve, speech perception. The neurophysiological mechanisms underlying the influence of musical training on speech processing, and the extent of this influence, remain a rich area to be explored. A prerequisite for such transfer is the facilitation of greater neurophysiological overlap between speech and music processing following musical training. This review first establishes a neurophysiological link between musical training and speech perception, and subsequently offers further hypotheses on the neurophysiological implications of musical training for speech perception in adverse acoustical environments and in individuals with HL.

    Enhanced linguistic prosodic skills in musically trained individuals with Williams syndrome

    Individuals with Williams syndrome (WS) present prosodic impairments. They are also interested in musical activities. In typical development, a body of research has shown that the linguistic prosodic skills of musically trained individuals are enhanced. However, it is not known whether, in WS, musical training is also associated with enhanced prosodic performance, a question this study sought to answer. We compared performance on linguistic prosodic tasks among seven musically trained and fourteen musically untrained individuals with WS, and typically developing peers. Among those with WS, musically trained participants outperformed their musically untrained counterparts on the perception of acoustic parameters involved in prosody, the understanding of interrogative and declarative intonation, and the comprehension of prefinal contrastive stress. The results suggest that musical training facilitates prosodic performance in WS. Our findings also suggest common processing mechanisms for acoustic parameters involved in both prosody and music, and that positive music-to-language transfer effects could take place in WS. We discuss the implications of these results for intervention purposes. This research was funded by grant AP2003-5098 from the Ministry of Education and Science of the Spanish Government. The manuscript was proofread thanks to funds from the Department of Developmental and Educational Psychology (UNED).

    Transfer of Training between Music and Speech: Common Processing, Attention, and Memory

    After a brief historical perspective on the relationship between language and music, we review our work on transfer of training from music to speech, which aimed at testing the general hypothesis that musicians should be more sensitive than non-musicians to speech sounds. In light of recent results in the literature, we argue that when long-term experience in one domain influences acoustic processing in the other domain, results can be interpreted as evidence of common acoustic processing. But when long-term experience in one domain influences the building-up of abstract and specific percepts in another domain, results are taken as evidence for transfer-of-training effects. Moreover, we discuss the influence of attention and working memory on transfer effects, and we highlight the usefulness of the event-related potentials method for disentangling the different processes that unfold in the course of music and speech perception. Finally, we give an overview of an ongoing longitudinal project with children aimed at testing transfer effects from music to different levels and aspects of speech processing.

    The impact of making music on aural perception and language skills: A research synthesis

    This paper provides a synthesis of research on the relationship between music and language, drawing on evidence from neuroscience, psychology, sociology, and education. It sets out why it has become necessary to justify the role of music in the school curriculum and summarizes the different methodologies adopted by researchers in the field. It considers research exploring the way that music and language are processed, including differences and commonalities; addresses the relative importance of genetics versus the length of time committed to, and spent, making music; discusses theories of modularity and sensitive periods; sets out the OPERA hypothesis; critically evaluates research comparing musicians with non-musicians; and presents detailed accounts of intervention studies with children, including those from deprived backgrounds, taking account of the nature of the musical training. It concludes that making music has a major impact on the development of language skills.

    Theta Coherence Asymmetry In The Dorsal Stream Of Musicians Facilitates Word Learning

    Word learning constitutes a human faculty that depends on two anatomically distinct processing streams projecting from posterior superior temporal (pST) and inferior parietal (IP) brain regions toward the prefrontal cortex (dorsal stream) and the temporal pole (ventral stream). The ventral stream is involved in mapping sensory and phonological information onto lexical-semantic representations, whereas the dorsal stream contributes to sound-to-motor mapping, articulation, complex sequencing in the verbal domain, and to how verbal information is encoded, stored, and rehearsed in memory. In the present source-based EEG study, we evaluated functional connectivity between the IP lobe and Broca's area while musicians and non-musicians learned pseudowords presented in the form of concatenated auditory streams. Behavioral results demonstrated that musicians outperformed non-musicians, as reflected by a higher sensitivity index (d'). This behavioral superiority was paralleled by increased left-hemispheric theta coherence in the dorsal stream, whereas non-musicians showed stronger functional connectivity in the right hemisphere. Since no between-group differences were observed in either a passive listening control condition or during rest, the results point to a task-specific intertwining between musical expertise, functional connectivity, and word learning.

    The effect of music on development and its potential in pediatrics
