The Role of Music-Specific Representations When Processing Speech: Using a Musical Illusion to Elucidate Domain-Specific and -General Processes
When listening to music and language, it is unclear whether adults recruit domain-specific or domain-general mechanisms to make sense of incoming sounds. Unique acoustic characteristics, such as a greater reliance on rapid temporal transitions in speech relative to song, may lead to misleading conclusions about shared and overlapping processes in the brain. By using a stimulus that is ecologically valid and can be perceived as either speech or song depending on context, the contributions of low- and high-level mechanisms can be teased apart. The stimuli employed in all experiments are auditory illusions in which speech transforms into song, as reported by Deutsch et al. (2003, 2011) and Tierney et al. (2012). The current experiments found that 1) non-musicians also perceive the speech-to-song illusion and experience a similar disruption of the transformation as a result of pitch transpositions; 2) the contribution of rhythmic regularity to the perceptual transformation from speech to song is unclear across several different examples of the auditory illusion, and clear order effects arise from the within-subjects design; and 3) when comparing pitch change sensitivity in a speech mode of listening and, after several repetitions, a song mode of listening, only the song mode indicated the recruitment of music-specific representations. Together, these studies indicate the potential for using the speech-to-song auditory illusion in future research. Finally, because acoustic characteristics were held constant, the last experiment tentatively demonstrates a behavioral dissociation between the recruitment of mechanisms unique to musical knowledge and mechanisms unique to the processing of acoustic characteristics predominant in speech or song.
Building Categories to Guide Behavior: How Humans Build and Use Auditory Category Knowledge Throughout the Lifespan
Although categorization has been studied in depth throughout development in the visual domain (e.g., Gelman & Meyer, 2011; Sloutsky, 2010), there is little evidence examining how children and adults categorize everyday auditory objects (e.g., dog barks, trains, song, speech) or how category knowledge affects the way children and adults listen to these sounds during development. In two separate studies, I examined how listeners of all ages differentiated the multidimensional acoustic categories of speech and song, and I determined whether listeners used category knowledge to process the sounds they encounter every day. In Experiment 1, listeners of all ages were able to categorize speech and song, and categorization ability increased with age. Four- and 6-year-olds were more susceptible to the musical acoustic characteristics of ambiguous speech excerpts than 8-year-olds and adults, but all ages relied on F0 stability and average syllable duration to differentiate speech and song. Finally, 4-year-olds who were better at categorizing speech and song also had higher vocabulary scores, providing some of the first evidence that the ability to categorize speech and song may have cascading benefits for language development. Experiment 2 provided the first evidence that listeners of all ages show change deafness. However, change deafness did not differ with age, even though overall sensitivity for detecting changes increased with age. Children and adults made more errors for within-category changes compared to small acoustic changes, suggesting that all ages relied heavily on semantic category knowledge when detecting changes in complex scenes. These studies highlight the different roles that acoustic and semantic factors play when listeners are categorizing sounds compared to when they are using their knowledge to process sounds in complex scenes.
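The abstract above reports overall "sensitivity for detecting changes" without specifying the measure; a common choice in change-detection designs is signal-detection d′, sketched below under that assumption (the counts and the log-linear correction are illustrative, not taken from the study).

```python
# Illustrative sketch only: assumes sensitivity is quantified as d' from hits
# and false alarms in a change / no-change design, with a log-linear
# correction so extreme rates do not produce infinite z-scores.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Return d' = z(hit rate) - z(false-alarm rate)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one listener: 40 change trials, 40 no-change trials.
print(round(d_prime(hits=30, misses=10, false_alarms=8, correct_rejections=32), 2))
```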
Music as a scaffold for listening to speech: Better neural phase-locking to song than speech
Neural activity synchronizes with the rhythmic input of many environmental signals, but the capacity of neural activity to entrain to the slow rhythms of speech is particularly important for successful communication. Compared to speech, song has greater rhythmic regularity, a more stable fundamental frequency, discrete pitch movements, and a metrical structure; these features may provide a temporal framework that helps listeners neurally track information better than the less regular rhythms of speech allow. The current study used EEG to examine whether entrainment to the syllable rate of linguistic utterances, as indexed by cerebro-acoustic phase coherence, was greater when listeners heard sung rather than spoken sentences. We assessed listeners' phase-locking in both easy (no time compression) and hard (50% time-compressed) utterance conditions. Adults phase-locked equally well to speech and song in the easy listening condition. However, in the time-compressed condition, phase-locking was greater for sung than for spoken utterances in the theta band (3.67–5 Hz). Thus, the musical temporal and spectral characteristics of song were related to better phase-locking to the slow phrasal and syllabic information (4–7 Hz) in the speech stream. These results highlight the possibility of using song as a tool for improving speech processing in individuals with language processing deficits, such as dyslexia.
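For readers unfamiliar with the measure, the sketch below shows one simplified way cerebro-acoustic phase coherence can be computed between an EEG channel and a speech amplitude envelope. The 3.67–5 Hz band follows the abstract, but the filter settings and the phase-locking-value style statistic are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of cerebro-acoustic phase coherence, assuming an EEG trace and
# a speech amplitude envelope already sampled at the same rate. Band edges
# follow the abstract (theta, 3.67-5 Hz); everything else is illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass_phase(signal, fs, lo=3.67, hi=5.0, order=4):
    """Band-pass a 1-D signal and return its instantaneous phase."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.angle(hilbert(filtered))

def cerebro_acoustic_coherence(eeg, envelope, fs):
    """Phase coherence between an EEG channel and the speech envelope.
    Returns a value in [0, 1]; 1 means a perfectly consistent phase lag."""
    phase_diff = bandpass_phase(eeg, fs) - bandpass_phase(envelope, fs)
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Example with synthetic data: a 4 Hz "syllable rhythm" tracked by noisy EEG.
fs = 250                      # Hz, a common EEG sampling rate
t = np.arange(0, 10, 1 / fs)  # one 10-second trial
envelope = np.sin(2 * np.pi * 4 * t)
eeg = envelope + 0.8 * np.random.randn(t.size)
print(cerebro_acoustic_coherence(eeg, envelope, fs))
```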
Linking prenatal experience to the emerging musical mind
The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins in the mother's womb, during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that both externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.
Familiarity modulates neural tracking of sung and spoken utterances
Music is often described in the laboratory and in the classroom as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially the benefits related to familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but showed no effect of familiar melody when stimuli were grouped by their assigned (trained) familiarity. However, when participants' subjective ratings of perceived familiarity were used to group stimuli, a large effect of familiarity was observed. This effect was not specific to song, as it was observed for both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond the acoustic features of music alone, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.
Acoustic and Semantic Processing of Auditory Scenes in Children with Autism Spectrum Disorders
Purpose: Processing real-world sounds requires acoustic and higher-order semantic information. We tested the theory that individuals with autism spectrum disorder (ASD) show enhanced processing of acoustic features and impaired processing of semantic information.
Methods: We used a change deafness task, which required detecting when speech and non-speech auditory objects were replaced, and a speech-in-noise task, in which spoken sentences had to be comprehended in the presence of background speech, to examine the extent to which 7- to 15-year-old children with ASD (n=27) rely on acoustic and semantic information, compared to age-matched (n=27) and IQ-matched (n=27) groups of typically developing (TD) children. Within a larger group of 7- to 15-year-old TD children (n=105), we correlated IQ, ASD symptoms, and the use of acoustic and semantic information.
Results: Children with ASD performed worse overall on the change deafness task relative to the age-matched TD controls, but they did not differ from IQ-matched controls. All groups used acoustic and semantic information similarly and displayed an attentional bias towards changes that involved the human voice. Similarly, for the speech-in-noise task, age-matched, but not IQ-matched, TD controls performed better overall than the ASD group. However, all groups used semantic context to a similar degree. Among TD children, neither IQ nor the presence of ASD symptoms predicted the use of acoustic or semantic information.
Conclusion: Children with and without ASD used acoustic and semantic information similarly during auditory change deafness and speech-in-noise tasks. Furthermore, deficits on complex auditory tasks may be more related to IQ than to an ASD diagnosis per se.
Foundations of academic knowledge
This chapter assesses the acquisition of academic knowledge and skills in domains including literacy, numeracy, the sciences, the arts, and physical education. It examines how learning trajectories arise from complex interactions between individual brain development and sociocultural environments. Teaching literacy and numeracy to all students is a goal of most school systems. While there are some fundamental skills children should grasp to succeed in these domains, the best way to support each student's learning varies depending on their individual development, language, culture, and prior knowledge. Here we explore considerations for instruction and assessment in different academic domains. To accommodate the flourishing of all children, flexibility must be built into education systems, which need to acknowledge the diverse ways in which children can progress through learning trajectories and demonstrate their knowledge.
Globally, songs and instrumental melodies are slower, higher, and use more stable pitches than speech: a registered report
Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: 1) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors, who speak 55 languages; and 2) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six pre-registered predictions, five were strongly supported: relative to speech, songs used 1) higher pitch, 2) a slower temporal rate, and 3) more stable pitches, while songs and speech were similar in 4) pitch interval size and 5) timbral brightness. Exploratory analyses suggest that these features vary along a “musi-linguistic” continuum when instrumental melodies and recited lyrics are included. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.
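As a rough illustration of the kinds of acoustic features compared above (pitch height, pitch stability, temporal rate), the sketch below extracts simple proxies from a single recording. It assumes the librosa library and a placeholder file name; the registered report's actual features are defined over matched recordings and expert annotation, so this is a conceptual sketch only.

```python
# Illustrative feature extraction in the spirit of the song/speech comparison
# described above. "recording.wav" is a placeholder path; the proxies below
# (median f0, frame-to-frame pitch change, onset rate) are assumptions, not
# the registered report's feature definitions.
import numpy as np
import librosa

def describe_recording(path):
    y, sr = librosa.load(path, sr=None, mono=True)

    # Fundamental frequency track (pYIN); unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]
    semitones = 12 * np.log2(f0 / 440.0)  # pitch in semitones relative to A4

    # Pitch-stability proxy: mean absolute frame-to-frame pitch change.
    instability = np.mean(np.abs(np.diff(semitones)))

    # Temporal-rate proxy: acoustic onsets per second (not true syllable rate).
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    rate = len(onsets) / (len(y) / sr)

    return {"median_f0_hz": float(np.median(f0)),
            "pitch_instability_semitones": float(instability),
            "onset_rate_hz": float(rate)}

# Expectation under the reported pattern: song should show a higher median f0,
# lower pitch instability, and a slower onset rate than speech from the same speaker.
# print(describe_recording("recording.wav"))
```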