    Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation

    Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch.
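
    The iterated rippled noise named here is conventionally generated with a delay-and-add loop over a noise carrier, yielding a pitch near 1/delay; a minimal sketch of that construction (the delay, iteration count, gain, and sample rate below are illustrative assumptions, and the study's additional high-pass filtering is omitted):

```python
import numpy as np

def iterated_rippled_noise(duration_s, delay_s, n_iterations, gain=1.0,
                           fs=44100, seed=0):
    """Delay-and-add IRN: each pass adds a copy of the signal delayed by
    `delay_s`, building temporal regularity with pitch ~ 1/delay_s."""
    rng = np.random.default_rng(seed)
    sig = rng.standard_normal(int(duration_s * fs))
    d = int(round(delay_s * fs))
    for _ in range(n_iterations):
        delayed = np.concatenate([np.zeros(d), sig[:-d]])
        sig = sig + gain * delayed
    return sig / np.max(np.abs(sig))  # normalize to +/-1

# e.g. a 500-ms IRN with a 4-ms delay -> pitch near 250 Hz
stim = iterated_rippled_noise(0.5, 0.004, n_iterations=16)
```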

    Extracting the Beat: An Experience-dependent Complex Integration of Multisensory Information Involving Multiple Levels of the Nervous System

    In a series of studies we have shown that movement (or vestibular stimulation) that is synchronized to every second or every third beat of a metrically ambiguous rhythm pattern biases people to perceive the meter as a march or as a waltz, respectively. Riggle (this volume) claims that we postulate an "innate", "specialized brain unit" for beat perception that is "directly" influenced by vestibular input. In fact, to the contrary, we argue that experience likely plays a large role in the development of rhythmic auditory-movement interactions, and that rhythmic processing in the brain is widely distributed and includes subcortical and cortical areas involved in sound processing and movement. Further, we argue that vestibular and auditory information are integrated at various subcortical and cortical levels along with input from other sensory modalities, and it is not clear which levels are most important for rhythm processing or, indeed, what a "direct" influence of vestibular input would mean. Finally, we argue that vestibular input to sound location mechanisms may be involved, but likely cannot explain the influence of vestibular input on the perception of auditory rhythm. This remains an empirical question for future research.

    Please don't stop the music: A meta-analysis of the cognitive and academic benefits of instrumental musical training in childhood and adolescence

    An extensive literature has investigated the impact of musical training on cognitive skills and academic achievement in children and adolescents. However, most of the studies have relied on cross-sectional designs, which makes it impossible to elucidate whether the observed differences are a consequence of the engagement in musical activities. Previous meta-analyses with longitudinal studies have also found inconsistent results, possibly due to their reliance on vague definitions of musical training. In addition, more evidence has appeared in recent years. The current meta-analysis investigates the impact of early programs that involve learning to play musical instruments on cognitive skills and academic achievement, as previous meta-analyses have not focused on this form of musical training. Following a systematic search, 34 independent samples of children and adolescents were included, with a total of 176 effect sizes and 5998 participants. All the studies had pre-post designs and at least one control group. Overall, we found a small but significant benefit (g_Δ = 0.26) with short-term programs, regardless of whether they were randomized or not. In addition, a small advantage at baseline was observed in studies with self-selection (g_pre = 0.28), indicating that participants who had the opportunity to select the activity consistently showed a slightly superior performance prior to the beginning of the intervention. Our findings support a nature and nurture approach to the relationship between instrumental training and cognitive skills. Nevertheless, evidence from well-conducted studies is still scarce and more studies are necessary to reach firmer conclusions.
    Funding: Ministerio de Educación, Cultura y Deporte; Comunidad Autónoma de Madrid; Ministerio de Economía, Industria y Competitividad; Universidad de Granada / CBU
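
    Effect sizes such as g_Δ = 0.26 in pre-post designs with control groups are commonly computed as the difference in gain scores scaled by the pooled pre-test standard deviation; a minimal sketch under the Morris (2008) formulation, which may differ in detail from the authors' exact estimator (the example numbers are invented):

```python
import numpy as np

def pre_post_control_g(m_pre_t, m_post_t, n_t, sd_pre_t,
                       m_pre_c, m_post_c, n_c, sd_pre_c):
    """Hedges' g for pre-post-control designs (Morris, 2008): difference
    of gain scores divided by the pooled pre-test SD, with the usual
    small-sample bias correction."""
    sd_pooled = np.sqrt(((n_t - 1) * sd_pre_t**2 + (n_c - 1) * sd_pre_c**2)
                        / (n_t + n_c - 2))
    d = ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pooled
    correction = 1 - 3 / (4 * (n_t + n_c - 2) - 1)  # Hedges' correction
    return correction * d

# hypothetical study: trained group gains 3 points, controls gain 1
print(pre_post_control_g(100, 103, 30, 15, 100, 101, 30, 15))  # ~0.13
```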

    Use of Prosody and Information Structure in High Functioning Adults with Autism in Relation to Language Ability

    Abnormal prosody is a striking feature of the speech of those with autism spectrum disorder (ASD), but previous reports suggest large variability among those with ASD. Here we show that part of this heterogeneity can be explained by level of language functioning. We recorded semi-spontaneous but controlled conversations in adults with and without ASD and measured features related to pitch and duration to determine (1) general use of prosodic features, (2) prosodic use in relation to marking information structure, specifically, the emphasis of new information in a sentence (focus) as opposed to information already given in the conversational context (topic), and (3) the relation between prosodic use and level of language functioning. We found that, compared to typical adults, those with ASD with high language functioning generally used a larger pitch range than controls but did not mark information structure, whereas those with moderate language functioning generally used a smaller pitch range than controls but marked information structure appropriately to a large extent. Both impaired general prosodic use and impaired marking of information structure would be expected to seriously impact social communication and thereby lead to increased difficulty in personal domains, such as making and keeping friendships, and in professional domains, such as competing for employment opportunities.
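
    A common way to quantify general prosodic use such as pitch range is the semitone span of the fundamental-frequency contour; a minimal sketch (not this study's pipeline) using librosa's pYIN tracker, with the percentile bounds as an assumed robustness choice:

```python
import numpy as np
import librosa

def pitch_range_semitones(wav_path):
    """Estimate a speaker's pitch range as the semitone span between the
    5th and 95th percentiles of the voiced f0 contour."""
    y, sr = librosa.load(wav_path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                                 fmax=librosa.note_to_hz('C6'), sr=sr)
    f0 = f0[~np.isnan(f0)]               # keep voiced frames only
    lo, hi = np.percentile(f0, [5, 95])  # robust floor and ceiling
    return 12 * np.log2(hi / lo)
```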

    Dynamic modulation of beta band cortico-muscular coupling induced by audio-visual rhythms

    Human movements often spontaneously fall into synchrony with auditory and visual environmental rhythms. Related behavioral studies have shown that motor responses are automatically and unintentionally coupled with external rhythmic stimuli. However, the neurophysiological processes underlying such motor entrainment remain largely unknown. Here we investigated with electroencephalography (EEG) and electromyography (EMG) the modulation of neural and muscular activity induced by periodic audio and/or visual sequences. The sequences were presented at either 1 Hz or 2 Hz while participants maintained constant finger pressure on a force sensor. The results revealed that although there was no change of amplitude in participants’ EMG in response to the sequences, the synchronization between EMG and EEG recorded over motor areas in the beta (12–40 Hz) frequency band was dynamically modulated, with maximal coherence occurring about 100 ms before each stimulus. These modulations in beta EEG–EMG motor coherence were found for the 2 Hz audio-visual sequences, confirming at a neurophysiological level the enhancement of motor entrainment with multimodal rhythms that fall within preferred perceptual and movement frequency ranges. Our findings identify beta band cortico-muscular coupling as a potential underlying mechanism of motor entrainment, further elucidating the nature of the link between sensory and motor systems in humans.
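
    Cortico-muscular coupling of this kind is standardly quantified as magnitude-squared coherence between an EEG channel over motor cortex and the (often rectified) EMG, averaged across the beta band; a minimal static sketch with SciPy, whereas the study tracked this coherence over time (the window length and use of rectification here are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np
from scipy.signal import coherence

def beta_band_coherence(eeg, emg, fs, band=(12.0, 40.0)):
    """Mean magnitude-squared coherence between an EEG motor channel
    and rectified EMG across the beta band."""
    f, cxy = coherence(eeg, np.abs(emg), fs=fs, nperseg=int(fs))  # 1-s windows
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

# hypothetical 60 s of data sampled at 1 kHz
fs = 1000
rng = np.random.default_rng(0)
eeg = rng.standard_normal(60 * fs)
emg = rng.standard_normal(60 * fs)
print(beta_band_coherence(eeg, emg, fs))
```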

    Body sway predicts romantic interest in speed dating

    Social bonding is fundamental to human society, and romantic interest involves an important type of bonding. Speed dating research paradigms offer both high external validity and experimental control for studying romantic interest in real-world settings. While previous studies focused on the effect of social and personality factors on romantic interest, the role of non-verbal interaction has been little studied in initial romantic interest, despite being commonly viewed as a crucial factor. The present study investigated whether romantic interest can be predicted by non-verbal dyadic interactive body sway, and enhanced by movement-promoting (‘groovy’) background music. Participants’ body sway trajectories were recorded during speed dating. Directional (predictive) body sway coupling, but not body sway similarity, predicted interest in a long-term relationship above and beyond rated physical attractiveness. In addition, presence of groovy background music promoted interest in meeting a dating partner again. Overall, we demonstrate that romantic interest is reflected by non-verbal body sway in dyads in a real-world dating setting. This novel approach could potentially be applied to investigate non-verbal aspects of social bonding in other dynamic interpersonal interactions such as between infants and parents and in non-verbal populations including those with communication disorders.
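
    The contrast between body sway similarity and directional (predictive) coupling can be illustrated with lagged cross-correlation: similarity is the zero-lag correlation, while predictive coupling asks whether one partner's sway is best matched by the other's at a later time; a minimal sketch (the lag window is an assumed choice, and the study itself may have used a different directional estimator such as a Granger-style model):

```python
import numpy as np

def lagged_coupling(sway_a, sway_b, fs, max_lag_s=1.0):
    """Return (zero-lag similarity, best lag in seconds).
    A positive best lag means partner A's sway predicts partner B's
    later sway; zero-lag correlation is plain similarity."""
    a = (sway_a - sway_a.mean()) / sway_a.std()
    b = (sway_b - sway_b.mean()) / sway_b.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.mean(a[:len(a) - k] * b[k:]) if k >= 0
                  else np.mean(a[-k:] * b[:len(b) + k]) for k in lags])
    similarity = float(r[lags == 0][0])
    best_lag_s = lags[np.argmax(r)] / fs
    return similarity, best_lag_s
```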

    How Live Music Moves Us: Head Movement Differences in Audiences to Live Versus Recorded Music

    A live music concert is a pleasurable social event that is among the most visceral and memorable forms of musical engagement. But what inspires listeners to attend concerts, sometimes at great expense, when they could listen to recordings at home? An iconic aspect of popular concerts is engaging with other audience members through moving to the music. Head movements, in particular, reflect emotion and have social consequences when experienced with others. Previous studies have explored the affiliative social engagement experienced among people moving together to music. But live concerts have other features that might also be important, such as that during a live performance the music unfolds in a unique and not predetermined way, potentially increasing anticipation and feelings of involvement for the audience. Being in the same space as the musicians might also be exciting. Here we controlled for simply being in an audience to examine whether factors inherent to live performance contribute to the concert experience. We used motion capture to compare head movement responses at a live album release concert featuring Canadian rock star Ian Fletcher Thornley, and at a concert without the performers where the same songs were played from the recorded album. We also examined effects of a prior connection with the performers by comparing fans and neutral-listeners, while controlling for familiarity with the songs, as the album had not yet been released. Head movements were faster during the live concert than the album-playback concert. Self-reported fans moved faster and exhibited greater levels of rhythmic entrainment than neutral-listeners. These results indicate that live music engages listeners to a greater extent than pre-recorded music and that a pre-existing admiration for the performers also leads to higher engagement.
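
    Head-movement speed from motion capture reduces to frame-to-frame marker displacement, and rhythmic entrainment can be indexed by movement power near the music's beat frequency; a minimal sketch of both measures (the vertical-axis choice and the bandwidth around the beat are illustrative assumptions, not the authors' exact analysis):

```python
import numpy as np

def head_speed(xyz, fs):
    """Mean speed (units/s) of a head marker given an (n_frames, 3)
    array of positions sampled at fs frames per second."""
    step = np.linalg.norm(np.diff(xyz, axis=0), axis=1)  # per-frame displacement
    return step.mean() * fs

def entrainment_index(xyz, fs, beat_hz, half_bw=0.1):
    """Fraction of movement power concentrated near the beat frequency,
    computed on the vertical (head-bobbing) axis."""
    z = xyz[:, 2] - xyz[:, 2].mean()
    freqs = np.fft.rfftfreq(len(z), d=1 / fs)
    power = np.abs(np.fft.rfft(z)) ** 2
    near_beat = np.abs(freqs - beat_hz) <= half_bw
    return power[near_beat].sum() / power.sum()
```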