
    Effects of musical training and event probabilities on encoding of complex tone patterns

    Background: The human auditory cortex automatically encodes acoustic input from the environment and differentiates regular sound patterns from deviant ones in order to identify important, irregular events. The Mismatch Negativity (MMN) response is a neuronal marker for the detection of sounds that are unexpected, based on the encoded regularities. It is also elicited by violations of more complex regularities, and musical expertise has been shown to affect the processing of such complex regularities. Using magnetoencephalography (MEG), we investigated the MMN response to salient or less salient deviants by varying the standard probability (70%, 50% and 35%) of a pattern oddball paradigm. To study the effects of musical expertise on the encoding of the patterns, we compared the responses of a group of non-musicians to those of musicians. Results: We observed significant MMN in all conditions, including the least salient condition (35% standards), in response to violations of the predominant tone pattern for both groups. The amplitude of MMN from the right hemisphere was influenced by the standard probability. This effect was modulated by long-term musical training: standard probability changes influenced MMN amplitude in the group of non-musicians only. Conclusion: This study indicates that pattern violations are detected automatically, even if they are of very low salience, both in non-musicians and musicians, with salience having a stronger impact on processing in the right hemisphere of non-musicians. Long-term musical training influences this encoding, in that non-musicians benefit to a greater extent from a good signal-to-noise ratio (i.e. high probability of the standard pattern), while musicians are less dependent on the salience of an acoustic environment.
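    The manipulation described above (varying how often the standard pattern occurs) can be sketched as a trial-sequence generator. This is a minimal illustration, not the study's actual stimulus-delivery code; the function name and trial count are hypothetical, and real paradigms typically add constraints such as a minimum number of standards between deviants.

    ```python
    import random

    def oddball_sequence(n_trials, p_standard, seed=0):
        """Sample a trial sequence for a pattern oddball paradigm.

        Each trial is 'standard' (the predominant tone pattern) with
        probability p_standard, otherwise 'deviant' (a pattern violation).
        """
        rng = random.Random(seed)
        return ['standard' if rng.random() < p_standard else 'deviant'
                for _ in range(n_trials)]

    # The three standard probabilities used in the study: 70%, 50%, 35%.
    for p in (0.70, 0.50, 0.35):
        seq = oddball_sequence(1000, p)
        print(p, seq.count('standard') / len(seq))
    ```

    Lowering `p_standard` makes deviants less salient because the "regular" pattern itself occurs less reliably, which is exactly the signal-to-noise manipulation the abstract refers to.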

    Musical Expertise Induces Audiovisual Integration of Abstract Congruency Rules

    Perception of everyday life events relies mostly on multisensory integration. Hence, studying the neural correlates of the integration of multiple senses constitutes an important tool in understanding perception within an ecologically valid framework. The present study used magnetoencephalography in human subjects to identify the neural correlates of an audiovisual incongruency response, which is generated not by incongruency of the unisensory physical characteristics of the stimulation but by the violation of an abstract congruency rule. The chosen rule, "the higher the pitch of the tone, the higher the position of the circle", was comparable to musical reading. In parallel, plasticity effects of long-term musical training on this response were investigated by comparing musicians to nonmusicians. The applied paradigm was based on an appropriate modification of the multifeatured oddball paradigm incorporating, within one run, deviants based on a multisensory audiovisual incongruent condition and two unisensory mismatch conditions: an auditory and a visual one. Results indicated the presence of an audiovisual incongruency response, generated mainly in frontal regions, an auditory mismatch negativity, and a visual mismatch response. Moreover, results revealed that long-term musical training generates plastic changes in frontal, temporal, and occipital areas that affect this multisensory incongruency response as well as the unisensory auditory and visual mismatch responses.
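    The abstract congruency rule can be made concrete with a small predicate: a tone/circle pair is congruent when pitch and vertical position change in the same direction relative to a reference pairing. The function name, units, and values below are illustrative assumptions, not the study's stimulus parameters.

    ```python
    def is_congruent(pitch_hz, position, ref_pitch_hz, ref_position):
        """Check the abstract rule 'the higher the pitch of the tone,
        the higher the position of the circle': a pair is congruent
        when pitch and position change in the same direction relative
        to a reference pairing."""
        pitch_up = pitch_hz > ref_pitch_hz
        position_up = position > ref_position
        return pitch_up == position_up

    # A rising pitch paired with a rising circle position follows the rule;
    # a rising pitch paired with a falling position violates it.
    print(is_congruent(880, 300, 440, 200))  # True
    print(is_congruent(880, 100, 440, 200))  # False
    ```

    Deviants in the audiovisual condition are exactly the pairs for which this predicate is false, even though each unisensory stimulus on its own is physically unremarkable.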

    Evidence for Training-Induced Plasticity in Multisensory Brain Structures: An MEG Study

    Multisensory learning and the resulting neural brain plasticity have recently become a topic of renewed interest in human cognitive neuroscience. Music notation reading is an ideal stimulus for studying multisensory learning, as it allows studying the integration of visual, auditory and sensorimotor information processing. The present study aimed at answering whether multisensory learning alters uni-sensory structures, interconnections of uni-sensory structures, or specific multisensory areas. In a short-term piano training procedure, musically naive subjects were trained to play tone sequences from visually presented patterns in a music notation-like system [Auditory-Visual-Somatosensory group (AVS)], while another group received audio-visual training only, which involved viewing the patterns and attentively listening to the recordings of the AVS training sessions [Auditory-Visual group (AV)]. Training-related changes in cortical networks were assessed by pre- and post-training magnetoencephalographic (MEG) recordings of an auditory, a visual and an integrated audio-visual mismatch negativity (MMN). The two groups (AVS and AV) were differently affected by the training. The results suggest that multisensory training alters the function of multisensory structures, and not the uni-sensory ones along with their interconnections, and thus provide an answer to an important question presented by cognitive models of multisensory training.

    Electromagnetic Correlates of Musical Expertise in Processing of Tone Patterns

    Using magnetoencephalography (MEG), we investigated the influence of long-term musical training on the processing of partly imagined tone patterns (imagery condition) compared to the same perceived patterns (perceptual condition). The magnetic counterpart of the mismatch negativity (MMNm) was recorded and compared between musicians and non-musicians in order to assess the effect of musical training on the detection of deviants to tone patterns. The results indicated a clear MMNm in the perceptual condition as well as in a simple pitch oddball (control) condition in both groups. However, there was no significant mismatch response in either group in the imagery condition, despite above-chance behavioral performance in the task of detecting deviant tones. The latency and the laterality of the MMNm in the perceptual condition differed significantly between groups, with an earlier MMNm in musicians, especially in the left hemisphere. In contrast, the MMNm amplitudes did not differ significantly between groups. The behavioral results revealed a clear effect of long-term musical training in both experimental conditions. The obtained results represent new evidence that the processing of tone patterns is faster and more strongly lateralized in musically trained subjects, which is consistent with findings from other paradigms of enhanced auditory neural system functioning due to long-term musical training.

    Audio-tactile integration and the influence of musical training.

    Perception of our environment is a multisensory experience; information from different sensory systems, such as the auditory, visual and tactile, is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks, but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and whether musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex, and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

    Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are found not only within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical with neurophysiological measurements to investigate the processing of non-musical, synchronous or variously asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results provide a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

    Statistical parametric maps and grand averaged global field power of the auditory MMN response.

    Right: Statistical parametric maps of the auditory MMN response and the musicians versus non-musicians comparison as revealed by the flexible factorial model for the time window of 190 to 240 ms. Threshold: AlphaSim corrected at p < 0.001, taking into account peak voxel significance (threshold p < 0.001 uncorrected) and cluster size (threshold > 197 voxels). Left: Grand averaged global field power for the standard (black line) and deviant (gray line) responses. The gray bar indicates the time interval in which the analysis was performed.
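    Global field power, as plotted in figures like this one, is conventionally the spatial standard deviation of the field across all sensors at each time point. A minimal sketch (the function name and toy data are illustrative, not taken from the study):

    ```python
    import numpy as np

    def global_field_power(data):
        """Global field power: the standard deviation across sensors
        at each time point, for data of shape (n_channels, n_times)."""
        return data.std(axis=0)

    # Toy example: 3 'sensors', 4 time points. GFP is large where the
    # sensors disagree (time point 0) and zero where they agree.
    data = np.array([[1.0, 2.0, 0.0, 1.0],
                     [3.0, 2.0, 0.0, 1.0],
                     [5.0, 2.0, 0.0, 1.0]])
    print(global_field_power(data))
    ```

    Comparing the GFP of standard and deviant responses, as in the figure, summarizes the whole-head response strength over time in a single trace per condition.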

    Grand averaged source waveforms of the perceptual condition obtained from the individual dipole moment of MMN for musicians (A) and non-musicians (B).

    For each group the upper panels show the response to standard (black trace) and deviant stimuli (gray trace), and the lower panels show the difference waveforms (black trace) with 95% bootstrapped confidence intervals (gray shaded areas). Time windows in which the 95% confidence interval of the bootstrap around the averaged source waveform did not include zero were considered to indicate significant deflections. In all panels the left hemisphere is presented on the left side and the right hemisphere on the right.
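    The significance criterion described in this caption (a deflection counts as significant wherever the bootstrapped 95% confidence interval of the difference waveform excludes zero) can be sketched with a percentile bootstrap over trials. The function name, trial counts, and toy data below are illustrative assumptions, not the study's actual pipeline.

    ```python
    import numpy as np

    def bootstrap_ci(trials, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI of the mean across trials.

        trials: array of shape (n_trials, n_times), e.g. single-trial
        difference waveforms. Returns (lower, upper), each of shape
        (n_times,).
        """
        rng = np.random.default_rng(seed)
        n = trials.shape[0]
        # Resample trials with replacement, n_boot times.
        idx = rng.integers(0, n, size=(n_boot, n))
        boot_means = trials[idx].mean(axis=1)  # (n_boot, n_times)
        lower = np.quantile(boot_means, alpha / 2, axis=0)
        upper = np.quantile(boot_means, 1 - alpha / 2, axis=0)
        return lower, upper

    # Toy data: 100 'trials' at two time points, the first with no
    # effect (mean 0), the second with a clear effect (mean 1).
    trials = np.random.default_rng(1).normal(loc=[0.0, 1.0], scale=0.2,
                                             size=(100, 2))
    lo, hi = bootstrap_ci(trials)
    # Significant where the 95% CI does not include zero.
    significant = (lo > 0) | (hi < 0)
    print(significant)
    ```

    Applied per time sample along the source waveform, this yields exactly the kind of shaded interval and zero-crossing criterion the caption describes.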

    Statistical parametric maps and grand averaged global field power of the audio-tactile incongruency response.

    A: Right: Statistical parametric maps of the audio-tactile incongruency response and the musicians versus non-musicians comparison as revealed by the flexible factorial model for the time window of 125 to 165 ms. Threshold: AlphaSim corrected at p < 0.001, taking into account peak voxel significance (threshold p < 0.001 uncorrected) and cluster size (threshold > 259 voxels). Left: Grand averaged global field power for the standard (black line) and deviant (gray line) responses. The gray bar indicates the time interval in which the analysis was performed. B: Right: Statistical parametric maps of the audio-tactile incongruency response and the musicians versus non-musicians comparison as revealed by the flexible factorial model for the time window of 190 to 240 ms. Threshold: AlphaSim corrected at p < 0.001, taking into account peak voxel significance (threshold p < 0.001 uncorrected) and cluster size (threshold > 161 voxels). Left: Grand averaged global field power for the standard (black line) and deviant (gray line) responses. The gray bar indicates the time interval in which the analysis was performed.