
    Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation

    To investigate the neural substrates that underlie spontaneous musical performance, we examined improvisation in professional jazz pianists using functional MRI. By employing two paradigms that differed widely in musical complexity, we found that improvisation (compared to production of over-learned musical sequences) was consistently characterized by a dissociated pattern of activity in the prefrontal cortex: extensive deactivation of dorsolateral prefrontal and lateral orbital regions with focal activation of the medial prefrontal (frontal polar) cortex. Such a pattern may reflect a combination of psychological processes required for spontaneous improvisation, in which internally motivated, stimulus-independent behaviors unfold in the absence of central processes that typically mediate self-monitoring and conscious volitional control of ongoing performance. Changes in prefrontal activity during improvisation were accompanied by widespread activation of neocortical sensorimotor areas (that mediate the organization and execution of musical performance) as well as deactivation of limbic structures (that regulate motivation and emotional tone). This distributed neural pattern may provide a cognitive context that enables the emergence of spontaneous creative activity.

    Discrimination of Timbre in Early Auditory Responses of the Human Brain

    The issue of how differences in timbre are represented in the neural response still has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli differing in timbre, reflecting its multidimensional nature, and investigated the auditory response and sensory gating using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones, either identical or different in timbre, were presented in a conditioning (S1) – testing (S2) paradigm with an inter-stimulus interval of 500 ms. The magnitudes of the auditory M50 and M100 responses varied with timbre in both hemispheres, which suggests that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. An effect of S1 on the response to S2 was found in the M100 of the left hemisphere, whereas in the right hemisphere both the M50 and M100 responses to S2 reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes also differed with presentation order (S1 vs. S2) for both the same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, they reveal that auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.
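    As an illustration of the conditioning–testing measure described above, the sketch below computes a simple sensory-gating ratio (S2 response divided by S1 response) from peak amplitudes in M50- and M100-like latency windows. It is a minimal example on synthetic data; the sampling rate, latency windows, and waveforms are assumptions made for illustration, not values from the study.

    # Minimal sketch (not the authors' pipeline): quantifying sensory gating in an
    # S1-S2 paired-stimulus paradigm as the ratio of the testing (S2) response to
    # the conditioning (S1) response. Epochs, channel picks, and peak windows
    # below are synthetic placeholders.
    import numpy as np

    fs = 1000                       # sampling rate in Hz (assumed)
    t = np.arange(-0.1, 0.4, 1/fs)  # epoch time axis relative to stimulus onset

    def peak_amplitude(evoked, t, window):
        """Largest absolute amplitude inside a latency window (in seconds)."""
        mask = (t >= window[0]) & (t <= window[1])
        seg = evoked[mask]
        return seg[np.argmax(np.abs(seg))]

    # Synthetic averaged evoked responses (fT) to S1 and S2 for one hemisphere.
    rng = np.random.default_rng(0)
    evoked_s1 = np.exp(-((t - 0.10) / 0.02) ** 2) * 60 + rng.normal(0, 2, t.size)
    evoked_s2 = np.exp(-((t - 0.10) / 0.02) ** 2) * 35 + rng.normal(0, 2, t.size)

    # M50 and M100 latency windows (typical ranges; an assumption, not from the paper).
    for name, window in {"M50": (0.035, 0.075), "M100": (0.080, 0.130)}.items():
        a1 = peak_amplitude(evoked_s1, t, window)
        a2 = peak_amplitude(evoked_s2, t, window)
        print(f"{name}: S1={a1:.1f} fT, S2={a2:.1f} fT, gating ratio S2/S1={a2/a1:.2f}")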

    Electromagnetic Correlates of Musical Expertise in Processing of Tone Patterns

    Using magnetoencephalography (MEG), we investigated the influence of long-term musical training on the processing of partly imagined tone patterns (imagery condition) compared to the same perceived patterns (perceptual condition). The magnetic counterpart of the mismatch negativity (MMNm) was recorded and compared between musicians and non-musicians in order to assess the effect of musical training on the detection of deviants in tone patterns. The results indicated a clear MMNm in the perceptual condition as well as in a simple pitch oddball (control) condition in both groups. However, there was no significant mismatch response in either group in the imagery condition, despite above-chance behavioral performance in the task of detecting deviant tones. The latency and laterality of the MMNm in the perceptual condition differed significantly between groups, with an earlier MMNm in musicians, especially in the left hemisphere. In contrast, the MMNm amplitudes did not differ significantly between groups. The behavioral results revealed a clear effect of long-term musical training in both experimental conditions. These results represent new evidence that the processing of tone patterns is faster and more strongly lateralized in musically trained subjects, consistent with findings from other paradigms of enhanced auditory neural functioning following long-term musical training.
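    For readers unfamiliar with how a mismatch response is derived, the sketch below shows the conventional difference-wave computation (average deviant response minus average standard response) and a peak-latency readout that could, in principle, be compared between groups. The data, sampling rate, and latency window are synthetic assumptions, not the authors' analysis.

    # Minimal sketch (assumed, not the authors' pipeline): the mismatch response is
    # typically computed as a difference wave, i.e. the averaged response to deviant
    # tones minus the averaged response to standard tones; its peak latency can then
    # be compared between groups (e.g. musicians vs. non-musicians).
    import numpy as np

    fs = 600                        # MEG sampling rate in Hz (assumption)
    t = np.arange(0, 0.4, 1 / fs)   # time after tone onset, in seconds

    def difference_wave(deviant_epochs, standard_epochs):
        """Average over trials, then subtract the standard from the deviant response."""
        return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

    def peak_latency(wave, t, window=(0.10, 0.25)):
        """Latency of the largest absolute deflection in the MMN latency window."""
        mask = (t >= window[0]) & (t <= window[1])
        return t[mask][np.argmax(np.abs(wave[mask]))]

    # Synthetic epochs (trials x samples) standing in for one sensor's data.
    rng = np.random.default_rng(1)
    standard = rng.normal(0, 1, (200, t.size))
    deviant = rng.normal(0, 1, (60, t.size)) + 3 * np.exp(-((t - 0.16) / 0.03) ** 2)

    mmn = difference_wave(deviant, standard)
    print(f"MMN peak latency: {peak_latency(mmn, t) * 1000:.0f} ms")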

    Synchronization to a bouncing ball with a realistic motion trajectory

    Daily music experience involves synchronizing movements in time with a perceived periodic beat. It has been established for over a century that beat synchronization is less stable for the visual than for the auditory modality. This auditory advantage of beat synchronization gives rise to the hypothesis that the neural and evolutionary mechanisms underlying beat synchronization are modality-specific. Here, however, we found that synchronization to a periodically bouncing ball with a realistic motion trajectory was not less stable than synchronization to an auditory metronome. This finding challenges the auditory advantage of beat synchronization, and has important implications for the understanding of the biological substrates of beat synchronization.
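    Synchronization stability of the kind compared here is commonly quantified with circular statistics on the phase of each tap relative to the beat; the resultant vector length ranges from 0 (random tapping) to 1 (perfectly stable tapping). The sketch below illustrates this measure on synthetic tap times; the beat period, jitter values, and number of beats are placeholders, not data from the study.

    # Minimal sketch (illustrative, not the study's method): beat-synchronization
    # stability as the circular resultant vector length of tap phases relative to
    # the periodic beat (1 = perfectly stable, 0 = random).
    import numpy as np

    def synchronization_stability(tap_times, period):
        """Resultant vector length of tap phases relative to a beat of the given period."""
        phases = 2 * np.pi * ((tap_times % period) / period)
        return np.abs(np.mean(np.exp(1j * phases)))

    # Synthetic tap times for a 600 ms beat: auditory metronome vs. bouncing ball.
    rng = np.random.default_rng(2)
    period = 0.6                                    # seconds per beat (assumption)
    beats = np.arange(60) * period
    taps_auditory = beats + rng.normal(0.0, 0.020, beats.size)   # ~20 ms jitter
    taps_ball = beats + rng.normal(0.0, 0.025, beats.size)       # ~25 ms jitter

    print("auditory metronome:", round(synchronization_stability(taps_auditory, period), 3))
    print("bouncing ball:", round(synchronization_stability(taps_ball, period), 3))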

    Vocal Accuracy and Neural Plasticity Following Micromelody-Discrimination Training

    Recent behavioral studies report correlational evidence to suggest that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. In order to elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy. We utilized micromelodies (i.e., melodies with seven different interval sizes, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine if any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group of non-musicians also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing. Given the observations from our auditory training regimen, we therefore conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production.
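    Interval sizes smaller than a semitone and vocal pitch accuracy are both conventionally expressed in cents (100 cents = 1 semitone). The sketch below shows that conversion on a couple of hypothetical frequencies; it is purely illustrative and does not reproduce the study's stimuli or scoring.

    # Minimal sketch (illustrative): sub-semitone intervals and vocal pitch error
    # expressed in cents, where 100 cents = 1 semitone. The frequencies below are
    # placeholders, not stimuli from the study.
    import math

    def cents(f_produced, f_target):
        """Signed pitch difference in cents between two frequencies (Hz)."""
        return 1200 * math.log2(f_produced / f_target)

    target = 220.0                  # hypothetical target note (A3)
    sung = 223.0                    # a slightly sharp production
    print(f"vocal error: {cents(sung, target):.1f} cents")                          # ~23.5 cents sharp
    print(f"quarter-tone interval: {cents(target * 2 ** (0.5 / 12), target):.0f} cents")  # 50 cents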

    The effect of long-term unilateral deafness on the activation pattern in the auditory cortices of French-native speakers: influence of deafness side

    Background: In normal-hearing subjects, monaural stimulation produces a normal pattern of asynchrony and asymmetry over the auditory cortices in favour of the contralateral temporal lobe. While late onset unilateral deafness has been reported to change this pattern, the exact influence of the side of deafness on central auditory plasticity still remains unclear. The present study aimed at assessing whether left-sided and right-sided deafness had differential effects on the characteristics of neurophysiological responses over auditory areas. Eighteen unilaterally deaf and 16 normal-hearing right-handed subjects participated. All unilaterally deaf subjects had post-lingual deafness. Long latency auditory evoked potentials (late-AEPs) were elicited by two types of stimuli, non-speech (1 kHz tone-burst) and speech sounds (the voiceless syllable /pa/), delivered to the intact ear at 50 dB SL. The latencies and amplitudes of the early exogenous components (N100 and P150) were measured using temporal scalp electrodes. Results: Subjects with left-sided deafness showed major neurophysiological changes, in the form of a more symmetrical activation pattern over auditory areas in response to the non-speech sound and even a significant reversal of the activation pattern in favour of the cortex ipsilateral to the stimulation in response to the speech sound. This was observed not only for AEP amplitudes but also for AEP time course. In contrast, no significant changes were reported for late-AEP responses in subjects with right-sided deafness. Conclusion: The results show that cortical reorganization induced by unilateral deafness mainly occurs in subjects with left-sided deafness. This suggests that anatomical and functional plastic changes are more likely to occur in the right than in the left auditory cortex. The possible perceptual correlates of such neurophysiological changes are discussed.
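    The contralateral-versus-ipsilateral balance discussed above is often summarized as an asymmetry index computed from response amplitudes over the two hemispheres. The sketch below shows one common form of that index applied to hypothetical N100 amplitudes; the numbers are illustrative assumptions, not results from this study.

    # Minimal sketch (assumption, not the authors' analysis): summarizing the
    # contralateral dominance of monaural auditory evoked potentials with a simple
    # asymmetry index over temporal electrodes. Amplitudes below are hypothetical.
    def asymmetry_index(contra_uV, ipsi_uV):
        """Positive values indicate the usual contralateral dominance."""
        return (contra_uV - ipsi_uV) / (contra_uV + ipsi_uV)

    # Hypothetical N100 amplitudes (microvolts) to stimulation of the intact ear.
    print("normal hearing:", round(asymmetry_index(6.0, 4.0), 2))    # contralateral > ipsilateral
    print("left-sided deaf:", round(asymmetry_index(4.5, 5.0), 2))   # pattern reduced or reversed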

    Effects of gestational age at birth on cognitive performance: a function of cognitive workload demands

    Objective: Cognitive deficits have been inconsistently described for late or moderately preterm children but are consistently found in very preterm children. This study investigates the association between cognitive workload demands of tasks and cognitive performance in relation to gestational age at birth. Methods: Data were collected as part of a prospective geographically defined whole-population study of neonatal at-risk children in Southern Bavaria. At 8;5 years, n = 1326 children (gestation range: 23–41 weeks) were assessed with the K-ABC and a Mathematics Test. Results: Cognitive scores of preterm children decreased as cognitive workload demands of tasks increased. The relationship between gestation and task workload was curvilinear and more pronounced the higher the cognitive workload: GA² (quadratic term) on low cognitive workload: R² = .02, p < 0.001; moderate cognitive workload: R² = .09, p < 0.001; and high cognitive workload tasks: R² = .14, p < 0.001. Specifically, disproportionately lower scores were found for very (<32 weeks gestation) and moderately (32–33 weeks gestation) preterm children the higher the cognitive workload of the tasks. Early biological factors such as gestation and neonatal complications explained more of the variance in high (12.5%) compared with moderate (8.1%) and low cognitive workload tasks (1.7%). Conclusions: The cognitive workload model may help to explain variations of findings on the relationship of gestational age with cognitive performance in the literature. The findings have implications for routine cognitive follow-up, educational intervention, and basic research into neuro-plasticity and brain reorganization after preterm birth.
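    The quadratic gestational-age effect reported above (GA² explaining more variance as workload rises) corresponds to fitting a second-order polynomial of gestational age to task scores and reporting R². The sketch below shows that computation on synthetic data; the simulated scores and sample size are placeholders chosen only to make the example run, not the study's dataset.

    # Minimal sketch (synthetic data, not the study's dataset): fitting a quadratic
    # term of gestational age (GA) to task scores and reporting the variance
    # explained (R^2), analogous to "GA^2 on high cognitive workload: R^2 = .14".
    import numpy as np

    rng = np.random.default_rng(3)
    ga = rng.uniform(23, 41, 1300)                       # gestational age in weeks
    # Synthetic scores: a curvilinear gain with GA that flattens toward term birth.
    score = 100 - 0.15 * (41 - ga) ** 2 + rng.normal(0, 8, ga.size)

    coeffs = np.polyfit(ga, score, deg=2)                # quadratic model
    predicted = np.polyval(coeffs, ga)
    ss_res = np.sum((score - predicted) ** 2)
    ss_tot = np.sum((score - score.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot
    print(f"quadratic GA model: R^2 = {r_squared:.2f}")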

    Hemispheric asymmetry of endogenous neural oscillations in young children: implications for hearing speech in noise

    Speech signals contain information in hierarchical time scales, ranging from short-duration (e.g., phonemes) to long-duration cues (e.g., syllables, prosody). A theoretical framework to understand how the brain processes this hierarchy suggests that hemispheric lateralization enables specialized tracking of acoustic cues at different time scales, with the left and right hemispheres sampling at short (25 ms; 40 Hz) and long (200 ms; 5 Hz) periods, respectively. In adults, both speech-evoked and endogenous cortical rhythms are asymmetrical: low-frequency rhythms predominate in right auditory cortex, and high-frequency rhythms in left auditory cortex. It is unknown, however, whether endogenous resting state oscillations are similarly lateralized in children. We investigated cortical oscillations in children (3–5 years; N = 65) at rest and tested our hypotheses that this temporal asymmetry is evident early in life and facilitates recognition of speech in noise. We found a systematic pattern of increasing leftward asymmetry for higher frequency oscillations; this pattern was more pronounced in children who better perceived words in noise. The observed connection between left-biased cortical oscillations in phoneme-relevant frequencies and speech-in-noise perception suggests hemispheric specialization of endogenous oscillatory activity may support speech processing in challenging listening environments, and that this infrastructure is present during early childhood.
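    The asymmetry described above amounts to comparing band power between homologous left and right auditory channels in a low-frequency (around 5 Hz) and a high-frequency (around 40 Hz) band. The sketch below computes a simple laterality index from FFT band power on synthetic signals; the sampling rate, band edges, and waveforms are illustrative assumptions rather than the study's analysis.

    # Minimal sketch (illustrative, not the study's pipeline): a laterality index of
    # resting-state band power, comparing a low-frequency (~5 Hz) and a
    # high-frequency (~40 Hz) band between left and right channels. The signals
    # below are synthetic placeholders.
    import numpy as np

    fs = 500                                   # sampling rate in Hz (assumption)
    t = np.arange(0, 60, 1 / fs)               # 60 s of simulated resting-state data

    def band_power(signal, fs, band):
        """Mean FFT power within a frequency band (Hz)."""
        freqs = np.fft.rfftfreq(signal.size, 1 / fs)
        power = np.abs(np.fft.rfft(signal)) ** 2
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return power[mask].mean()

    def laterality_index(left_power, right_power):
        """Positive values mean more power on the left than the right."""
        return (left_power - right_power) / (left_power + right_power)

    # Synthetic channels: right carries more 5 Hz power, left more 40 Hz power.
    rng = np.random.default_rng(4)
    left = 1.0 * np.sin(2 * np.pi * 5 * t) + 1.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)
    right = 1.5 * np.sin(2 * np.pi * 5 * t) + 1.0 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)

    for name, band in {"low band (4-6 Hz)": (4, 6), "high band (38-42 Hz)": (38, 42)}.items():
        li = laterality_index(band_power(left, fs, band), band_power(right, fs, band))
        print(f"{name}: laterality index = {li:+.2f}")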