    Rhythmic and melodic deviations in musical sequences recruit different cortical areas for mismatch detection

    The mismatch negativity (MMN), an event-related potential (ERP) elicited by the violation of an acoustic regularity, is considered both a pre-attentive change detection mechanism at the sensory level and a prediction error signal, suggesting that bottom-up as well as top-down processes are involved in its generation. Rhythmic and melodic deviations within a musical sequence elicit an MMN in musically trained subjects, indicating that acquired musical expertise leads to better discrimination of musical material and better predictions about upcoming musical events. Expectation violations to musical material could therefore recruit neural generators that reflect top-down processes based on musical knowledge. We describe the neural generators of the musical MMN for rhythmic and melodic material after short-term sensorimotor-auditory (SA) training. We compare the localization of musical MMN data from two previous MEG studies by applying beamformer analysis: one study focused on melodic harmonic progression, whereas the other focused on rhythmic progression. The MMN to melodic deviations revealed significant right-hemispheric activation in the superior temporal gyrus (STG), inferior frontal cortex (IFC), and the superior frontal (SFG) and orbitofrontal (OFG) gyri; IFC and SFG activation was also observed in the left hemisphere. In contrast, beamformer analysis of the data from the rhythm study revealed bilateral activation in the vicinity of the auditory cortices and in the inferior parietal lobule (IPL), an area recently implicated in temporal processing. We conclude that different cortical networks are activated in the analysis of the temporal and the melodic content of musical material, and we discuss these networks in the context of the dual-pathway model of auditory processing.

    Speech dysprosody but no music ‘dysprosody’ in Parkinson’s disease

    Parkinson’s disease is characterized not only by bradykinesia, rigidity, and tremor, but also by impairments of expressive and receptive linguistic prosody. The facilitating effect of music with a salient beat on patients’ gait suggests that it might have a similar effect on vocal behavior; however, it is currently unknown whether singing is affected by the disease. In the present study, fifteen Parkinson patients were compared with fifteen healthy controls during the singing of familiar melodies and improvised melodic continuations. While patients’ speech could reliably be distinguished from that of healthy controls matched for age and gender purely on the basis of aural perception, no significant differences in singing were observed in pitch, pitch range, pitch variability, or tempo, nor in scale tone distribution, interval size, or interval variability. The apparent dissociation of speech and singing in Parkinson’s disease suggests that music could be used to facilitate expressive linguistic prosody.

    Using Spatial Manipulation to Examine Interactions between Visual and Auditory Encoding of Pitch and Time

    Music notations use both symbolic and spatial representation systems. Novice musicians lack the training to associate symbolic information with musical identities such as chords or rhythmic and melodic patterns; they therefore provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies for feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation; a sketch of such transformations follows below. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional Western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more with graphic notation. This points to greater compatibility of auditory and visual neural codes in novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians.
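
    The transformations mentioned above are simple operations on a melody's pitch and time dimensions. As a minimal illustration (the note representation and the pitch-inversion transform are our assumptions; the study itself only names time reversal as an example), a melody can be modelled as a list of pitch/duration pairs on which such transformations act:

```python
# A melody as a list of (MIDI pitch, duration in beats) pairs.
# This representation and the pitch-inversion transform are
# illustrative assumptions, not details taken from the study.
melody = [(60, 1.0), (62, 0.5), (64, 0.5), (67, 2.0)]  # C D E G

def time_reverse(notes):
    """Play the notes in reverse order (retrograde)."""
    return list(reversed(notes))

def pitch_invert(notes, axis=60):
    """Mirror each pitch around a reference pitch."""
    return [(2 * axis - p, d) for p, d in notes]

print(time_reverse(melody))  # [(67, 2.0), (64, 0.5), (62, 0.5), (60, 1.0)]
print(pitch_invert(melody))  # [(60, 1.0), (58, 0.5), (56, 0.5), (53, 2.0)]
```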

    Cantus: Construction and evaluation of a software solution for real-time vocal music training and musical intonation assessment

    The development of the ability to sing or play in tune is one of the most critical tasks in music training. In music education, melodic patterns are usually learned by imitative processes (modelling). Once modelled, pitch sounds are then associated with a representation according to a syllabic system, such as the Guidonian system or an arbitrary single syllable, or to Western graphic notation symbols. From a didactic standpoint, few advances have been made in this area beyond the use of audio-supported guides and existing software, which use a microphone to analyse the input and estimate the pitch, or fundamental frequency, of the given tone. However, these programmes lack the analytical algorithms needed to provide the student with precise feedback on their execution, and they do not provide adequately noise-robust solutions to minimize the student assessment error rate. The ongoing research discussed in this article focuses on Cantus, a new software solution expressly designed as an assessment and diagnosis tool for online training and assessment of vocal musical intonation at the initial stages of music education. Cantus embodies the latest research on real-time analysis of audio streams, which permits the teacher to customize music training by recording patterns and embedding them into the programme. The study presented in this article includes the design, implementation, and assessment of Cantus by music teachers. The pilot study for the software assessment includes a sample of 21 music teachers working at thirteen music schools in Valencia, Spain. These teachers worked with the software at their own pace for a week in order to evaluate it. Subsequently, they filled in a two-part questionnaire with (1) questions related to demographics, professional experience, and the use of ICT; and (2) questions related to the software's technical and didactic aspects. The questionnaire also included three open questions about Cantus, namely advantages, issues, and suggestions. The results show an excellent reception by teachers, who consider the software a highly adequate music training tool for the initial stages of music education.
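
    The abstract does not specify which pitch-detection algorithm Cantus uses, but the general approach it describes (estimating the fundamental frequency of a microphone frame and comparing it against a target) can be sketched with a minimal autocorrelation-based estimator. Everything below, including the silence gate and the cent-deviation feedback, is an illustrative assumption rather than Cantus's actual implementation:

```python
import numpy as np

def estimate_f0(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of a mono audio
    frame via autocorrelation; returns None for silent frames."""
    frame = frame - np.mean(frame)        # remove DC offset
    if np.max(np.abs(frame)) < 1e-4:      # crude silence gate
        return None
    corr = np.correlate(frame, frame, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags
    lag_min = int(sample_rate / fmax)     # shortest period searched
    lag_max = int(sample_rate / fmin)     # longest period searched
    if lag_max >= len(corr):
        return None
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

def cents_off(f0, target_hz):
    """Deviation from the target pitch in cents (100 cents = 1 semitone)."""
    return 1200.0 * np.log2(f0 / target_hz)

# Example: a sung A4 (440 Hz), simulated here as a sine at 44.1 kHz.
sr = 44100
t = np.arange(2048) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 440.0 * t), sr)
print(f0, cents_off(f0, 440.0))  # ~441 Hz, a few cents sharp
```

    An intonation-assessment tool along these lines would run such an estimator frame by frame over the incoming audio stream and report the cent deviation against a teacher-recorded reference pattern.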

    Preserved singing in aphasia: A case study of the efficacy of melodic intonation therapy

    This study examined the efficacy of Melodic Intonation Therapy (MIT) in a male singer (KL) with severe Broca’s aphasia. Thirty novel phrases were allocated to one of three experimental conditions: unrehearsed, rehearsed verbal production (repetition), and rehearsed verbal production with melody (MIT). The results showed superior production of MIT phrases during therapy. Comparison of performance at baseline, 1 week, and 5 weeks after therapy revealed an initial beneficial effect of both types of rehearsal; however, the effect of MIT was more durable, facilitating longer-term phrase production. Our findings suggest that MIT facilitated KL’s speech praxis, and that combining melody and speech through rehearsal promoted separate storage of, and/or access to, the phrase representation.

    Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation

    To investigate the neural substrates that underlie spontaneous musical performance, we examined improvisation in professional jazz pianists using functional MRI. By employing two paradigms that differed widely in musical complexity, we found that improvisation (compared to production of over-learned musical sequences) was consistently characterized by a dissociated pattern of activity in the prefrontal cortex: extensive deactivation of dorsolateral prefrontal and lateral orbital regions with focal activation of the medial prefrontal (frontal polar) cortex. Such a pattern may reflect a combination of psychological processes required for spontaneous improvisation, in which internally motivated, stimulus-independent behaviors unfold in the absence of the central processes that typically mediate self-monitoring and conscious volitional control of ongoing performance. Changes in prefrontal activity during improvisation were accompanied by widespread activation of neocortical sensorimotor areas (which mediate the organization and execution of musical performance) as well as deactivation of limbic structures (which regulate motivation and emotional tone). This distributed neural pattern may provide a cognitive context that enables the emergence of spontaneous creative activity.

    Musical stem completion: Humming that note

    This study examined how people store and retrieve tonal music explicitly and implicitly using a production task. Participants completed an implicit task (tune stem completion) followed by an explicit task (cued recall); the two tasks were identical except for the instructions given at test time. Participants listened to tunes and were then presented with tune stems from previously heard tunes and from novel tunes. For the implicit task, they were asked to sing a note they thought would come next musically; for the explicit task, they were asked to sing the note they remembered as coming next. Experiment 1 found that people correctly completed significantly more old stems than new stems. Experiment 2 investigated the characteristics of music that drive retrieval by varying a surface feature of the tunes (same or different timbre) from study to test and the encoding task (semantic or nonsemantic). Although we did not find that implicit and explicit memory for music dissociated significantly across levels of processing, we did find that surface features of music affect semantic judgments and subsequent explicit retrieval.