    Music, memory and emotion

    Because emotions enhance memory processes and music evokes strong emotions, music could be involved in forming memories, either about pieces of music or about episodes and information associated with particular music. A recent study in BMC Neuroscience has given new insights into the role of emotion in musical memory.

    Music drives brain plasticity

    Music is attracting growing interest in the cognitive neurosciences. A major finding in this research area is that musical practice is associated with structural and functional plasticity of the brain. In this brief review, I give an overview of the most recent findings in this research area.

    Correction: Music listening while you learn: No influence of background music on verbal learning

    BACKGROUND: Whether listening to background music enhances verbal learning performance is still disputed. In this study we investigated the influence of listening to background music on verbal learning performance and the associated brain activations. METHODS: Musical excerpts were composed for this study to ensure that they were unknown to the subjects and designed to vary in tempo (fast vs. slow) and consonance (in-tune vs. out-of-tune). Noise was used as a control stimulus. 75 subjects were randomly assigned to one of five groups and learned the presented verbal material (non-words with and without semantic connotation) with and without background music. Each group was exposed to one of five different background stimuli (in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, and noise). The number of learned words served as the dependent variable. In addition, event-related desynchronization (ERD) and event-related synchronization (ERS) of the EEG alpha band were calculated as a measure of cortical activation. RESULTS: We did not find any substantial and consistent influence of background music on verbal learning: there was neither an enhancement nor a decrease in verbal learning performance during the background stimulation conditions. However, we found a stronger event-related desynchronization around 800-1200 ms after word presentation for the group exposed to in-tune fast music while they learned the verbal material, and a stronger event-related synchronization for the group exposed to out-of-tune fast music around 1600-2000 ms after word presentation. CONCLUSION: Background music varying in tempo and consonance had neither an enhancing nor a detrimental effect on verbal learning performance. The EEG data suggest that the different acoustic background conditions evoke different cortical activations. The reason for these different cortical activations is unclear; the most plausible explanation is that when background music draws more attention, compensatory mechanisms keep verbal learning performance constant.
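    The alpha-band ERD/ERS index used above can be sketched roughly as follows. This is a minimal illustration assuming a simple periodogram-based alpha-power estimate; the study's actual EEG pipeline is not detailed here, and `fs` (the sampling rate) is a placeholder:

    ```python
    import numpy as np

    def alpha_band_power(signal, fs, lo=8.0, hi=12.0):
        # Periodogram-based power in the EEG alpha band (8-12 Hz).
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        band = (freqs >= lo) & (freqs <= hi)
        return psd[band].sum()

    def erd_percent(reference, event, fs):
        # Classic ERD%: positive values = desynchronization (alpha power
        # drops after word presentation), negative values = synchronization.
        r = alpha_band_power(reference, fs)
        e = alpha_band_power(event, fs)
        return (r - e) / r * 100.0
    ```

    In this convention, a stronger ERD (larger positive percentage) in the post-stimulus window reflects stronger cortical activation relative to the pre-stimulus reference interval.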

    Music and the heart

    Music can powerfully evoke and modulate emotions and moods, along with changes in heart activity, blood pressure (BP), and breathing. Although there is great heterogeneity in methods and quality among previous studies on the effects of music on the heart, the following findings emerge from the literature: Heart rate (HR) and respiratory rate (RR) are higher in response to exciting music compared with tranquilizing music. During musical frissons (involving shivers and piloerection), both HR and RR increase. Moreover, HR and RR tend to increase in response to music compared with silence, and HR appears to decrease in response to unpleasant music compared with pleasant music. We found no studies that would provide evidence for entrainment of HR to musical beats. Corresponding to the increase in HR, listening to exciting music (compared with tranquilizing music) is associated with a reduction of heart rate variability (HRV), including reductions of both low-frequency and high-frequency power of the HRV. Recent findings also suggest effects of music-evoked emotions on regional activity of the heart, as reflected in electrocardiogram amplitude patterns. In patients with heart disease (as in other patient groups), music can reduce pain and anxiety, accompanied by lower HR and lower BP. In general, effects of music on the heart are small, and there is great inhomogeneity among studies with regard to methods, findings, and quality. Therefore, there is an urgent need for systematic high-quality research on the effects of music on the heart and on the beneficial effects of music in clinical settings.
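    The low- and high-frequency HRV power mentioned above is conventionally computed from the beat-to-beat (RR) interval series. A minimal sketch, assuming linear resampling of the RR series to an even 4 Hz grid and a plain periodogram (real HRV pipelines add artifact correction and windowing):

    ```python
    import numpy as np

    def rr_band_power(rr_ms, band, fs=4.0):
        # Spectral power of an RR-interval series in a frequency band (Hz).
        # rr_ms: successive beat-to-beat intervals in milliseconds.
        t = np.cumsum(rr_ms) / 1000.0            # beat times in seconds
        grid = np.arange(t[0], t[-1], 1.0 / fs)  # even 4 Hz resampling grid
        rr = np.interp(grid, t, rr_ms)
        rr = rr - rr.mean()                      # remove the DC component
        freqs = np.fft.rfftfreq(len(rr), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(rr)) ** 2 / (len(rr) * fs)
        mask = (freqs >= band[0]) & (freqs < band[1])
        return psd[mask].sum() * (freqs[1] - freqs[0])

    # Standard HRV bands: LF = 0.04-0.15 Hz, HF = 0.15-0.40 Hz.
    ```

    A reduction of both LF and HF power, as reported for exciting music, would show up as smaller `rr_band_power` values in both bands relative to the tranquilizing condition.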

    Identification of individual subjects based on neuroanatomical measures obtained 7 years earlier

    We analysed a dataset comprising 118 subjects who were scanned three times (at baseline, 1-year follow-up, and 7-year follow-up) using structural magnetic resonance imaging (MRI) over the course of 7 years. We aimed to examine whether it is possible to identify individual subjects based on a restricted number of neuroanatomical features measured 7 years previously. We used FreeSurfer to compute 15 standard brain measures (total intracranial volume [ICV], total cortical thickness [CT], total cortical surface area [CA], cortical grey matter [CoGM], cerebral white matter [CeWM], cerebellar cortex [CBGM], cerebellar white matter [CBWM], subcortical volumes [thalamus, putamen, pallidum, caudate, hippocampus, amygdala, and accumbens], and brain stem volume). We used linear discriminant analysis (LDA), random forest machine learning (RF), and a newly developed rule-based identification approach (RBIA) for the identification process. Using RBIA, different sets of neuroanatomical features (ranging from 2 to 14) obtained at baseline were combined by if-then rules and compared to the same set of neuroanatomical features derived from the 7-year follow-up measurement. We achieved excellent identification results with LDA, while the identification results for RF were very good but not perfect. The RBIA produced the best results, achieving perfect participant identification for some four-feature sets. The identification results improved substantially when using larger feature sets, with 14 neuroanatomical features providing perfect identification. Thus, this study shows once more that the human brain is highly individual in terms of neuroanatomical features. These results are discussed in the context of the current literature on brain plasticity and the scientific attempts to develop brain-fingerprinting techniques.
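    The if-then flavor of such rule-based identification can be illustrated with a hypothetical simplification: accept a match only when every feature at follow-up lies within a relative tolerance of exactly one baseline subject. The tolerance value and rule structure below are illustrative assumptions, not the published RBIA rules:

    ```python
    import numpy as np

    def identify(baseline, followup, tol=0.03):
        # baseline, followup: (n_subjects, n_features) arrays of
        # neuroanatomical measures; rows correspond to unknown identities.
        # Returns the predicted baseline index for each follow-up subject,
        # or -1 when no unique candidate satisfies all the rules.
        preds = []
        for f in followup:
            # If-then rule: every feature must stay within `tol` relative
            # change over the follow-up interval.
            rel = np.abs(baseline - f) / np.abs(baseline)
            candidates = np.where((rel <= tol).all(axis=1))[0]
            preds.append(candidates[0] if len(candidates) == 1 else -1)
        return np.array(preds)
    ```

    With more features, chance collisions between different subjects become rarer, which matches the reported improvement from small feature sets to perfect identification with 14 features.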

    Time Course of Neural Activity Correlated with Colored-Hearing Synesthesia

    Synesthesia is defined as the involuntary and automatic perception of a stimulus in 2 or more sensory modalities (i.e., cross-modal linkage). Colored-hearing synesthetes experience colors when hearing tones or spoken utterances. Based on event-related potentials, we employed electric brain tomography with high temporal resolution in colored-hearing synesthetes and nonsynesthetic controls during auditory verbal stimulation. The auditory-evoked potentials to words and letters were different between synesthetes and controls at the N1 and P2 components, showing longer latencies and lower amplitudes in synesthetes. The intracerebral sources of these components were estimated with low-resolution brain electromagnetic tomography and revealed stronger activation in synesthetes in left posterior inferior temporal regions, within the color area in the fusiform gyrus (V4), and in orbitofrontal brain regions (ventromedial and lateral). The differences occurred as early as 122 ms after stimulus onset. Our findings replicate and extend earlier reports with functional magnetic resonance imaging and positron emission tomography in colored-hearing synesthesia and contribute new information on the time course in synesthesia, demonstrating the fast and possibly automatic processing of this unusual and remarkable phenomenon.

    Coherence and phase locking of intracerebral activation during visuo- and audio-motor learning of continuous tracking movements

    The aim of the present study was to assess changes in EEG coherence and phase locking between fronto-parietal areas, including the frontal and parietal motor areas, during early audio- and visuo-motor learning of continuous tracking movements. Subjects learned to turn a steering-wheel according to a given trajectory in order to minimise the discrepancy between a changing foreground stimulus (controllable by the subjects) and a constant background stimulus (uncontrollable) for both the auditory and the visual modality. In the auditory condition, we uncovered a learning-related increase in inter-hemispheric phase locking between inferior parietal regions, suggesting that coupling between areas involved in audiomotor integration is augmented during early learning stages. Intra-hemispheric phase locking between motor and superior parietal areas increased in the left hemisphere as learning progressed, indicative of integrative processes of spatial information and movement execution. Further tests showed a significant correlation between intra-hemispheric motor-parietal phase locking (bilaterally) and movement performance in the visual condition. These results suggest that the motor-parietal network is operative in both the auditory and the visual condition. This study confirms that a complex fronto-parietal network subserves learning of a new movement that requires sensorimotor transformation and demonstrates the importance of interregional coupling as a neural correlate for successful acquisition and implementation of externally guided behaviour.
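    Phase locking between two regions is commonly quantified with the phase-locking value (PLV). A rough sketch using an FFT-based analytic signal follows; this is a generic across-time PLV for narrow-band signals, not necessarily the exact measure used in the study:

    ```python
    import numpy as np

    def analytic_signal(x):
        # FFT-based analytic signal (the standard Hilbert-transform trick):
        # zero out negative frequencies, double positive ones.
        n = len(x)
        spectrum = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        if n % 2 == 0:
            h[n // 2] = 1.0
            h[1:n // 2] = 2.0
        else:
            h[1:(n + 1) // 2] = 2.0
        return np.fft.ifft(spectrum * h)

    def phase_locking_value(x, y):
        # PLV = |mean unit phasor of the phase difference|: 1 means a
        # perfectly constant phase lag between the two signals, values
        # near 0 mean no consistent phase relation.
        dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
        return np.abs(np.mean(np.exp(1j * dphi)))
    ```

    A learning-related increase in phase locking between two electrode sites would thus appear as a rising PLV across learning blocks.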

    The Human Likeness Dimension of the “Uncanny Valley Hypothesis”: Behavioral and Functional MRI Findings

    The uncanny valley hypothesis (Mori, 1970) predicts differential experience of negative and positive affect as a function of human likeness. Affective experience of humanlike robots and computer-generated characters (avatars) dominates “uncanny” research, but findings are inconsistent. Importantly, it is unknown how objects are actually perceived along the hypothesis’ dimension of human likeness (DOH), defined in terms of human physical similarity. To examine whether the DOH can also be defined in terms of effects of categorical perception (CP), stimuli from morph continua with controlled differences in physical human likeness between avatar and human faces as endpoints were presented. Two behavioral studies found a sharp category boundary along the DOH and enhanced visual discrimination (i.e., CP) of fine-grained differences between pairs of faces at the category boundary. Discrimination was better for face pairs presenting category change in the human-to-avatar than avatar-to-human direction along the DOH. To investigate brain representation of physical change and category change along the DOH, an event-related functional magnetic resonance imaging study used the same stimuli in a pair-repetition priming paradigm. Bilateral mid-fusiform areas and a different right mid-fusiform area were sensitive to physical change within the human and avatar categories, respectively, whereas entirely different regions were sensitive to the human-to-avatar (caudate head, putamen, thalamus, red nucleus) and avatar-to-human (hippocampus, amygdala, mid-insula) direction of category change. These findings show that Mori’s DOH definition does not reflect subjective perception of human likeness and suggest that future “uncanny” studies consider CP and the DOH’s category structure in guiding experience of non-human objects.
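    A sharp category boundary of the kind reported above is typically located from the identification curve along the morph continuum. A minimal sketch with hypothetical data, using a logit-linear fit to find the 50% point (the study's actual psychometric-fitting procedure is not specified here):

    ```python
    import numpy as np

    def category_boundary(morph_level, p_human, eps=1e-4):
        # Fit log(p / (1 - p)) = a + b * level and return the level at
        # which p = 0.5, i.e. the category boundary along the continuum.
        p = np.clip(p_human, eps, 1 - eps)  # avoid log(0) at the extremes
        logit = np.log(p / (1 - p))
        b, a = np.polyfit(morph_level, logit, 1)
        return -a / b
    ```

    Enhanced discrimination of face pairs straddling this fitted boundary, relative to equally spaced within-category pairs, is the classic signature of categorical perception.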

    Neurofunctional and Behavioral Correlates of Phonetic and Temporal Categorization in Musically Trained and Untrained Subjects

    The perception of rapidly changing verbal and nonverbal auditory patterns is a fundamental prerequisite for speech and music processing. Previously, the left planum temporale (PT) has been consistently shown to support the discrimination of fast-changing verbal and nonverbal sounds. Furthermore, it has been repeatedly shown that the functional and structural architecture of this supratemporal brain region differs as a function of musical training. In the present study, we used functional magnetic resonance imaging in a sample of professional musicians and nonmusicians in order to examine the functional contribution of the left PT to the categorization of consonant-vowel syllables and their reduced-spectrum analogues. In line with our hypothesis, the musicians showed enhanced brain responses in the left PT and superior discrimination abilities in the reduced-spectrum condition. Moreover, we found a positive correlation between the responsiveness of the left PT and the performance in the reduced-spectrum condition across all subjects irrespective of musical expertise. These results have implications for our understanding of musical expertise in relation to segmental speech processing.