
    The effects of timbre on neural responses to musical emotion

    Timbre is an important factor in the perception of emotion in music, yet little is known about its effects on neural responses to musical emotion. To address this issue, we used ERPs to investigate whether neural responses to musical emotion differ when the same melodies are presented in different timbres. In a cross-modal affective priming paradigm, target faces were primed by affectively congruent or incongruent melodies without lyrics, presented in violin, flute, and voice timbres. Results showed a larger P3 and a larger left anterior-distributed LPC in response to affectively incongruent versus congruent trials in the voice version. For the flute version, however, only the LPC effect was found, distributed over centro-parietal electrodes. Unlike the voice and flute versions, the violin version elicited an N400 effect. These findings reveal different patterns of neural responses to emotional processing of music when the same melodies are presented in different timbres, and provide evidence for the hypothesis that there are specialized neural responses to the human voice.
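    Findings such as the P3/LPC differences above rest on one common analysis step: averaging EEG epochs per condition and comparing mean amplitude in a time window. The following is a minimal, self-contained sketch of that step on simulated data; the sampling rate, window, and effect sizes are placeholder assumptions, not values from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sfreq = 250                                # assumed sampling rate, Hz
    times = np.arange(-0.2, 0.8, 1 / sfreq)    # epoch: -200 ms to 800 ms

    def simulate_epochs(n_trials, p3_gain):
        """Simulate single-trial ERPs with a P3-like peak near 350 ms."""
        p3 = p3_gain * np.exp(-((times - 0.35) ** 2) / (2 * 0.05 ** 2))
        noise = rng.normal(0.0, 1.0, size=(n_trials, times.size))
        return p3 + noise                      # shape: (n_trials, n_samples)

    congruent = simulate_epochs(40, p3_gain=2.0)
    incongruent = simulate_epochs(40, p3_gain=4.0)  # larger P3, as reported

    # Condition-averaged ERPs, then mean amplitude in a 300-500 ms window
    window = (times >= 0.3) & (times <= 0.5)
    erp_diff = incongruent.mean(axis=0) - congruent.mean(axis=0)
    effect = erp_diff[window].mean()
    print(f"incongruent minus congruent, mean amplitude 300-500 ms: {effect:.2f}")
    ```

    In a real pipeline the epochs would come from preprocessed EEG (e.g., via an analysis toolbox) rather than simulation, and the window effect would be tested statistically across participants.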

    The Good, The Bad, and The Funny: A Neurocognitive Study of Laughter as a Meaningful Socioemotional Cue

    Laughter is a socioemotional cue that is characteristically positive and historically served to facilitate social bonding. Like other communicative gestures (e.g., facial expressions, groans, sighs), however, the interpretation of laughter is no longer bound to a particular affective state. Thus, an important question is how basic psychological mechanisms, such as early sensory arousal, emotion evaluation, and meaning representation, contribute to the interpretation of laughter in different contexts. A related question is how brain dynamic processes reflect these different aspects of laughter comprehension. The present study addressed these questions using event-related potentials (ERPs) to examine laughter comprehension within a cross-modal priming paradigm. Target stimuli were visually presented words, preceded by either laughs or environmental sounds (500 ms versions of the International Affective Digitized Sounds, IADS). The study addressed four questions: (1) Does emotion priming lead to N400 effects? (2) Do positive and negative sounds elicit different neurocognitive responses? (3) Are there laughter-specific ERPs? (4) Can laughter priming of good and bad concepts be reversed under social anxiety? Four experiments were conducted. In all four, participants made speeded judgments about the valence of the target words. Experiments 1-3 examined behavioral effects of emotion priming using variations on this paradigm. In Experiment 4, participants performed the task while their electroencephalographic (EEG) data were recorded. After six experimental blocks, a mood manipulation was administered to activate negative responses to laughter, and the task was then repeated. Accuracy and reaction time showed a small but significant priming effect across studies. Surprisingly, N400 effects of emotion priming were absent. Instead, there was a later (~400-600 ms) effect over orbitofrontal electrodes (orbitofrontal priming effect, OPE). Valence-specific effects were observed in the early posterior negativity (EPN, ~275 ms) and in the late positive potential (LPP, ~600 ms). Laughter-specific effects were observed over orbitofrontal sites beginning approximately 200 ms after target onset. Finally, the OPE was observed for laughs both before and after the mood manipulation; contrary to hypothesis, the direction of priming did not reverse. Interestingly, the OPE was observed for IADS only prior to the mood manipulation, providing some evidence for laughter-specific effects in emotion priming. These findings call into question the N400 as a marker of emotion priming and contribute to the understanding of the neurocognitive stages of laughter perception. More generally, they add to the growing literature on the neurophysiology of emotion and emotion representation.

    Comparing the Processing of Music and Language Meaning Using EEG and fMRI Provides Evidence for Similar and Distinct Neural Representations

    Recent demonstrations that music is capable of conveying semantically meaningful information have raised several questions: what are the underlying mechanisms by which meaning is established in music, and is the meaning of music represented in a fashion comparable to language meaning? This paper presents evidence that expressed affect is a primary pathway to musical meaning and that meaning in music is represented in a fashion very similar to language meaning. In two experiments using EEG and fMRI, single chords varying in harmonic roughness (consonance/dissonance), and thus in perceived affect, primed the processing of subsequently presented affective target words, as indicated by an increased N400 and activation of the right middle temporal gyrus (MTG). Most importantly, however, when primed by affective words, single chords incongruous with the preceding affect also elicited an N400 and activated the right posterior STS, an area implicated in processing the meaning of a variety of signals (e.g., prosody, voices, motion). This provides an important piece of evidence that musical meaning is represented in a fashion similar to, yet distinct from, language meaning: both elicit an N400, but they activate different portions of the right temporal lobe.

    Advances in the neurocognition of music and language


    Impaired emotional processing of chords in congenital amusia: electrophysiological and behavioral evidence

    This study investigated whether individuals with congenital amusia, a neurogenetic disorder of musical pitch perception, are able to process musical emotions in single chords, either automatically or consciously. In Experiments 1 and 2, we used a cross-modal affective priming paradigm to elicit automatic emotional processing, measured with ERPs, in which target facial expressions were preceded by affectively congruent or incongruent chords with a stimulus onset asynchrony (SOA) of 200 msec. Results revealed automatic emotional processing of major/minor triads (Experiment 1) and consonant/dissonant chords (Experiment 2) in controls, who showed longer reaction times and an increased N400 for incongruent than congruent trials, whereas amusics failed to exhibit such a priming effect at either the behavioral or the electrophysiological level. In Experiment 3, we further examined conscious emotional evaluation of the same chords in amusia. Results showed that, compared with controls, amusics were unable to consciously differentiate the emotions conveyed by major and minor chords or by consonant and dissonant chords. These findings suggest an impairment of both automatic and conscious emotional processing of music in amusia. The implications of these findings for musical emotional processing are discussed.

    Hearing Feelings: Affective Categorization of Music and Speech in Alexithymia, an ERP Study

    Background: Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials. Methodology: In two conditions, affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations served as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing alexithymia scores, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets. Conclusions: Our results suggest a reduced sensitivity to the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which verbalization of emotional information is required.

    Cross-modal Transfer of Valence or Arousal from Music to Word Targets in Affective Priming?

    This registered report considers how emotion induced in an auditory modality (music) can influence affective evaluations of visual stimuli (words). Specifically, it seeks to determine which emotional dimension, valence or arousal, is transferred across modalities, or whether the transferred dimension depends on the focus of attention (feature-specific attention allocation). Two experiments were carried out. The first was an affective priming paradigm that allowed for the orthogonal manipulation of valence and arousal in both the words and the music, alongside a manipulation directing participants' attention to either the valence or the arousal dimension. The second, a lexical decision task, allowed cross-modal transfer of valence and arousal to be probed without manipulating the focus of participants' attention. Congruence effects were present in the affective priming task: valence was transferred in both the valence and arousal tasks, whereas arousal was transferred in the arousal task only. Contrary to predictions, the lexical decision task did not exhibit any congruence effects.

    Valenced Priming with Acquired Affective Concepts in Music: Automatic Reactions to Common Tonal Chords

    This study tested whether chords that do not differ in acoustic roughness but that carry distinct affective connotations are strong enough to prime negative and positive associations measurable with an affective priming method. Specifically, we tested whether musically dissonant chords low in valence (diminished, augmented) but containing little acoustic roughness carry negative affective connotations strong enough to elicit an automatic congruence effect in an affective priming setting, comparable to the major-positive/minor-negative distinction found in past studies. Three of the four hypotheses were supported by data from four distinct sub-experiments (approximately N = 100 each), in which the diminished and augmented chords created strong priming effects. Conversely, the minor chord and the suspended fourth failed to generate priming effects. The results demonstrate that automatic responses to consonant/dissonant chords can be driven by acquired, cultural concepts rather than exclusively by acoustic features. The automatic responses obtained are notably in line with previous self-report data on the stimuli's positive vs. negative valence. The results are discussed with respect to previous affective priming studies, cross-cultural research, and music-historical observations.