Cross-classification of musical and vocal emotions in the auditory cortex

Abstract

Whether emotions carried by voice and music are processed by the brain using similar mechanisms has long been investigated, yet neuroimaging studies do not provide a clear picture, mainly due to a lack of control over stimuli. Here, we report a functional magnetic resonance imaging (fMRI) study using comparable stimulus material in the voice and music domains, the Montreal Affective Voices and the Musical Emotional Bursts, which comprise short nonverbal bursts of happiness, fear, sadness, and neutral expressions. We use a multivariate emotion-classification fMRI analysis involving cross-timbre classification to compare the neural mechanisms involved in processing emotional information in the two domains. We find, for affective stimuli in the violin, clarinet, and voice timbres, that local fMRI patterns in the bilateral auditory cortex and upper premotor regions support above-chance emotion classification when the classifier is trained and tested within the same timbre category. More importantly, classifier performance generalized well across timbres in cross-classification schemes, albeit with a slight accuracy drop when crossing the voice–music boundary. These results provide evidence for a shared neural code for processing musical and vocal emotions, possibly with a cost for the voice due to its evolutionary significance.
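The cross-timbre classification logic described above can be illustrated with a short scikit-learn sketch: a classifier is trained on emotion-labeled fMRI patterns from one timbre and tested on patterns from another, so that above-chance cross-timbre accuracy indicates an emotion code shared across timbres. All data, shapes, and pipeline choices below are hypothetical placeholders for illustration, not the authors' actual analysis.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500              # hypothetical: trials per timbre, ROI voxels
emotions = rng.integers(0, 4, n_trials)   # 0=happy, 1=fear, 2=sad, 3=neutral

# Placeholder fMRI patterns for two timbres (in practice: per-trial beta maps
# from a GLM, masked to an auditory-cortex region of interest).
X_voice = rng.standard_normal((n_trials, n_voxels)) + emotions[:, None] * 0.1
X_violin = rng.standard_normal((n_trials, n_voxels)) + emotions[:, None] * 0.1

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))

# Within-timbre baseline: train and test on the same timbre (a real analysis
# would cross-validate across scanner runs rather than reuse the training set).
clf.fit(X_voice, emotions)
within_acc = accuracy_score(emotions, clf.predict(X_voice))

# Cross-timbre generalization: train on voice patterns, test on violin patterns.
# Above-chance accuracy here is the signature of a timbre-invariant emotion code.
cross_acc = accuracy_score(emotions, clf.predict(X_violin))

print(f"within-timbre: {within_acc:.2f}  cross-timbre: {cross_acc:.2f}  chance: 0.25")

With four emotion categories, chance performance is 25%, so the comparison of interest is whether cross-timbre accuracy remains reliably above that level, and how much it drops relative to the within-timbre baseline when crossing the voice–music boundary.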
