26 research outputs found

    Data_Sheet_1_Gender Differences in the Recognition of Vocal Emotions.docx

    The conflicting findings from the few studies conducted on gender differences in the recognition of vocal expressions of emotion have left the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, and the number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males at decoding vocal emotions; for specific emotions, however, these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice. The group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than by male actors. The mixed pattern of emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender and the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions. They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody.

    Mean EPN and LPC amplitudes.

    Mean amplitudes and standard errors of the EPN and LPC, separately for each emotion category and size condition.

    Descriptive statistics (Means and Standard Deviations) of stimulus words.


    ERP effects of emotion and size.

    (A) Grand mean ERP waveforms for positive, neutral, and negative words of small and large size, collapsed over posterior EPN (ROI) electrodes. Scalp distributions show differences between emotional (positive and negative) and neutral words in the indicated time intervals. (B) Grand means at centroparietal LPC electrodes and topographies of the late positive complex as the difference between emotional and neutral words in the indicated time range. (C) Effects of stimulus size on grand means over posterior EPN electrodes, and scalp distribution of difference waves between large and small words in the time interval of the early posterior negativity. (D) Grand means for small and large words at centroparietal LPC electrodes, and scalp distributions of difference ERPs between large and small words in the interval of the late positive complex.

    Response-synchronized ERPs from the Simon task.

    Left panel: ERPs at electrode Cz, superimposed for correct and incorrect (corr., incorr.) responses, for pre-meal and post-meal sessions (S1, S2), and for the Experimental and Control groups. Topographies of the Ne as the difference between incorrect and correct responses are depicted to the right of the waveforms. Right panel: Same as the left panel, but for electrode Pz (please note the changes in voltage and time scales). Topographies of error positivities (350–550 ms) are shown to the right of the waveforms.

    Grand mean ERPs and scalp distributions to correct and incorrect adjectives.

    (A) ERP waveforms for correct adjectives and three violation conditions, referred to a 200-ms prestimulus baseline. Time windows for the N400/LAN and P600 effects are shaded. (B and C) Scalp distributions for the main effect of semantics in the LAN/N400 window (400–450 ms) and for the main effect of grammaticality in the P600 time window (550–800 ms), respectively, for both the previous study of Martin-Loeches et al. (2006) and the present study. Please note the differences in amplitude scaling.

    ANOVA results – syntactic condition.

    Note. F-values with p (***p < .001, **p < .01, *p < .05) and ε for the Greenhouse-Geisser correction. Only significant results are reported.

    Overview of experimental conditions.

    Schematic overview of the conditions and the resulting semantic and syntactic relations between the visual noun and adjective of Task 1, and between the acoustic adjective of Task 2 and both the visual noun and adjective.