
    The Role of Perceived Voice and Speech Characteristics in Vocal Emotion Communication.

    Aiming at a more comprehensive assessment of nonverbal vocal emotion communication, this article presents the development and validation of a new rating instrument for the assessment of perceived voice and speech features. In two studies, using two different sets of emotion portrayals by German and French actors, ratings of perceived voice and speech characteristics (loudness, pitch, intonation, sharpness, articulation, roughness, instability, and speech rate) were obtained from non-expert (untrained) listeners. In addition, standard acoustic parameters were extracted from the voice samples. Overall, highly similar patterns of results were found in both studies. Rater agreement (reliability) reached highly satisfactory levels for most features. Multiple discriminant analysis results reveal that both perceived vocal features and acoustic parameters allow a high degree of differentiation of the actor-portrayed emotions. Positive emotions can be classified with a higher hit rate on the basis of perceived vocal features, confirming suggestions in the literature that acoustic indicators of valence are difficult to find. The results show that the proposed scales (Geneva Voice Perception Scales) can be measured reliably and make a substantial contribution to a more comprehensive assessment of the process of emotion inference from vocal expression.
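
    The abstract's classification claim rests on multiple discriminant analysis of the rated vocal features. As an illustrative sketch only, the following Python snippet shows how such an analysis could be run, using linear discriminant analysis as a stand-in; the emotion labels, the sample sizes, and the synthetic ratings are assumptions, not the study's data.

        # Illustrative sketch: discriminating portrayed emotions from rated vocal
        # features. Synthetic placeholder data; labels and effects are assumptions.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        emotions = ["joy", "anger", "fear", "sadness"]   # hypothetical label set
        features = ["loudness", "pitch", "intonation", "sharpness",
                    "articulation", "roughness", "instability", "speech_rate"]

        # Simulate 40 portrayals per emotion, each with its own mean rating profile.
        n_per_class = 40
        means = rng.normal(0.0, 1.0, size=(len(emotions), len(features)))
        X = np.vstack([rng.normal(m, 1.0, size=(n_per_class, len(features))) for m in means])
        y = np.repeat(emotions, n_per_class)

        # Cross-validated hit rate: how well the feature profiles separate the emotions.
        lda = LinearDiscriminantAnalysis()
        print(f"cross-validated hit rate: {cross_val_score(lda, X, y, cv=5).mean():.2f}")

    A real analysis would replace the simulated matrix with the listeners' mean ratings per portrayal (or with the extracted acoustic parameters) and compare the resulting hit rates, as the study does.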

    Understanding the recognition of facial identity and facial expression

    Faces convey a wealth of social signals. A dominant view in face-perception research has been that the recognition of facial identity and facial expression involves separable visual pathways at the functional and neural levels, and data from experimental, neuropsychological, functional-imaging and cell-recording studies are commonly interpreted within this framework. However, the existing evidence supports this model less strongly than is often assumed. Alongside this two-pathway framework, other possible models of facial identity and expression recognition, including one that has emerged from principal component analysis techniques, should be considered.
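
    The principal-component-analysis account mentioned at the end treats identity and expression as dimensions within a single statistical face space rather than as outputs of two separate pathways. A minimal sketch of that idea follows; the image size, component count, and random placeholder images are assumptions for illustration only.

        # Illustrative sketch of a PCA-based "face space": one set of components from
        # which both identity and expression information could in principle be read out.
        # Random placeholder arrays stand in for aligned face photographs.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        n_images, height, width = 200, 32, 32            # assumed image size
        faces = rng.random((n_images, height * width))   # stand-in for flattened face images

        pca = PCA(n_components=20)
        codes = pca.fit_transform(faces)                 # each face as 20 component weights

        print(codes.shape)                               # (200, 20)
        print(pca.explained_variance_ratio_[:5])         # variance captured by first components

    In accounts of this kind, the empirical question is whether identity and expression judgements load on overlapping or distinct components of a shared representation, rather than on anatomically separate routes.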