33 research outputs found

    Attractiveness and distinctiveness between speakers' voices in naturalistic speech and their faces are uncorrelated

    honest signal hypothesis, attractiveness, averageness, face, distinctiveness, voice

    Discrimination in the workplace, reported by people with major depressive disorder: a cross-sectional study in 35 countries.

    OBJECTIVE: Whereas employment has been shown to be beneficial for people with Major Depressive Disorder (MDD) across different cultures, employers' attitudes towards workers with MDD have been shown to be negative. This may form an important barrier to work participation. To date, little is known about how stigma and discrimination affect the work participation of workers with MDD, especially from their own perspective. We aimed to assess, in a working-age population including respondents with MDD from 35 countries: (1) whether people with MDD anticipate and experience discrimination when trying to find or keep paid employment; (2) whether participants in highly, moderately and less developed countries differ in these respects; and (3) whether discrimination experiences are related to actual employment status (i.e., having a paid job or not). METHOD: Participants in this cross-sectional study (N=834) had a diagnosis of MDD in the previous 12 months. They were interviewed using the Discrimination and Stigma Scale (DISC-12). Analysis of variance and generalised linear mixed models were used to analyse the data. RESULTS: Overall, 62.5% of participants had anticipated and/or experienced discrimination in the work setting. In very highly developed countries, almost 60% of respondents had stopped themselves from applying for work, education or training because of anticipated discrimination. Having experienced workplace discrimination was independently related to unemployment. CONCLUSIONS: Across different countries and cultures, people with MDD very frequently reported discrimination in the work setting. Effective interventions are needed to enhance work participation among people with MDD, focusing simultaneously on decreasing stigma in the work environment and on decreasing self-discrimination by empowering workers with MDD.

    Attractiveness and distinctiveness in voices and faces of young adults

    Facial attractiveness has been linked to the averageness (or typicality) of a face. More tentatively, it has also been linked to a speaker's vocal attractiveness, via the "honest signal" hypothesis, which holds that attractiveness signals good genes. In four experiments, we assessed ratings for attractiveness and two common measures of distinctiveness ("distinctiveness-in-the-crowd", DITC, and "deviation-based distinctiveness", DEV) for faces and voices (vowels or sentences) from 64 young adult speakers (32 female). Consistent and strong negative correlations between attractiveness and DEV generally supported the averageness account of attractiveness for both voices and faces. By contrast, indicating that the two measures of distinctiveness reflect different constructs, correlations between attractiveness and DITC were numerically positive for faces (though small and non-significant), and significant for voices in sentence stimuli. As the only exception, voice ratings based on vowels exhibited a moderate but significant negative correlation between attractiveness and DITC. Between faces and voices, distinctiveness ratings were uncorrelated. Remarkably, and at variance with the honest signal hypothesis, vocal and facial attractiveness were uncorrelated, with the exception of a moderate positive correlation for vowels. Overall, while our findings strongly support an averageness account of attractiveness for both domains, they provide little evidence for an honest signal account of facial and vocal attractiveness in complex naturalistic speech. Although our findings for vowels do not rule out the tentative notion that more primitive vocalizations can provide relevant clues to genetic fitness, researchers should carefully consider the nature of voice samples and the degree to which these are representative of human vocal communication.

    Neural Correlates of Voice Learning with Distinctive and Non-Distinctive Faces

    Recognizing people from their voices may be facilitated by a voice's distinctiveness, in a manner similar to that which has been reported for faces. However, little is known about the neural time-course of voice learning and the role of facial information in voice learning. Based on evidence for audiovisual integration in the recognition of familiar people, we studied the behavioral and electrophysiological correlates of voice learning associated with distinctive or non-distinctive faces. We repeated twelve unfamiliar voices uttering short sentences, together with either distinctive or non-distinctive faces (depicted before and during voice presentation), in six learning-test cycles. During learning, distinctive faces increased early visually-evoked potentials (N170, P200, N250) relative to non-distinctive faces, and face distinctiveness modulated voice-elicited slow EEG activity at occipito-temporal and fronto-central electrodes. At test, unimodally-presented voices previously learned with distinctive faces were classified more quickly than voices learned with non-distinctive faces, and also more quickly than novel voices. Moreover, voices previously learned with faces elicited an N250-like component that was similar in topography to that typically observed for facial stimuli. A preliminary source localization of this voice-induced N250 was compatible with a source in the fusiform gyrus. Taken together, our findings provide support for a theory of early interaction between voice and face processing areas during both learning and voice recognition.

    The Jena Voice Learning and Memory Test (JVLMT): A standardized tool for assessing the ability to learn and recognize voices

    Humble D, Schweinberger SR, Mayer A, Dobel C, Zäske R. The Jena Voice Learning and Memory Test (JVLMT): A standardized tool for assessing the ability to learn and recognize voices. PsyArXiv. 2021. The ability to recognize someone's voice exists on a broad spectrum, with phonagnosia at the low end and super recognition at the high end. Yet there is no standardized test to measure an individual's ability to learn and recognize newly learnt voices from samples with speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 22-minute test based on item response theory that is applicable across languages. The JVLMT consists of three phases in which participants first become familiarized with eight speakers and then perform a three-alternative forced-choice recognition task, using pseudo-sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with different levels of difficulty. Test scores are based on 22 Rasch-conform items. Items were selected based on data from 232 participants and validated with 454 participants in an online study. Mean accuracy is 0.51 (SD = 0.18). The JVLMT showed high and moderate correlations with the convergent validation tests (the Bangor Voice Matching Test and the Glasgow Voice Memory Test, respectively) and a weak correlation with the discriminant validation test (Digit Span). Empirical (marginal) reliability is 0.66. Four participants with super recognition abilities and seven participants with phonagnosia were identified (at least 2 SDs above or below the mean, respectively). The JVLMT is a promising diagnostic tool to screen for voice recognition abilities in scientific and neuropsychological contexts.
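    The "Rasch-conform items" mentioned in the abstract imply a simple one-parameter response model. As an illustrative sketch (this is a generic Rasch formulation, not the authors' actual scoring code; the function name and parameters are hypothetical), the probability of a correct response depends only on the difference between person ability and item difficulty:

    ```python
    import math

    def p_correct(theta, b):
        """Rasch (one-parameter logistic) model: probability that a person
        with ability theta answers an item of difficulty b correctly
        (both on the logit scale)."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # A person whose ability equals the item difficulty has a 50% chance:
    print(p_correct(0.0, 0.0))  # → 0.5
    ```

    Under this model, item difficulties and person abilities share one scale, which is what allows items of graded difficulty (created here via acoustic (dis)similarity) to be calibrated on one sample and applied to another.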

    Group membership and the effects on visual perspective taking

    It has been hypothesized that visual perspective-taking, a basic Theory of Mind mechanism, might operate quite automatically, particularly in terms of 'what' someone else sees. We were therefore interested in whether different social categories of an agent (e.g., gender, race, nationality) influence this mental state ascription mechanism. We tested this assumption by investigating the Samson level-1 visual perspective-taking paradigm using agents with different ethnic nationality appearances. A group of self-identified Turkish and German participants were asked to make visual perspective judgments from their own perspective (self-judgment) as well as from the perspective of a prototypical Turkish or German agent (other-judgment). The respective interference effects, altercentric and egocentric interference, were measured. When making other-judgments, German participants showed increased egocentric interference for Turkish compared to German agents. Turkish participants showed no ethnic group influence on egocentric interference and reported feeling associated with the German and Turkish nationalities to a similar extent. For self-judgments, altercentric interference was of similar magnitude for both ethnic agents in both participant groups. Overall, this indicates that in level-1 visual perspective-taking, other-judgments and the related egocentric interference are sensitive to social categories and are better described as a flexible, controlled and deliberate mental state ascription mechanism. In contrast, self-judgments and the related altercentric interference effects are better described as an automatic, efficient and unconscious mental state ascription mechanism. In a broader sense, the current results suggest that we should stop considering automaticity an all-or-none principle when it comes to theory of mind processes.

    Bimodal learning effect, depicted as mean d' differences between face-voice (FV) and voice-only (V) modality conditions for pairs of consecutive study-test cycles in Exp. 1 (static faces) and Exp. 2 (dynamic faces). Note that the increasing benefit of bimodal learning from cycle pair 1_2 towards 5_6 was independent of face animation mode (static vs. dynamic). Error bars are standard errors of the mean (SEM).

    Mean d’ (±SEM) for the factors face animation mode (static vs. dynamic), learning modality (face-voice [FV] vs. voice [V]), and cycle pairs (1_2; 3_4; 5_6).

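    The figure and table captions above report recognition performance as the sensitivity index d'. As a minimal illustration (the function below is generic signal detection theory, not code from the studies), d' is the difference between the z-transformed hit rate and false-alarm rate; a log-linear correction keeps rates of exactly 0 or 1 from producing infinite z-scores:

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
        with the log-linear correction (add 0.5 to each cell)."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf  # inverse standard normal CDF
        return z(hit_rate) - z(fa_rate)

    # Example: 20 hits / 4 misses, 6 false alarms / 18 correct rejections
    print(round(d_prime(20, 4, 6, 18), 2))  # → 1.56
    ```

    A d' of 0 corresponds to chance-level discrimination, so positive FV-minus-V differences in the figure indicate a bimodal learning benefit.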

    Top: Trial procedure for the study phases depicted for the voice-only block (V) and the face-voice block (FV).

    Bottom: Trial procedure in the test phases following V and FV learning. Note that face stimuli were static pictures in Exp. 1 and dynamic videos in Exp. 2. The individual depicted in this figure has given written informed consent (as outlined in the PLOS consent form) to publish this photograph.