
    Empowering mimicry: Female leader role models empower women in leadership tasks through body posture mimicry

    In two studies we investigated the behavioral process through which visible female leader role models empower women in leadership tasks. We proposed that women tend to mimic the powerful (open) body postures of successful female role models, leading to more empowered behavior and better performance on a challenging leadership task, a process we called empowering mimicry. In Study 1, we experimentally manipulated the body posture of male and female role models and showed that 86 Swiss college women mimicked the body posture of the female (ingroup) but not the male (outgroup) role model, which led to more empowered behavior and better performance on a public speaking task. In Study 2, we investigated the boundary conditions of this process and showed that empowering mimicry does not extend to exposure to non-famous female role models among 50 Swiss college women. These findings suggest that nonverbal mimicry is one important mechanism through which female leader role models inspire women performing a challenging leadership task. From a practice perspective, our research underscores the importance of female leaders’ visibility, because visibility can drive other women’s advancement in leadership by affording them the opportunity to mimic and be empowered by successful female role models.

    Human Centric Facial Expression Recognition

    Facial expression recognition (FER) is an area of active research, both in computer science and in behavioural science. Across these domains there is evidence to suggest that humans and machines find it easier to recognise certain emotions, for example happiness, in comparison to others. Recent behavioural studies have explored human perceptions of emotion further, by evaluating the relative contribution of facial features to human sensitivity to emotion. It has been identified that certain facial regions have more salient features for certain expressions of emotion, especially when emotions are subtle in nature. For example, it is easier to detect fearful expressions when the eyes are expressive. Using this observation as a starting point for analysis, we similarly examine the effectiveness with which knowledge of facial feature saliency may be integrated into current approaches to automated FER. Specifically, we compare and evaluate the accuracy of ‘full-face’ versus upper and lower facial area convolutional neural network (CNN) modelling for emotion recognition in static images, and propose a human-centric CNN hierarchy which uses regional image inputs to leverage current understanding of how humans recognise emotions across the face. Evaluations using the CK+ dataset demonstrate that our hierarchy can enhance classification accuracy in comparison to individual CNN architectures, achieving overall true positive classification in 93.3% of cases.
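    As a rough illustration of the kind of region-based hierarchy described above, the sketch below (PyTorch) feeds the full face plus upper- and lower-face crops into separate small CNN branches and combines their features for classification. The layer sizes, the half-image split, and the seven-class output are illustrative assumptions, not the architecture evaluated in the paper.

```python
# Minimal sketch of a region-based CNN hierarchy for FER (illustrative only).
import torch
import torch.nn as nn

class RegionCNN(nn.Module):
    """Small CNN applied to one facial region (full, upper, or lower)."""
    def __init__(self, out_features=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, out_features)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class HierarchicalFER(nn.Module):
    """Combines full-face, upper-face, and lower-face branches."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.full, self.upper, self.lower = RegionCNN(), RegionCNN(), RegionCNN()
        self.classifier = nn.Linear(3 * 64, n_classes)

    def forward(self, face):
        h = face.shape[2] // 2  # split the image into upper and lower halves
        z = torch.cat([self.full(face),
                       self.upper(face[:, :, :h, :]),   # eye region
                       self.lower(face[:, :, h:, :])],  # mouth region
                      dim=1)
        return self.classifier(z)

# batch of 8 grayscale 96x96 face images -> 7 emotion logits each
logits = HierarchicalFER()(torch.randn(8, 1, 96, 96))
```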

    The Muslim headscarf and face perception: "they all look the same, don't they?"

    The headscarf conceals hair and other external features of the head (such as the ears). It may therefore have implications for the way in which such faces are perceived. Images of faces with hair (H) or, alternatively, covered by a headscarf (HS) were used in three experiments. In Experiment 1 participants saw both H and HS faces in a yes/no recognition task in which the external features either remained the same between learning and test (Same) or switched (Switch). Performance was similar for H and HS faces in both the Same and Switch conditions, but in the Switch condition it dropped substantially compared to the Same condition. This implies that the mere presence of the headscarf does not reduce performance; rather, the change in the type of external feature (hair or headscarf) causes the drop in performance. In Experiment 2, which used eye-tracking methodology, almost all fixations were to internal regions, and there was no difference in the proportion of fixations to external features between the Same and Switch conditions, implying that the headscarf influenced processing by virtue of extrafoveal viewing. In Experiment 3, similarity ratings of the internal features of pairs of HS faces were higher than those of pairs of H faces, confirming that the internal and external features of a face are perceived as a whole rather than as separate components. Funded by the Educational Charity of the Federation of Ophthalmic and Dispensing Opticians.

    Examining ecological validity in social interaction: problems of visual fidelity, gaze, and social potential

    Social interaction is an essential part of the human experience, and much work has been done to study it. However, several common approaches to examining social interactions in psychological research may inadvertently either constrain the observed behaviour unnaturally, causing it to deviate from naturalistic performance, or introduce unwanted sources of variance. In particular, these sources are the differences between naturalistic and experimental behaviour that arise from changes in visual fidelity (quality of the observed stimuli), gaze (whether it is controlled for in the stimuli), and social potential (potential for the stimuli to provide actual interaction). We expand on these possible sources of extraneous variance and why they may be important. We review the ways in which experimenters have developed novel designs to remove these sources of extraneous variance. New experimental designs using a ‘two-person’ approach are argued to be one of the most effective ways to develop more ecologically valid measures of social interaction, and we suggest that future work on social interaction should use these designs wherever possible.

    Configural and featural processing in humans with congenital prosopagnosia.

    Prosopagnosia describes the failure to recognize faces, a deficiency that can be devastating in social interactions. Cases of acquired prosopagnosia have often been described over the last century. In recent years, more and more cases of congenital prosopagnosia (CP) have been reported. In the present study we tried to determine possible cognitive characteristics of this impairment. We used scrambled and blurred images of faces, houses, and sugar bowls to separate featural processing strategies from configural processing strategies. This served to investigate whether congenital prosopagnosia results from process-specific deficiencies or whether it is a face-specific impairment. Using a delayed matching paradigm, 6 individuals with CP and 6 matched healthy controls indicated whether an intact test stimulus had the same identity as a previously presented scrambled or blurred cue stimulus. Analyses of d′ values indicated that congenital prosopagnosia is a face-specific deficit, but that this shortcoming is particularly pronounced for processing configural facial information.
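    For readers unfamiliar with the sensitivity measure used here, the following minimal sketch shows a standard way to compute d′ from hit and false-alarm counts in a same/different matching task; the trial counts and the log-linear correction are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of a d' (sensitivity) computation for a same/different task.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    to avoid infinite z-scores when a rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. 40 "same" and 40 "different" trials in one condition (made-up counts)
print(d_prime(hits=32, misses=8, false_alarms=10, correct_rejections=30))
```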

    Quantitative Analysis of BTF3, HINT1, NDRG1 and ODC1 Protein Over-Expression in Human Prostate Cancer Tissue

    Prostate carcinoma is the most common cancer in men, with few quantifiable biomarkers. Prostate cancer biomarker discovery has been hampered by subjective analysis of protein expression in tissue sections. The unbiased, quantitative immunohistochemical approach provided here for the diagnosis and stratification of prostate cancer could overcome this problem. Antibodies against four proteins (BTF3, HINT1, NDRG1 and ODC1) were used on a prostate tissue array (>500 individual tissue cores from 82 patients, comprising 41 matched case pairs in which one patient in each pair had biochemical recurrence). Protein expression, quantified in an unbiased manner using an automated analysis protocol in ImageJ software, was increased in malignant vs non-malignant prostate (by 2-2.5-fold, p < 0.0001). Operating characteristics indicate sensitivity in the range of 0.68 to 0.74; combining markers in a logistic regression model demonstrates further improvement in diagnostic power. Triple-labeled immunofluorescence (BTF3, HINT1 and NDRG1) in the tissue array showed a significant (p < 0.02) change in co-localization coefficients for BTF3 and NDRG1 co-expression in biochemical-relapse vs non-relapse cancer epithelium. BTF3, HINT1, NDRG1 and ODC1 could be developed as epithelial-specific biomarkers for tissue-based diagnosis and stratification of prostate cancer.
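    A minimal sketch of how several markers can be combined in a logistic regression model to assess diagnostic power is given below; the synthetic intensity values, sample size, and use of scikit-learn are illustrative assumptions and do not reproduce the study's analysis.

```python
# Minimal sketch: combining per-core marker intensities in a logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# synthetic data: malignant cores (label 1) have roughly 2-fold higher intensities
labels = rng.integers(0, 2, n)
X = rng.normal(1.0, 0.3, (n, 4)) * (1 + labels[:, None])  # columns: BTF3, HINT1, NDRG1, ODC1

model = LogisticRegression().fit(X, labels)
print("AUC of combined-marker model:",
      roc_auc_score(labels, model.predict_proba(X)[:, 1]))
```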

    Felt power explains the link between position power and experienced emotions

    The approach/inhibition theory of Keltner, Gruenfeld, and Anderson (2003) predicts that powerful people should feel more positive and less negative emotion. To date, results of studies investigating this prediction have been inconsistent. We address this inconsistency with four studies in which we investigated the role of two different conceptualizations of power: felt power and position power. In Study 1, participants were made to feel more or less powerful and we tested how their felt power was related to different emotional states. In Studies 2, 3, and 4, participants were assigned to either a high- or a low-power role and engaged in an interaction with a virtual human, after which they reported how powerful they felt and which emotions they experienced during the interaction. We meta-analytically combined the results of the four studies and found that felt power was positively related to positive emotions (happiness and serenity) and negatively related to negative emotions (fear, anger, and sadness), whereas position power showed no significant overall relation with any of the emotional states. Importantly, felt power mediated the relationship between position power and emotion. In summary, we show that how powerful a person feels in a given social interaction is the driving force linking the person’s position power to his or her emotional states.
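    The mediation claim can be illustrated with a simple regression-based sketch (simulated data, statsmodels); the effect sizes and the single-sample setup are assumptions for illustration and do not reproduce the authors' meta-analytic procedure.

```python
# Minimal sketch of a simple mediation test: position power -> felt power -> emotion.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
position = rng.integers(0, 2, n).astype(float)               # 0 = low-power role, 1 = high-power role
felt = 0.5 * position + rng.normal(0, 1, n)                   # path a: position -> felt power
emotion = 0.6 * felt + 0.0 * position + rng.normal(0, 1, n)   # path b: felt power -> emotion, no direct effect

# path a: regress felt power on position power
a = sm.OLS(felt, sm.add_constant(position)).fit().params[1]
# paths b and c': regress emotion on position power and felt power together
model_b = sm.OLS(emotion, sm.add_constant(np.column_stack([position, felt]))).fit()
c_prime, b = model_b.params[1], model_b.params[2]
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```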

    Featural, configural, and holistic face-processing strategies evoke different scan patterns

    In two experiments we investigated the role of eye movements during face processing. In Experiment 1, using modified faces with primarily featural (scrambled faces) or configural (blurred faces) information as cue stimuli, we manipulated the way participants processed subsequently presented intact faces. In a sequential same-different task, participants decided whether the identity of an intact test face matched a preceding scrambled or blurred cue face. Analysis of eye movements for test faces showed more interfeatural saccades when they followed blurred faces, and longer gaze duration within the same feature when they followed scrambled faces. In Experiment 2, we used a similar paradigm except that test faces were cued by intact faces, low-level blurred stimuli, or second-order scrambled stimuli (features were cut out but maintained their first-order relations). We found that in the intact condition participants performed fewer interfeatural saccades than in the low-level blurred condition and had shorter gaze durations than in the second-order scrambled condition. Moreover, participants fixated the centre of the test face to grasp the information from the whole face. Our findings suggest a differentiation between featural, configural, and holistic processing strategies, which can be associated with specific patterns of eye movements.
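    As a concrete illustration of the two eye-movement measures mentioned above, the short sketch below counts interfeatural saccades and computes gaze durations from a toy fixation sequence; the ROI labels and durations are made up for illustration and are not the authors' analysis code.

```python
# Minimal sketch: interfeatural saccades and gaze durations from a fixation sequence.
fixations = [("left_eye", 210), ("left_eye", 180), ("mouth", 250),
             ("right_eye", 190), ("right_eye", 220)]  # (feature ROI, duration in ms)

# interfeatural saccades: transitions between fixations on different features
interfeatural = sum(1 for prev, cur in zip(fixations, fixations[1:]) if prev[0] != cur[0])

# gaze duration: summed duration of consecutive fixations within the same feature
runs, current = [], [fixations[0]]
for fix in fixations[1:]:
    if fix[0] == current[-1][0]:
        current.append(fix)
    else:
        runs.append(sum(d for _, d in current))
        current = [fix]
runs.append(sum(d for _, d in current))

print("interfeatural saccades:", interfeatural,
      "| mean gaze duration (ms):", sum(runs) / len(runs))
```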

    Studying social interactions through immersive virtual environment technology: Virtues, pitfalls, and future challenges

    The goal of the present review is to explain how immersive virtual environment technology (IVET) can be used for the study of social interactions and how the use of virtual humans in immersive virtual environments can advance research and application in many different fields. Researchers studying individual differences in social interactions are typically interested in keeping the behavior and the appearance of the interaction partner constant across participants. With IVET, researchers have full control over the interaction partners and can standardize them while still keeping the simulation realistic. Virtual simulations are valid: growing evidence shows that studies conducted with IVET can indeed replicate well-known findings of social psychology. Moreover, IVET allows researchers to subtly manipulate characteristics of the environment (e.g., visual cues to prime participants) or of the social partner (e.g., his/her race) to investigate their influence on participants' behavior and cognition. Furthermore, manipulations that would be difficult or impossible in real life (e.g., changing participants' height) can easily be achieved with IVET. Besides these advantages for theoretical research, we explore the most recent training and clinical applications of IVET, its integration with other technologies (e.g., social sensing), and future challenges for researchers (e.g., making the communication between virtual humans and participants smoother).