
    The obligatory nature of holistic processing of faces in social judgments

    Using a composite-face paradigm, we show that social judgments from faces rely on holistic processing. Participants judged facial halves more positively when aligned with trustworthy than with untrustworthy halves, despite instructions to ignore the aligned parts (experiment 1). This effect was substantially reduced when the faces were inverted (experiments 2 and 3) and when the halves were misaligned (experiment 3). In all three experiments, judgments were affected to a larger extent by the to-be-attended than by the to-be-ignored halves, suggesting that there is partial control of holistic processing. However, after rapid exposures to faces (33 to 100 ms), judgments of trustworthy and untrustworthy halves aligned with incongruent halves were indistinguishable (experiment 4a). Differences emerged with exposures longer than 100 ms. In contrast, when participants were not instructed to attend to specific facial parts, these differences did not emerge (experiment 4b). These findings suggest that the initial pass of information is holistic and that additional time allows participants to partially ignore the task-irrelevant context.
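
    As a rough illustration of how the paradigm's key contrast might be scored, here is a minimal sketch in Python; the trial table and its column names (subject, alignment, congruent, rating) are hypothetical, not the paper's actual pipeline.

```python
# Minimal sketch: scoring a composite-face congruency effect.
# Assumes a hypothetical trial-level table with columns:
#   subject, alignment ("aligned"/"misaligned"), congruent (bool), rating
import pandas as pd

def congruency_effect(trials: pd.DataFrame) -> pd.DataFrame:
    """Mean congruent-minus-incongruent rating per subject and alignment.

    Holistic processing predicts a larger congruency effect for aligned
    than for misaligned (or inverted) composites.
    """
    cells = (trials
             .groupby(["subject", "alignment", "congruent"])["rating"]
             .mean()
             .unstack("congruent"))
    cells["effect"] = cells[True] - cells[False]
    return cells["effect"].unstack("alignment")
```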

    Modeling Social Perception of Faces

    The face is our primary source of visual information for identifying people and reading their emotional and mental states. With the exception of prosopagnosics (who are unable to recognize faces) and those with disorders of social cognition such as autism, people are extremely adept at these two tasks. However, our cognitive powers in this regard come at the price of reading too much into the human face: the face is often treated as a window into a person's true nature. Given the agreement in social perception of faces, this paper argues that it should be possible to model this perception.
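
    One way to make "modeling this perception" concrete is to treat faces as points in a statistical face space and recover the direction along which a judgment varies, in the spirit of data-driven approaches. The sketch below is illustrative only; the random stand-in data and the plain least-squares fit are assumptions, not the paper's method.

```python
# Minimal sketch: a linear, data-driven model of a social judgment.
# Stand-in data: faces as points in a statistical face space (e.g., PCA
# of shape/reflectance parameters) with one mean rating per face.
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_dims = 300, 50
faces = rng.standard_normal((n_faces, n_dims))  # stand-in face coordinates
ratings = rng.standard_normal(n_faces)          # stand-in mean judgments

# Least-squares weights give the direction in face space along which the
# judgment changes fastest: a simple linear "model" of the perception.
weights, *_ = np.linalg.lstsq(faces, ratings - ratings.mean(), rcond=None)
direction = weights / np.linalg.norm(weights)

# Moving a face along +/- this direction should raise or lower the
# predicted judgment (the step size is arbitrary in this toy example).
exaggerated = faces[0] + 3.0 * direction
```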

    Surface-based information mapping reveals crossmodal vision-action representations in human parietal and occipitotemporal cortex.

    Many lines of evidence point to a tight linkage between the perceptual and motoric representations of actions. Numerous demonstrations show how the visual perception of an action engages compatible activity in the observer's motor system. This is seen for both intransitive actions (e.g., in the case of unconscious postural imitation) and transitive actions (e.g., grasping an object). Although the discovery of “mirror neurons” in macaques has inspired explanations of these processes in humans, the evidence for areas in the human brain that similarly form a crossmodal visual/motor representation of actions remains incomplete. To address this, in the present study, participants performed and observed hand actions while being scanned with functional MRI. We took a data-driven approach by applying whole-brain information mapping using a multivoxel pattern analysis (MVPA) classifier, performed on reconstructed representations of the cortical surface. The aim was to identify regions in which local voxelwise patterns of activity can distinguish among different actions, across the visual and motor domains. Experiment 1 tested intransitive, meaningless hand movements, whereas experiment 2 tested object-directed actions (all right-handed). Our analyses of both experiments revealed crossmodal action regions in the lateral occipitotemporal cortex (bilaterally) and in the left postcentral gyrus/anterior parietal cortex. Furthermore, in experiment 2 we identified a gradient of bias in the patterns of information in the left hemisphere postcentral/parietal region. The postcentral gyrus carried more information about the effectors used to carry out the action (fingers vs. whole hand), whereas anterior parietal regions carried more information about the goal of the action (lift vs. punch). Taken together, these results provide evidence for common neural coding in these areas of the visual and motor aspects of actions, and demonstrate further how MVPA can contribute to our understanding of the nature of distributed neural representations.
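
    To make the crossmodal decoding logic concrete, here is a minimal sketch with placeholder data: a linear classifier trained to distinguish actions from patterns recorded while observing is tested on patterns recorded while executing. The arrays, patch size, and classifier choice are assumptions, not the study's exact pipeline.

```python
# Minimal sketch: crossmodal MVPA in the spirit of the paper's analysis.
# Train on patterns from the visual (observation) domain, test on
# patterns from the motor (execution) domain. Arrays are placeholders
# for real activity patterns within one surface patch.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 60, 200                 # one searchlight/surface patch
X_visual = rng.standard_normal((n_trials, n_voxels))
X_motor = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)             # two actions, e.g. lift vs. punch

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_visual, y)                         # train in the visual domain
crossmodal_acc = clf.score(X_motor, y)       # test in the motor domain

# Above-chance accuracy in a patch suggests a common (crossmodal) code;
# repeating this over all surface patches yields an information map.
print(f"crossmodal decoding accuracy: {crossmodal_acc:.2f}")
```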

    Validation of data-driven computational models of social perception of faces

    People rapidly form impressions from facial appearance, and these impressions affect social decisions. We argue that data-driven, computational models are the best available tools for identifying the source of such impressions. Here we validate seven computational models of social judgments of faces: attractiveness, competence, dominance, extroversion, likability, threat, and trustworthiness. The models manipulate both face shape and reflectance (i.e., cues such as pigmentation and skin smoothness). We show that human judgments track the models’ predictions (Experiment 1) and that the models differentiate between different judgments, though this differentiation is constrained by the similarity of the models (Experiment 2). We also make the validated stimuli available for academic research: seven databases containing 25 identities manipulated in the respective model to take on seven different dimension values, ranging from −3 SD to +3 SD (175 stimuli in each database). Finally, we show how the computational models can be used to control for variance shared between models. For example, even for highly correlated dimensions (e.g., dominance and threat), we can identify cues specific to each dimension and, consequently, generate faces that vary only on these cues.
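
    The last point, isolating cues specific to one of two correlated dimensions, can be sketched as an orthogonalization of dimension vectors in the model's parameter space. The vectors below are random stand-ins, and the Gram-Schmidt step is one plausible way to implement "controlling for shared variance", not necessarily the authors' exact procedure.

```python
# Minimal sketch: isolating cues specific to one of two correlated
# dimensions (e.g., dominance vs. threat), assuming each dimension is a
# vector of weights over the model's shape/reflectance parameters.
import numpy as np

def specific_component(target: np.ndarray, other: np.ndarray) -> np.ndarray:
    """Part of `target` orthogonal to `other` (one Gram-Schmidt step):
    the cues that drive the target judgment but not the other one."""
    other_unit = other / np.linalg.norm(other)
    return target - (target @ other_unit) * other_unit

rng = np.random.default_rng(2)
dominance = rng.standard_normal(100)
threat = 0.8 * dominance + 0.2 * rng.standard_normal(100)  # highly correlated

threat_specific = specific_component(threat, dominance)
# A face moved along `threat_specific` varies in threat-related cues
# while staying constant on dominance.
```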

    Facial and Vocal Cues in Perceptions of Trustworthiness

    The goal of the present research was to study the relative role of facial and acoustic cues in the formation of trustworthiness impressions. Furthermore, we investigated the relationship between perceived trustworthiness and perceivers’ confidence in their judgments. Twenty-five young adults watched a number of short clips in which the video and audio channels were digitally aligned to form five different combinations of actors’ face and voice trustworthiness levels (neutral face + neutral voice, neutral face + trustworthy voice, neutral face + non-trustworthy voice, trustworthy face + neutral voice, and non-trustworthy face + neutral voice). Participants provided subjective ratings of the trustworthiness of the actor in each video, and indicated their level of confidence in each of those ratings. Results revealed a main effect of face-voice channel combination on trustworthiness ratings, and no significant effect of channel combination on confidence ratings. We conclude that there is a clear superiority effect of facial over acoustic cues in the formation of trustworthiness impressions, propose a method for future investigation of the judgment-confidence link, and outline the practical implications of the experiment.
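
    The reported main effect of channel combination corresponds to a one-way repeated-measures test over the five conditions. A minimal sketch, assuming a long-format table with hypothetical columns subject, condition, and rating (one mean rating per subject and condition):

```python
# Minimal sketch: main effect of face-voice channel combination on
# within-subject trustworthiness ratings (toy data, hypothetical labels:
# NF/TF/UF = neutral/trustworthy/untrustworthy face, likewise for voice).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

conditions = ["NF+NV", "NF+TV", "NF+UV", "TF+NV", "UF+NV"]
rng = np.random.default_rng(3)
ratings = pd.DataFrame([
    {"subject": s, "condition": c, "rating": rng.normal(5, 1)}
    for s in range(25) for c in conditions
])

# One observation per subject x condition cell, as AnovaRM requires.
res = AnovaRM(data=ratings, depvar="rating",
              subject="subject", within=["condition"]).fit()
print(res)  # F and p for the main effect of channel combination
```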