
    A shape-based account for holistic face processing

    Faces are processed holistically, such that selective attention to one face part often fails to exclude the influence of the other parts. In this study, three experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive discrimination experience. Results show that facial shape information alone is sufficient to elicit the composite face effect (CFE), one of the most convincing demonstrations of holistic processing, whereas facial surface information is unnecessary (Experiment 1). The CFE is eliminated when faces differ only in surface but not shape information, suggesting that variation in facial shape information is necessary to observe holistic face processing (Experiment 2). Removing three-dimensional (3D) facial shape information also eliminates the CFE, indicating the necessity of 3D shape information for holistic face processing (Experiment 3). Moreover, participants showed similar holistic processing for faces with and without extensive discrimination experience (i.e., own- and other-race faces), suggesting that generalization of holistic processing to nonexperienced faces requires facial shape information but does not necessarily require further individuation experience. These results provide compelling evidence that facial shape information underlies holistic face processing. This shape-based account not only offers a consistent explanation for previous studies of holistic face processing, but also suggests a new basis, in addition to expertise, for the generalization of holistic processing to different types of faces and to nonface objects.
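
    The abstract does not say how the CFE is quantified. In complete-design composite tasks it is commonly expressed as the difference in sensitivity (d') between misaligned and aligned trials; the following Python sketch illustrates that computation with entirely hypothetical hit and false-alarm rates.

```python
# A minimal sketch of one common way to quantify the composite face effect
# (CFE): compare sensitivity (d') for judgments of the attended face half
# between aligned and misaligned composites. All rates below are hypothetical.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical group-level rates for "same" trials (hits) and "different"
# trials (false alarms) in the two alignment conditions.
aligned = d_prime(hit_rate=0.72, fa_rate=0.35)
misaligned = d_prime(hit_rate=0.85, fa_rate=0.20)

# A positive index indicates holistic interference: the task-irrelevant half
# hurts performance more when the two halves are aligned.
cfe_index = misaligned - aligned
print(f"CFE index (d' misaligned - d' aligned): {cfe_index:.2f}")
```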

    Beyond Faces and Expertise: Facelike Holistic Processing of Nonface Objects in the Absence of Expertise

    Holistic processing, the tendency to perceive objects as indecomposable wholes, has long been viewed as a process specific to faces or objects of expertise. Although current theories differ in what causes holistic processing, they share a fundamental constraint on its generalization: nonface objects cannot elicit facelike holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. Moreover, weakening the salience of Gestalt information in these patterns reduced holistic processing of them, which indicates that Gestalt information plays a crucial role in holistic processing. Therefore, holistic processing can be achieved not only via a top-down route based on expertise, but also via a bottom-up route relying merely on object-based information. The finding that facelike holistic processing can extend beyond the domains of faces and objects of expertise poses a challenge to current dominant theories.

    Holistic processing of static and moving faces

    Humans' face-processing ability develops and matures through extensive experience in perceiving, recognizing, and interacting with faces, which move most of the time. However, how facial movement affects one core aspect of this ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights into what underlies holistic face processing, how the sources of information supporting it interact, and why facial motion may affect face recognition and holistic face processing differently.

    Personally familiar faces: Higher precision of memory for idiosyncratic than for categorical information

    Many studies have demonstrated that we can identify a familiar face in an image much better than an unfamiliar one, especially when various degradations or changes (e.g., image distortion, blurring, or new illumination) have been applied, but few have asked how different types of facial information from familiar faces are stored in memory. Here we investigated how well we remember personally familiar faces in terms of their identity, gender, and race. In three experiments, based on faces personally familiar to our participants, we created sets of face morphs that parametrically varied those faces in terms of identity, sex, or race, using a three-dimensional morphable face model. For each familiar face, we presented these morphs together with the original face and asked participants to pick the correct “real” face among the morph distracters in each set. They were instructed to pick the face that most closely resembled their memory of that familiar person. We found that participants excelled at retrieving the correct familiar faces among the distracters when the faces were manipulated in terms of their idiosyncratic features (their identity information), but they were less sensitive to changes along the gender and race continua. Image similarity analyses indicate that the observed difference cannot be attributed to different levels of image similarity between manipulations. These findings demonstrate that idiosyncratic and categorical face information is represented differently in memory, even for the faces of people we are very familiar with. Implications for current models of face recognition are discussed.
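
    For readers unfamiliar with morphable face models: each face is represented as a vector of model coefficients, and a parametric morph between two faces can be computed as a convex combination of their vectors. A minimal sketch, with random vectors standing in for real model coefficients:

```python
# Sketch of parametric face morphing in a morphable-model coefficient space.
# The arrays below are hypothetical stand-ins for real model coefficients.
import numpy as np

rng = np.random.default_rng(0)
familiar_face = rng.normal(size=199)    # coefficients of the original face
endpoint_face = rng.normal(size=199)    # e.g., an identity/sex/race endpoint

def morph(a, b, alpha):
    """Linear interpolation: alpha=0 returns face a, alpha=1 returns face b."""
    return (1.0 - alpha) * a + alpha * b

# A parametric set of morph distracters along the continuum between the
# familiar face and the endpoint, as used to probe memory precision.
morph_levels = [0.2, 0.4, 0.6, 0.8]
distracters = [morph(familiar_face, endpoint_face, a) for a in morph_levels]
```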

    Average faces: How does the averaging process change faces physically and perceptually?

    Average faces have been used frequently in face recognition studies, either as a theoretical concept (e.g., the face norm) or as a tool to manipulate facial attributes (e.g., modifying identity strength). Nonetheless, how the face-averaging process (the creation of average faces from an increasing number of faces) changes the resulting averaged faces and our ability to differentiate between them remains to be elucidated. Here we addressed these questions by combining 3D face averaging, eye-movement tracking, and the computation of image-based face similarity. Participants judged whether two average faces showed the same person while we systematically increased their averaging level (i.e., the number of faces being averaged). Our results showed, with increasing averaging, both a nonlinear increase in the computational similarity between the resulting average faces and a nonlinear decrease in face discrimination performance. Participants' performance dropped from near-ceiling level when two different faces had been averaged together to chance level when 80 faces were mixed. We also found a nonlinear relationship between face similarity and face discrimination performance, which was fitted well by an exponential function. Furthermore, when the comparison task became more challenging, participants made more fixations onto the faces. Nonetheless, the distribution of fixations across facial features (eyes, nose, mouth, and the center area of the face) remained unchanged. These results not only set new constraints on the theoretical characterization of the average face and its role in establishing face norms, but also offer practical guidance for creating approximate face norms to manipulate face identity.
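
    As a rough illustration of the reported exponential relationship between image-based similarity and discrimination performance, the sketch below fits an exponential decay toward chance (0.5 in a same/different task); the data points and the exact functional form are hypothetical.

```python
# Sketch: fit an exponential function to the (hypothetical) relationship
# between image-based face similarity and discrimination accuracy.
import numpy as np
from scipy.optimize import curve_fit

similarity = np.array([0.10, 0.30, 0.50, 0.70, 0.85, 0.95])
accuracy = np.array([0.97, 0.93, 0.85, 0.72, 0.60, 0.52])

def exp_decay(x, a, b):
    """Accuracy decays exponentially toward chance (0.5) as similarity grows."""
    return 0.5 + a * np.exp(-b * x)

params, _ = curve_fit(exp_decay, similarity, accuracy, p0=(0.5, 1.0))
print("fitted a, b:", params)
```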

    Varying sex and identity of faces affects face categorization differently in humans and computational models

    Our faces display socially important sex and identity information. How perceptually independent are these facial characteristics? Here, we used a sex categorization task to investigate how changing faces in terms of either their sex or their identity affects sex categorization of those faces, whether these manipulations affect sex categorization similarly when the original faces are personally familiar or unknown, and whether computational models trained for sex classification respond similarly to human observers. Our results show that varying faces along either the sex or the identity dimension affects their sex categorization. When the sex was swapped (e.g., female faces were made to look male, Experiment 1), sex categorization performance differed from that with the original, unchanged faces, and significantly more so for people who were familiar with the original faces than for those who were not. When the identity of the faces was manipulated by caricaturing or anti-caricaturing them (manipulations that augment or diminish idiosyncratic facial information, Experiment 2), sex categorization performance for caricatured, original, and anti-caricatured faces increased in that order, independently of face familiarity. Moreover, our face manipulations had different effects on computational models trained for sex classification and elicited different patterns of responses in humans and computational models. These results not only support the notion that the sex and identity of faces are processed integratively by human observers but also demonstrate that computational models of face categorization may not capture key characteristics of human face categorization.
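
    Caricaturing and anti-caricaturing are typically implemented by scaling a face's deviation from an average face in a face-space representation: strengths above 1 augment idiosyncratic information, strengths below 1 diminish it. A minimal sketch with hypothetical coefficient vectors:

```python
# Sketch of caricaturing/anti-caricaturing as scaling a face's deviation
# from the average face. Vectors are hypothetical model coefficients.
import numpy as np

rng = np.random.default_rng(1)
average_face = rng.normal(size=199)
original_face = rng.normal(size=199)

def caricature(face, mean, strength):
    """strength > 1 exaggerates idiosyncratic information; < 1 diminishes it."""
    return mean + strength * (face - mean)

caricatured = caricature(original_face, average_face, strength=1.5)
anti_caricatured = caricature(original_face, average_face, strength=0.5)
```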

    The neural coding of face and body orientation in occipitotemporal cortex

    Face and body orientation convey important information for understanding other people's actions, intentions, and social interactions. Several occipitotemporal areas have been shown to respond differently to faces or bodies of different orientations. However, whether face and body orientation are processed by partially overlapping or completely separate brain networks remains unclear, as the neural coding of face and body orientation is often investigated separately. Here, we recorded participants’ brain activity using fMRI while they viewed faces and bodies shown from three different orientations and attended to either orientation or identity information. Using multivoxel pattern analysis, we investigated which brain regions process face and body orientation, respectively, and which regions encode both face and body orientation in a stimulus-independent manner. We found that patterns of neural responses evoked by different stimulus orientations in the occipital face area, extrastriate body area, lateral occipital complex, and right early visual cortex could generalise across faces and bodies, suggesting a stimulus-independent encoding of person orientation in occipitotemporal cortex. This finding was consistent across functionally defined regions of interest and a whole-brain searchlight approach. The fusiform face area responded to face but not body orientation, suggesting that orientation responses in this area are face-specific. Moreover, neural responses to orientation were remarkably consistent regardless of whether participants attended to orientation or not. Together, these results demonstrate that face and body orientation are processed in a partially overlapping brain network, with a stimulus-independent neural code for face and body orientation in occipitotemporal cortex.
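
    The cross-stimulus generalization test can be summarized as: train a classifier on orientation labels using face-evoked response patterns, test it on body-evoked patterns, and vice versa. A minimal sketch of that logic, with simulated patterns in place of real ROI data and an assumed classifier:

```python
# Sketch of cross-stimulus generalization decoding: above-chance accuracy
# when training on faces and testing on bodies (and vice versa) would
# indicate a stimulus-independent orientation code. All data are simulated.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 90, 200
orientations = np.repeat([0, 1, 2], 30)   # three viewing orientations

face_patterns = rng.normal(size=(n_trials, n_voxels))
body_patterns = rng.normal(size=(n_trials, n_voxels))

def cross_decode(train_X, train_y, test_X, test_y):
    """Train on one stimulus category, test on the other."""
    return LinearSVC().fit(train_X, train_y).score(test_X, test_y)

# Average both decoding directions.
acc = 0.5 * (cross_decode(face_patterns, orientations, body_patterns, orientations)
             + cross_decode(body_patterns, orientations, face_patterns, orientations))
print(f"cross-generalization accuracy: {acc:.2f} (chance = 0.33)")
```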

    Holistic processing, contact, and the other-race effect in face recognition

    Face recognition, holistic processing, and the processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring OREs in a set of face recognition tasks and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and the processing of configural information, but not in tasks testing either holistic processing or the processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and the processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing.
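
    The correlational logic is straightforward: compute each participant's ORE per task as own-race minus other-race performance, then correlate the OREs across tasks. A minimal sketch with hypothetical data:

```python
# Sketch of the cross-task ORE correlation analysis. A near-zero correlation
# (as the study reports) would suggest the two OREs arise from different
# underlying processes. All accuracies below are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_subjects = 40
# Rows = subjects, columns = (own-race, other-race) accuracy.
memory_acc = rng.uniform(0.5, 1.0, size=(n_subjects, 2))
holistic_acc = rng.uniform(0.5, 1.0, size=(n_subjects, 2))

ore_memory = memory_acc[:, 0] - memory_acc[:, 1]
ore_holistic = holistic_acc[:, 0] - holistic_acc[:, 1]

r, p = pearsonr(ore_memory, ore_holistic)
print(f"cross-task ORE correlation: r = {r:.2f}, p = {p:.3f}")
```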

    Funktionelle Prinzipien der Objekt- und Gesichtserkennung (Functional Principles of Object and Face Recognition)


    Cortical Representation of Tactile Stickiness Evoked by Skin Contact and Glove Contact

    Even when we are wearing gloves, we can easily detect whether a surface we are touching is sticky or not. However, we know little about the similarities between the brain activations elicited by such glove contact and by direct contact with our bare skin. In this functional magnetic resonance imaging (fMRI) study, we investigated which brain regions represent stickiness intensity information obtained under both touch conditions, i.e., skin contact and glove contact. First, we searched for neural representations mediating stickiness in each touch condition separately and found regions responding to both, mainly in the supramarginal gyrus and the secondary somatosensory cortex. Second, we explored whether surface stickiness is encoded in common neural patterns irrespective of how participants touched the sticky stimuli. Using a cross-condition decoding method, we tested whether stickiness intensities could be decoded from fMRI signals evoked by skin contact using a classifier trained on the responses elicited by glove contact, and vice versa. We found shared neural encoding patterns in the bilateral angular gyri and the inferior frontal gyrus (IFG), suggesting that these areas represent stickiness intensity information regardless of how participants touched the sticky stimuli. Interestingly, we observed that the neural encoding patterns of these areas were reflected in participants’ intensity ratings. This study revealed common and distinct brain activation patterns of tactile stickiness across two different touch conditions, which may broaden our understanding of the neural mechanisms underlying surface texture perception.
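
    The cross-condition decoding scheme maps directly onto a train-on-one-condition, test-on-the-other analysis. A minimal sketch, with hypothetical patterns and labels and an assumed classifier:

```python
# Sketch of cross-condition decoding of stickiness intensity: train on
# glove-contact patterns, test on skin-contact patterns, and vice versa.
# Shapes, labels, and the classifier choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials, n_voxels = 60, 150
stickiness_level = np.repeat([0, 1, 2], 20)   # e.g., low / medium / high

glove_patterns = rng.normal(size=(n_trials, n_voxels))
skin_patterns = rng.normal(size=(n_trials, n_voxels))

acc_glove_to_skin = LogisticRegression(max_iter=1000).fit(
    glove_patterns, stickiness_level).score(skin_patterns, stickiness_level)
acc_skin_to_glove = LogisticRegression(max_iter=1000).fit(
    skin_patterns, stickiness_level).score(glove_patterns, stickiness_level)

# Above-chance accuracy in both directions would indicate a touch-independent
# encoding of stickiness intensity in the tested region.
print(acc_glove_to_skin, acc_skin_to_glove)
```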