11 research outputs found

    Caucasian Infants’ Attentional Orienting to Own- and Other-Race Faces

    Infants show preferential attention toward faces and detect faces embedded within complex naturalistic scenes. Newborn infants are insensitive to race but rapidly develop differential processing of own- and other-race faces. In the present study, we investigated the development of attentional orienting toward own- and other-race faces embedded within naturalistic scenes. Infants aged six, nine, and twelve months did not differ in the speed of orienting to own- and other-race faces, but other-race faces held infants’ visual attention for longer. We also found a clear developmental progression in attentional capture and holding, with older infants orienting to faces faster and fixating them for longer. Results are interpreted within the context of the two-process model of face processing.

    Infants show pupil dilatory responses to happy and angry facial expressions

    Facial expressions are one way in which infants and adults communicate emotion. Infants scan expressions similarly to adults, yet it remains unclear whether they are receptive to the affective information these expressions convey. The current study investigates six-, nine-, and twelve-month-old infants’ (N = 146) pupillary responses to the six ‘basic’ emotional expressions (happy, sad, surprise, fear, anger, and disgust). To do this we use dynamic stimuli and gaze-contingent eye-tracking to simulate brief interactive exchanges, alongside a static control condition. Infants’ arousal responses were stronger for dynamic than for static stimuli. For dynamic stimuli, we found that, compared to neutral, infants showed dilatory responses to happy and angry expressions only. Although previous work has shown that infants can discriminate perceptually between facial expressions, our data suggest that sensitivity to the affective content of all six basic emotional expressions may not fully emerge until later in ontogeny.
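
    The abstract does not describe the analysis pipeline, but pupil dilatory responses are conventionally expressed relative to a pre-stimulus baseline. A minimal sketch of that standard preprocessing step, with all names and the toy trace hypothetical rather than drawn from the study:

```python
import numpy as np

def baseline_corrected_pupil(trace, baseline_samples=50):
    """Express a pupil-diameter trace as change from a pre-stimulus baseline.

    The mean of the first `baseline_samples` samples (recorded before
    stimulus onset) is subtracted from the whole trace, so dilation
    shows up as positive values relative to zero.
    """
    baseline = np.mean(trace[:baseline_samples])
    return trace - baseline

# Toy trace: constant baseline, then a step dilation after sample 50
trace = np.concatenate([np.full(50, 3.0), np.full(50, 3.4)])
corrected = baseline_corrected_pupil(trace)
# The baseline window centres on zero; the post-onset window sits near +0.4
```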

    Capacity limits in face detection

    Face detection is a prerequisite for further face processing, such as extracting identity or semantic information. Those later processes appear to be subject to strict capacity limits, but the location of the bottleneck is unclear. In particular, it is not known whether the bottleneck occurs before or after face detection. Here we present a novel test of capacity limits in face detection. Across four behavioural experiments, we assessed detection of multiple faces via observers' ability to differentiate between two types of display. Fixed displays comprised items of the same type (all faces or all non-faces). Mixed displays combined faces and non-faces. Critically, a ‘fixed’ response requires all items to be processed. We found that additional faces could be detected with no cost to efficiency, and that this capacity-free performance was contingent on visual context. The observed pattern was not specific to faces, but detection was more efficient for faces overall. Our findings suggest that strict capacity limits in face perception occur after the detection step.

    Ingroup and outgroup differences in face detection

    Humans show improved recognition for faces from their own social group relative to faces from another social group. Yet before faces can be recognized, they must first be detected in the visual field. Here, we tested whether humans also show an ingroup bias at the earliest stage of face processing – the point at which the presence of a face is first detected. To this end, we measured viewers' ability to detect ingroup (Black and White) and outgroup faces (Asian, Black, and White) in everyday scenes. Ingroup faces were detected with greater speed and accuracy relative to outgroup faces (Experiment 1). Removing face hue impaired detection generally, but the ingroup detection advantage was undiminished (Experiment 2). This same pattern was replicated by a detection algorithm using face templates derived from human data (Experiment 3). These findings demonstrate that the established ingroup bias in face processing can extend to the early process of detection. This effect is ‘colour blind’, in the sense that group membership effects are independent of general effects of image hue. Moreover, it can be captured by tuning visual templates to reflect the statistics of observers' social experience. We conclude that group bias in face detection is both a visual and a social phenomenon.
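
    The detection algorithm in Experiment 3 is described only as template matching against face templates derived from human data; the implementation is not given in the abstract. As an illustration of the general technique (not the authors' code), a sliding-window normalised cross-correlation detector in NumPy, with all function names, sizes, and the threshold hypothetical:

```python
import numpy as np

def ncc(patch, template):
    """Normalised cross-correlation between one image patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def detect(scene, template, threshold=0.99):
    """Slide the template over the scene; return (row, col) positions
    whose patch correlates with the template above the threshold."""
    th, tw = template.shape
    hits = []
    for y in range(scene.shape[0] - th + 1):
        for x in range(scene.shape[1] - tw + 1):
            if ncc(scene[y:y + th, x:x + tw], template) >= threshold:
                hits.append((y, x))
    return hits

# Toy usage: plant the template in an otherwise empty scene
rng = np.random.default_rng(0)
template = rng.random((8, 8))
scene = np.zeros((20, 20))
scene[5:13, 5:13] = template
print(detect(scene, template))  # [(5, 5)]
```

A full detector would add multi-scale search and non-maximum suppression; the point here is only the template-correlation core that a "face template derived from human data" would plug into.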

    Understanding face detection with visual arrays and real-world scenes

    Face detection has been studied by presenting faces in blank displays, object arrays, and real-world scenes. This study investigated whether these display contexts differ in what they can reveal about detection, by comparing frontal-view faces with those shown in profile (Experiment 1), rotated by 90° (Experiment 2), or turned upside-down (Experiment 3). In blank displays, performance for all face conditions was equivalent, whereas upright frontal faces showed a consistent detection advantage in arrays and scenes. Experiment 4 examined which facial characteristics drive this detection advantage by rotating either the internal or external facial features by 90° while the other features remained upright. Faces with rotated internal features were detected as efficiently as their intact frontal counterparts, whereas detection was impaired when external features were rotated. Finally, Experiment 5 applied Voronoi transformations to scenes to confirm that complexity of stimulus displays modulates the detection advantage for upright faces. These experiments demonstrate that context influences what can be learned about the face detection process. In complex visual arrays and natural scenes, detection proceeds more effectively when external facial features are preserved in an upright orientation. These findings are consistent with a cognitive detection template that focuses on general face-shape information.

    Infants scan static and dynamic facial expressions differently

    Despite facial expressions being inherently dynamic phenomena, much of our understanding of how infants attend to and scan them is based on static face stimuli. Here we investigate how six-, nine-, and twelve-month infants allocate their visual attention toward dynamic interactive videos of the six basic emotional expressions, and compare their responses with static images of the same stimuli. We find infants show clear differences in how they attend to and scan dynamic and static expressions, looking longer toward the dynamic-face and lower-face regions. Infants across all age groups show differential interest in expressions, and show precise scanning of regions “diagnostic” for emotion recognition. These data also indicate that infants' attention toward dynamic expressions develops over the first year of life, including relative increases in interest and scanning precision toward some negative facial expressions (e.g., anger, fear, and disgust).

    A cognitive template for human face detection

    Faces are highly informative social stimuli, yet before any information can be accessed, the face must first be detected in the visual field. A detection template that serves this purpose must be able to accommodate the wide variety of face images we encounter, but how this generality could be achieved remains unknown. In this study, we investigate whether statistical averages of previously encountered faces can form the basis of a general face detection template. We provide converging evidence from a range of methods to examine the formation, stability, and robustness of statistical image averages as cognitive templates for human face detection: human similarity judgements and PCA-based image analysis of face averages (Experiments 1-3), human detection behaviour for faces embedded in complex scenes (Experiments 4 and 5), and simulations with a template-matching algorithm (Experiments 6 and 7). We integrate these findings with existing knowledge of face identification, ensemble coding, and the development of face perception.
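
    The core idea tested here, a template formed as a statistical average of previously encountered faces, can be sketched as pixelwise averaging of aligned images. This is an illustration of the general concept, not the study's pipeline, and all names are hypothetical:

```python
import numpy as np

def average_template(aligned_faces):
    """Pixelwise mean of a list of aligned, same-size greyscale face images.

    Averaging attenuates idiosyncratic identity detail while preserving
    the shared face-shape structure that a general detection template
    would need to accommodate many different face images.
    """
    stack = np.stack([np.asarray(f, dtype=float) for f in aligned_faces])
    return stack.mean(axis=0)

# Toy usage: two uniform "faces" differing only in overall brightness
faces = [np.full((4, 4), 1.0), np.full((4, 4), 3.0)]
print(average_template(faces))  # every pixel is 2.0
```

In a real pipeline the inputs would be landmark-aligned face photographs, and the resulting average would serve as the template in a matching algorithm such as the one used in Experiments 6 and 7.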