
    The other-race effect in children from a multiracial population: A cross-cultural comparison

    The role of experience with other-race faces in the development of the other-race effect (ORE) was investigated through a cross-cultural comparison between 5- to 6-year-old (n = 83) and 13- to 14-year-old (n = 66) children raised in a monoracial (British-White) and a multiracial (Malaysian-Chinese) population. British-White children showed an ORE to three other-race face types (Chinese, Malay, and African-Black) that was stable across age. Malaysian-Chinese children showed a recognition deficit for less experienced faces (African-Black) but a recognition advantage for faces with which they had direct or indirect experience. Interestingly, the younger Malaysian-Chinese children showed no ORE for female faces, recognizing all female faces regardless of race. These findings point to the importance of early experience with race and gender in re-organizing the face representation to accommodate changes in experience across development.

    View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and relatively robust against identity-preserving transformations like depth-rotations. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple- and complex-cell operations. While simulations of these models recapitulate the ventral stream's progression from early view-specific to late view-tolerant representations, they fail to generate the most salient property of the intermediate representation for faces found in the brain: mirror-symmetric tuning of the neural population to head orientation. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules can provide approximate invariance at the top level of the network. While most of the learning rules do not yield mirror-symmetry in the mid-level representations, we characterize a specific biologically plausible Hebb-type learning rule that is guaranteed to generate mirror-symmetric tuning to head orientation at intermediate levels of the architecture.
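The symmetry argument behind this result can be illustrated with a toy sketch (not the paper's model, and all dimensions are illustrative): Oja-type Hebbian learning converges to the principal components of its input, and when the training set contains each view together with its left-right reflection, the input covariance commutes with the reflection operator. Every principal component is then either symmetric or antisymmetric under reflection, so a squared (energy-like) response is identical for a view and its mirror image.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                        # toy 1-D "retina" width
P = np.eye(d)[::-1]          # left-right reflection operator

# Training set built so that every view appears with its mirror image,
# mimicking visual experience of faces at +theta and -theta head orientations.
X = rng.normal(size=(200, d))
X = np.vstack([X, X @ P.T])

C = X.T @ X / len(X)         # input covariance; commutes with P by construction
eigvals, eigvecs = np.linalg.eigh(C)

# Oja's Hebbian rule converges to these eigenvectors (principal components).
# Because C commutes with P, each eigenvector v satisfies P v = +v or P v = -v,
# so the squared response to a view equals the response to its mirror image.
for v in eigvecs.T:
    assert np.isclose(abs(v @ P @ v), 1.0)   # v is symmetric or antisymmetric
    x = rng.normal(size=d)
    assert np.isclose((v @ x) ** 2, (v @ (P @ x)) ** 2)
```

The same reasoning lifts to 2-D images with a horizontal-flip operator; the toy case only shows why a Hebbian/PCA-type rule, unlike an arbitrary learning rule, forces mirror-symmetric tuning at the stage where these components are computed.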

    A unified coding strategy for processing faces and voices

    Both faces and voices are rich in socially relevant information, which humans are remarkably adept at extracting, including a person's identity, age, gender, affective state, and personality. Here, we review accumulating evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies suggesting that the cognitive and neural processing mechanisms engaged by perceiving faces or voices are highly similar, despite the very different nature of their sensory input. The similarity between the two mechanisms likely facilitates the multi-modal integration of facial and vocal information during everyday social interactions. These findings emphasize a parsimonious principle of cerebral organization, whereby similar computational problems in different modalities are solved with similar solutions.

    Putting culture under the spotlight reveals universal information use for face recognition

    Background: The eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate more on the nose region, yet recognition accuracy is comparable. However, natural fixations do not unequivocally reflect information extraction, so the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored the eye movements of Western Caucasian (WC) and East Asian (EA) observers during face recognition with a novel technique that parametrically restricts information outside central vision. We used ‘Spotlights’ with Gaussian apertures of 2°, 5°, or 8° dynamically centered on observers’ fixations. Strikingly, in the constrained Spotlight conditions (2°, 5°) observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both the eyes and the mouth was simultaneously available while fixating the nose (8°), EA observers, as expected, shifted their fixations towards this region. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that such external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature rather than nurture.

    Culture shapes how we look at faces

    Background: Face processing, amongst many other basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically determined information-extraction pattern. Methodology/Principal Findings: Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced the scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face. Conclusions/Significance: These results demonstrate that face processing can no longer be considered to arise from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures.

    Reduced face identity aftereffects in relatives of children with autism.

    Autism is a pervasive developmental condition with a complex aetiology. To aid the discovery of genetic mechanisms, researchers have turned towards identifying potential endophenotypes: subtle neurobiological or neurocognitive traits present both in individuals with autism and in their "unaffected" relatives. Previous research has shown that relatives of individuals with autism exhibit face-processing atypicalities that are similar in nature, albeit lesser in degree, to those found in children and adults with autism. Yet very few studies have examined the underlying mechanisms responsible for such atypicalities. Here, we investigated whether atypicalities in adaptive norm-based coding of faces, like those previously reported in children with autism, are also present in their relatives. To test this possibility, we administered a face identity aftereffect task in which adaptation to a particular face biases perception towards the opposite identity, so that a previously neutral face (i.e., the average face) takes on the computationally opposite identity. Parents and siblings of individuals with autism showed smaller aftereffects than parents and siblings of typically developing children, especially when the adapting stimuli were located further from the average face. In addition, both groups showed stronger aftereffects for adaptors far from the average than for adaptors closer to it. These results suggest that, in relatives of children with autism, face-coding mechanisms are similar (i.e., norm-based) but less efficient than in relatives of typical children. This finding points towards the possibility that diminished adaptive mechanisms represent a neurocognitive endophenotype for autism.
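The norm-based logic of this task can be sketched in miniature (all values are illustrative, not from the study): identities are coded as vectors from the average face, adaptation shifts the norm toward the adaptor, and the aftereffect, i.e. the apparent identity of the previously neutral average face, grows with the adaptor's distance from the norm and shrinks with a weaker adaptive gain.

```python
import numpy as np

def aftereffect(adaptor, gain=0.4):
    # Norm-based coding: adaptation shifts the norm (origin of face space)
    # toward the adaptor, so the average face is perceived as carrying the
    # opposite identity, with a magnitude set by the adaptor's distance
    # from the norm scaled by the observer's adaptive gain.
    shifted_norm = gain * np.asarray(adaptor)
    percept_of_average = -shifted_norm   # average face relative to the new norm
    return float(np.linalg.norm(percept_of_average))

near, far = [0.5, 0.0], [1.0, 0.0]       # adaptors near/far from the average face
assert aftereffect(far) > aftereffect(near)          # stronger aftereffect far out

# A "less efficient" adaptive mechanism (smaller gain) predicts the relatives'
# smaller aftereffects at the same adaptor distance:
assert aftereffect(far, gain=0.2) < aftereffect(far, gain=0.4)
```

The two assertions correspond to the study's two behavioral signatures: larger aftereffects for far adaptors in both groups, and uniformly smaller aftereffects in relatives of children with autism.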

    Fast Landmark Localization with 3D Component Reconstruction and CNN for Cross-Pose Recognition

    Two approaches are proposed for cross-pose face recognition, one based on the 3D reconstruction of facial components and the other based on a deep Convolutional Neural Network (CNN). Unlike most 3D approaches, which consider holistic faces, the proposed approach considers 3D facial components. It segments a 2D gallery face into components, reconstructs the 3D surface for each component, and recognizes a probe face by component features. The segmentation is based on landmarks located by a hierarchical algorithm that combines the Faster R-CNN for face detection with the Reduced Tree Structured Model for landmark localization. The core of the CNN-based approach is a revised VGG network. We study performance under different training-set settings, including synthesized data from 3D reconstruction, real-life data from an in-the-wild database, and both types of data combined. We also investigate the performance of the network when it is employed as a classifier or designed as a feature extractor. The two recognition approaches and the fast landmark localization are evaluated in extensive experiments and compared to state-of-the-art methods to demonstrate their efficacy.
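The component-based matching stage can be sketched as follows (purely illustrative: the stub below replaces the paper's Faster R-CNN detection, tree-structured landmarking, per-component 3D reconstruction, and VGG features with random vectors, so every name here is hypothetical; only the fuse-and-rank logic over facial components is shown).

```python
import numpy as np

COMPONENTS = ["left_eye", "right_eye", "nose", "mouth"]   # assumed component split

def extract_component_features(rng, base=None, noise=0.0):
    # Stub for "reconstruct the 3D surface for each component and describe it
    # by a feature vector": random unit vectors here, or a noisy copy of an
    # existing identity to simulate a cross-pose probe of that identity.
    feats = {}
    for c in COMPONENTS:
        v = rng.normal(size=64) if base is None else base[c] + noise * rng.normal(size=64)
        feats[c] = v / np.linalg.norm(v)
    return feats

def component_score(gallery_feats, probe_feats):
    # Fuse per-component cosine similarities by averaging over components.
    return float(np.mean([gallery_feats[c] @ probe_feats[c] for c in COMPONENTS]))

def identify(probe_feats, gallery):
    # Return the gallery identity with the highest fused component score.
    return max(gallery, key=lambda gid: component_score(gallery[gid], probe_feats))

rng = np.random.default_rng(1)
gallery = {gid: extract_component_features(rng) for gid in ["A", "B", "C"]}
# A cross-pose probe of identity "B": the same component features, perturbed.
probe = extract_component_features(rng, base=gallery["B"], noise=0.2)
print(identify(probe, gallery))
```

Scoring per component rather than per whole face is what buys pose tolerance in this scheme: a pose change that occludes or distorts one component leaves the other components' similarities largely intact.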