
    Putting culture under the spotlight reveals universal information use for face recognition

    Background: Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate the nose region more, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction, so the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored eye movements during face recognition of Western Caucasian (WC) and East Asian (EA) observers with a novel technique that parametrically restricts information outside central vision. We used ‘Spotlights’ with Gaussian apertures of 2°, 5° or 8° dynamically centered on observers’ fixations. Strikingly, in the constrained Spotlight conditions (2°, 5°) observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both the eyes and mouth was simultaneously available while fixating the nose (8°), EA observers shifted their fixations towards this region, as expected. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture.
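    The Spotlight manipulation described above is a gaze-contingent display technique: on each frame, a Gaussian aperture centred on the current fixation reveals the face and attenuates everything outside it. Below is a minimal sketch of that masking step, assuming a calibrated pixels-per-degree value and treating the nominal aperture size as roughly two standard deviations of the Gaussian; the function and parameter names are illustrative, not the authors' stimulus code.

```python
import numpy as np

def gaussian_spotlight(image, fixation_xy, aperture_deg, pixels_per_deg, background=0.5):
    """Apply a Gaussian 'Spotlight' aperture centred on the current fixation.

    image          -- 2-D grayscale face image with values in [0, 1]
    fixation_xy    -- (x, y) fixation position in pixels (from the eye tracker)
    aperture_deg   -- nominal aperture size in degrees of visual angle (2, 5 or 8)
    pixels_per_deg -- display calibration; an assumed, setup-specific value
    background     -- grey level shown outside the aperture
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    fx, fy = fixation_xy
    # Assumption: the nominal aperture size corresponds to ~2 sigma of the envelope.
    sigma = (aperture_deg * pixels_per_deg) / 2.0
    mask = np.exp(-((x - fx) ** 2 + (y - fy) ** 2) / (2.0 * sigma ** 2))
    return mask * image + (1.0 - mask) * background

# In a gaze-contingent experiment this would be re-applied on every screen refresh,
# with fixation_xy updated from the latest eye-tracker sample.
```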

    Automatic human face detection for content-based image annotation

    In this paper, an automatic human face detection approach using colour analysis is applied for content-based image annotation. In the face detection stage, the probable face region is detected by an adaptive boosting algorithm and then combined with a colour filtering classifier to enhance detection accuracy. An initial experimental benchmark shows that the proposed scheme can be efficiently applied for image annotation with higher fidelity.
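    As a rough illustration of the two-stage idea in this paper (a boosted detector proposing face candidates, then a colour-based classifier pruning them), the sketch below pairs OpenCV's stock Haar-cascade face detector, which is trained with adaptive boosting, with a simple YCrCb skin-tone filter. The skin bounds and the acceptance threshold are assumptions for illustration, not the authors' parameters.

```python
import cv2
import numpy as np

# Boosted (Haar cascade) face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_image, min_skin_fraction=0.3):
    """Return candidate face boxes that also pass a skin-colour check."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Approximate skin-tone bounds in YCrCb space (assumed values).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb,
                       np.array([0, 133, 77], np.uint8),
                       np.array([255, 173, 127], np.uint8))

    accepted = []
    for (x, y, w, h) in candidates:
        # Keep a candidate only if enough of its area is skin-coloured.
        if skin[y:y + h, x:x + w].mean() / 255.0 >= min_skin_fraction:
            accepted.append((x, y, w, h))
    return accepted
```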

    Unitization during Category Learning

    Five experiments explored whether new perceptual units can be developed if they are diagnostic for a category learning task and, if so, what constraints apply to this unitization process. During category learning, participants were required to attend to either a single component or a conjunction of five components in order to correctly categorize an object. In Experiments 1-4, some evidence for unitization was found in that the conjunctive task became much easier with practice, and this improvement was not found for the single-component task or for conjunctive tasks where the components cannot be unitized. Influences of component order (Experiment 1), component contiguity (Experiment 2), component proximity (Experiment 3), and number of components (Experiment 4) on practice effects were found. Using a Fourier transformation method for deconvolving response times (Experiment 5), prolonged practice effects yielded responses that were faster than predicted by analytic models that integrate evidence from independently perceived components.
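    The Fourier method in Experiment 5 rests on the idea that an observed response-time distribution can be treated as the convolution of component stage distributions, so an unknown residual stage can be estimated by dividing Fourier transforms. A minimal sketch of that step is below; it assumes binned RT histograms on a common time grid and adds a small regularisation term, neither of which comes from the paper.

```python
import numpy as np

def deconvolve_rt(observed_hist, known_stage_hist, eps=1e-6):
    """Fourier-domain deconvolution of binned response-time distributions.

    If observed = known_stage (*) residual, then dividing the Fourier transforms
    recovers an estimate of the residual stage. Both inputs are histograms on the
    same time grid; eps regularises near-zero frequency components.
    """
    F_obs = np.fft.rfft(observed_hist)
    F_known = np.fft.rfft(known_stage_hist)
    residual = np.fft.irfft(F_obs / (F_known + eps), n=len(observed_hist))
    residual = np.clip(residual, 0.0, None)   # suppress small negative ripples
    return residual / residual.sum()          # renormalise to a probability mass
```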

    Machine Analysis of Facial Expressions

    No abstract

    A comparative analysis of neural and statistical classifiers for dimensionality reduction-based face recognition systems.

    Human face recognition has received a wide range of attention since the 1990s. Recent approaches focus on a combination of dimensionality reduction-based feature extraction algorithms and various types of classifiers. This thesis provides an in-depth comparative analysis of neural and statistical classifiers by combining them with existing dimensionality reduction-based algorithms. A set of unified face recognition systems was established for evaluating alternative combinations in terms of recognition performance, processing time, and the conditions needed to achieve certain performance levels. A preprocessing system and four dimensionality reduction-based methods based on Principal Component Analysis (PCA), Two-dimensional PCA, Fisher's Linear Discriminant and Laplacianfaces were utilized and implemented. Classification was achieved by using various types of classifiers, including Euclidean distance, an MLP neural network, a K-nearest-neighbor classifier and a Fuzzy K-nearest-neighbor classifier. The statistical model is relatively simple and requires less computational complexity and storage. Experimental results are reported after the algorithms were tested on two databases of known individuals, the Yale and AR databases. After comparing these algorithms in every aspect, the simulations showed that, considering recognition rates, generalization ability, classification performance, noise immunity and processing time, the best results were obtained with the Laplacianfaces method using the Fuzzy K-NN classifier.
    Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2006 .X86. Source: Masters Abstracts International, Volume: 45-01, page: 0428. Thesis (M.A.Sc.)--University of Windsor (Canada), 2006
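    One representative pairing evaluated in work like this is PCA-based feature extraction (eigenfaces) followed by a nearest-neighbour classifier. The sketch below shows that combination with scikit-learn; the number of components and k are illustrative choices rather than values taken from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_pca_knn(X_train, y_train, n_components=50, k=3):
    """Fit an eigenface-style PCA projection and a k-NN classifier on top of it.

    X_train -- (n_samples, n_pixels) matrix of vectorised face images
    y_train -- identity labels
    """
    pca = PCA(n_components=n_components).fit(X_train)
    knn = KNeighborsClassifier(n_neighbors=k).fit(pca.transform(X_train), y_train)
    return pca, knn

def predict_identities(pca, knn, X_test):
    """Project test faces into the PCA subspace and classify by nearest neighbours."""
    return knn.predict(pca.transform(X_test))
```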

    Brief Training to Modify the Breadth of Attention Influences the Generalisation of Fear

    Background: Generalisation of fear from dangerous to safe stimuli is an important process associated with anxiety disorders. However, factors that contribute towards fear (over)generalisation remain poorly understood. The present investigation explored how attentional breadth (global/holistic vs. local/analytic) influences fear generalisation and whether people trained to attend in a global vs. local manner show more or less generalisation. Methods: Participants (N = 39) were shown stimuli comprising large ‘global’ letters made up of smaller ‘local’ letters (e.g. an F composed of As) and had to identify either the global or the local letter. Participants were then conditioned to fear a face by pairing it with an aversive scream (75% reinforcement schedule). Perceptually similar, but safe, faces were then shown. Self-reported fear levels and skin conductance responses were measured. Results: Compared to participants in the Global group, participants in the Local group demonstrated greater fear of the dangerous stimulus (CS+) as well as of perceptually similar safe stimuli. Conclusions: Participants trained to attend to stimuli in a local/analytic manner showed a higher magnitude of fear acquisition and generalisation than participants trained to attend in a global/holistic way. Breadth of attentional focus can influence overall fear levels and fear generalisation, and this can be manipulated via attentional training.

    Holistic processing of static and moving faces

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect one core aspect of face ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights into what underlies holistic face processing, how the sources of information supporting holistic face processing interact with each other, and why facial motion may affect face recognition and holistic face processing differently.

    EMERGING HOLISTIC PROPERTIES AT FACE VALUE: ASSESSING CHARACTERISTICS OF FACE PERCEPTION

    Thesis (PhD) - Indiana University, Psychology, 2005
    Holistic face recognition refers to the ability of human cognitive systems to deal in an integrative manner with separate face features. A holistic mental representation of a face is not a simple sum of face parts: it possesses unitary properties and corresponds to the whole face appearance better than to any of its constituent parts. A single face feature is better recognized in the learned face context (e.g. Bill's nose in Bill's face) than in isolation or in a new face context (e.g. Bill's nose in Joe's face; Tanaka & Sengco, 1997). The major goal of this study is to provide a rigorous test of the structure and organization of cognitive processes in the holistic perception of faces. Participants performed two types of face categorization tasks that utilized either a self-terminating or an exhaustive rule for search (OR and AND conditions). Category membership was determined by the manipulation of two configural properties: eye-separation and lips-position. In the first part of each study, participants learned two groups of faces, and we monitored the changes in the face recognition system's architecture and capacity. In the second part, the participants' task was to recognize the learned configurations of face features presented in different face contexts: in the old learned faces, in a new face background, and in isolation. Using systems factorial theory tests, combined with statistical analyses and model simulations, we were able to reveal the exact organization of the mental processes underlying face perception. The findings supported the view that holism is an emergent property which develops with learning. Overall, processing exhibited a parallel architecture with positive interdependency between features in both the OR and AND conditions. We also found that face units are better recognized in the learned face condition than in both the new face context and isolation conditions. We showed that faces are recognized not as a set of independent face features, but as whole units. We revealed that a cognitive mechanism of positive dependence between face features is responsible for forming holistic faces, and provided a simulation that produced behaviors similar to the experimental observations.
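    The systems factorial theory tests mentioned above rely on statistics such as the capacity coefficient, which compares the cumulative hazard of responding when both features are available against the sum of the single-feature hazards. A minimal sketch for the OR (self-terminating) condition is below; the simple survivor-function estimator and time grid are assumptions, not the thesis's analysis code.

```python
import numpy as np

def cumulative_hazard(rts, t_grid):
    """Estimate the cumulative hazard H(t) = -log S(t) from a sample of RTs."""
    rts = np.asarray(rts, dtype=float)
    survivor = np.array([(rts > t).mean() for t in t_grid])
    survivor = np.clip(survivor, 1e-6, 1.0 - 1e-6)   # keep the log finite
    return -np.log(survivor)

def capacity_or(rt_both, rt_a_only, rt_b_only, t_grid):
    """OR capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t)).

    Values above 1 indicate super-capacity, consistent with positive
    interdependency between features; values below 1 indicate limited capacity.
    """
    return cumulative_hazard(rt_both, t_grid) / (
        cumulative_hazard(rt_a_only, t_grid) + cumulative_hazard(rt_b_only, t_grid))
```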