
    Facial feature representation and recognition

    Get PDF
    Facial expression provides an important behavioral measure for studies of emotion, cognitive processes, and social interaction. Facial expression representation and recognition have become a promising research area in recent years, with applications in human-computer interfaces, human emotion analysis, and medical care. In this dissertation, the fundamental techniques are first reviewed, and novel algorithms and theorems are then presented. The objective of the proposed algorithm is to provide a reliable, fast, and integrated procedure for recognizing either the seven prototypical, emotion-specified expressions (happy, neutral, angry, disgust, fear, sad, and surprise in the JAFFE database) or the action units in the Cohn-Kanade AU-coded facial expression image database. A new application area developed by the Infant COPE project is the recognition of neonatal facial expressions of pain (air puff, cry, friction, pain, and rest in the Infant COPE database). It has been reported in the medical literature that health care professionals have difficulty distinguishing a newborn's facial expressions of pain from facial reactions to other stimuli. Since pain is a major indicator of medical problems and the quality of patient care depends on the quality of pain management, it is vital that the methods developed accurately distinguish an infant's signal of pain from a host of minor distress signals. The evaluation protocol used in the Infant COPE project considers two conditions: person-dependent and person-independent. In the person-dependent condition, some data of a subject are used for training and the remaining data of the same subject for testing. In the person-independent condition, the data of all subjects except one are used for training and the left-out subject is used for testing. In this dissertation, experiments are conducted under both evaluation protocols.
The Infant COPE research on neonatal pain classification is a first attempt at applying state-of-the-art face recognition technologies to actual medical problems. The objective of the Infant COPE project is to bypass these observational problems by developing a machine classification system to diagnose neonatal facial expressions of pain. Since machine assessment of pain is based on pixel states, a machine classification system of pain will remain objective and will exploit the full spectrum of information available in a neonate's facial expressions. Furthermore, it will be capable of monitoring a neonate's facial expressions when he or she is left unattended. Experimental results using the Infant COPE database and evaluation protocols indicate that the application of face classification techniques to pain assessment and management is a promising area of investigation. One of the challenging problems in building an automatic facial expression recognition system is how to automatically locate the principal facial parts, since most existing algorithms capture the necessary face parts by cropping images manually. In this dissertation, two systems are developed to detect facial features, especially the eyes. The purpose is to develop a fast and reliable system to detect facial features automatically and correctly. By incorporating the proposed facial feature detection, the facial expression and neonatal pain recognition systems can be made robust and efficient.
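The person-independent protocol described above is a leave-one-subject-out split. A minimal sketch of both splits follows; the subject IDs, feature vectors, and labels are toy values for illustration, not Infant COPE data:

```python
# Leave-one-subject-out splits for the person-independent protocol.
# `samples` is a list of (subject_id, feature_vector, label) tuples.

def person_independent_splits(samples):
    """Train on all subjects except one; test on the held-out subject."""
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test

# Toy data: 3 subjects, 2 images each, labels 'pain' / 'rest'.
data = [
    ("s1", [0.1, 0.2], "pain"), ("s1", [0.3, 0.1], "rest"),
    ("s2", [0.2, 0.4], "pain"), ("s2", [0.5, 0.2], "rest"),
    ("s3", [0.4, 0.3], "pain"), ("s3", [0.1, 0.5], "rest"),
]

for held_out, train, test in person_independent_splits(data):
    # Each fold: 4 training samples, 2 test samples from the held-out subject.
    print(held_out, len(train), len(test))
```

The person-dependent condition instead splits each subject's own samples between training and testing, so the classifier has seen the test subject before.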

    Local feature extraction based facial emotion recognition: a survey

    Get PDF
    Notwithstanding recent technological advancement, the identification of facial and emotional expressions remains one of the greatest challenges scientists have faced. Generally, the human face is treated as a composition of textures arranged in micro-patterns. There has been a tremendous increase in the use of local binary pattern (LBP) based texture algorithms, which have proven essential for completing a variety of tasks and for extracting essential attributes from an image. Over the years, many LBP variants have been reviewed in the literature; what is missing, however, is a thorough and comprehensive analysis of their individual performance. This research work aims at filling this gap by performing a large-scale performance evaluation of 46 recent state-of-the-art LBP variants for facial expression recognition. Extensive experimental results on the well-known, challenging benchmark KDEF, JAFFE, CK and MUG databases, taken under different facial expression conditions, indicate that a number of the evaluated state-of-the-art LBP-like methods achieve promising results that are better than or competitive with several recent state-of-the-art facial recognition systems. Recognition rates of 100%, 98.57%, 95.92% and 100% have been reached on the CK, JAFFE, KDEF and MUG databases, respectively.
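As a reference point, the basic 3x3 LBP operator that the surveyed variants all extend can be sketched as follows; the pixel patch and neighbour ordering are illustrative, not tied to any specific variant in the survey:

```python
# Basic 3x3 local binary pattern (LBP): each pixel is coded by thresholding
# its eight neighbours against the centre value; facial expression pipelines
# then histogram these codes over face regions to build a texture descriptor.

def lbp_code(img, r, c):
    """8-bit LBP code for pixel (r, c); neighbours are read clockwise from
    the top-left, and a neighbour >= centre contributes a 1 bit."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

# Toy grayscale patch; the centre pixel (value 6) is coded against its ring.
patch = [
    [6, 5, 2],
    [7, 6, 1],
    [9, 8, 7],
]
print(lbp_code(patch, 1, 1))  # → 241 (bits 0, 4, 5, 6, 7 set)
```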

    Pose-disentangled Contrastive Learning for Self-supervised Facial Representation

    Full text link
    Self-supervised facial representation has recently attracted increasing attention due to its ability to perform face understanding without relying heavily on large-scale annotated datasets. However, current contrastive-based self-supervised learning still performs unsatisfactorily for learning facial representation. More specifically, existing contrastive learning (CL) tends to learn pose-invariant features that cannot depict the pose details of faces, compromising learning performance. To overcome this limitation of CL, we propose a novel Pose-disentangled Contrastive Learning (PCL) method for general self-supervised facial representation. Our PCL first devises a pose-disentangled decoder (PDD) with a delicately designed orthogonalizing regulation, which disentangles pose-related features from face-aware features; as a result, pose-related and pose-unrelated facial information can be learned in individual subnetworks without affecting each other's training. Furthermore, we introduce a pose-related contrastive learning scheme that learns pose-related information based on data augmentation of the same image, delivering more effective face-aware representation for various downstream tasks. We conducted a comprehensive linear evaluation on three challenging downstream facial understanding tasks, i.e., facial expression recognition, face recognition, and AU detection. Experimental results demonstrate that our method outperforms cutting-edge contrastive and other self-supervised learning methods by a large margin.
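Contrastive methods of this kind are typically built on an InfoNCE-style objective that pulls two augmented views of the same image together while pushing other images apart. A minimal sketch with toy embeddings follows; the pose-disentangled decoder and orthogonalizing regulation of PCL itself are not reproduced here:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss: low when the anchor is close to its positive
    (an augmented view of the same face) and far from the negatives
    (other faces in the batch); tau is the temperature."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

anchor = [1.0, 0.0]
positive = [0.9, 0.1]                 # augmentation of the same face
negatives = [[0.0, 1.0], [-1.0, 0.2]]  # other faces in the batch
print(info_nce(anchor, positive, negatives))
```

A well-aligned positive yields a near-zero loss, while a positive pointing away from the anchor drives the loss up, which is what makes the gradient pull the two views together.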

    Recognizing faces prone to occlusions and common variations using optimal face subgraphs

    Get PDF
    An intuitive graph-optimization face recognition approach called Harmony Search Oriented-EBGM (HSO-EBGM), inspired by the classical Elastic Bunch Graph Matching (EBGM) graphical model, is proposed in this contribution. In the proposed HSO-EBGM, a recent evolutionary approach called harmony search optimization is tailored to automatically determine optimal facial landmarks. A novel notion of face subgraphs is formulated with the aid of these automated landmarks that maximizes the similarity entailed by the subgraphs. For experimental evaluation, two de facto standard databases (AR and Face Recognition Grand Challenge (FRGC) ver2.0) are used to validate and analyze the behavior of the proposed HSO-EBGM in terms of the number of subgraphs, varying occlusion sizes, face images under controlled/ideal conditions, realistic partial occlusions, expression variations, and varying illumination conditions. Across a number of experiments, the results show that HSO-EBGM achieves improved recognition performance compared to recent state-of-the-art face recognition approaches.
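The subgraph similarity that EBGM-style matching maximizes is conventionally built from Gabor "jets": each landmark carries a vector of Gabor filter magnitude responses, and two graphs are scored by the mean normalised dot product of corresponding jets. A minimal sketch of that classical measure follows; the jet values are illustrative, and the harmony-search landmark optimization itself is not reproduced:

```python
import math

def jet_similarity(j1, j2):
    """Normalised dot product of two Gabor-magnitude jets
    (1.0 means the jets point in the same direction)."""
    dot = sum(a * b for a, b in zip(j1, j2))
    n1 = math.sqrt(sum(a * a for a in j1))
    n2 = math.sqrt(sum(b * b for b in j2))
    return dot / (n1 * n2)

def subgraph_similarity(g1, g2):
    """Mean jet similarity over corresponding landmarks of two face subgraphs."""
    return sum(jet_similarity(a, b) for a, b in zip(g1, g2)) / len(g1)

# Two toy subgraphs, three landmarks each, three Gabor responses per jet.
probe = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.5], [0.6, 0.6, 0.1]]
gallery = [[0.8, 0.2, 0.5], [0.1, 0.9, 0.4], [0.7, 0.5, 0.2]]
print(subgraph_similarity(probe, gallery))
```

Under occlusion, scoring several small subgraphs instead of one full face graph lets the unoccluded regions dominate the match, which is the intuition behind the optimal-subgraph formulation above.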

    The emotional valence of subliminal priming effects perception of facial expressions

    Full text link
    We investigated, in young healthy subjects, how the affective content of subliminally presented priming images and their specific visual attributes impacted conscious perception of facial expressions. The priming images were broadly categorised as aggressive, pleasant, or neutral and further subcategorised by the presence of a face and by the centricity (egocentric or allocentric vantage-point) of the image content. Subjects responded to the emotion portrayed in a pixelated target face by indicating via key-press whether the expression was angry or neutral. Priming images containing a face, compared to those not containing a face, significantly impaired performance on neutral or angry target-face evaluation. Recognition of angry target-face expressions was selectively impaired by pleasant prime images which contained a face. For egocentric primes, recognition of neutral target-face expressions was significantly better than of angry expressions. Our results suggest that, first, the affective primacy hypothesis, which predicts that affective information can be accessed automatically, preceding conscious cognition, holds true in subliminal priming only when the priming image contains a face. Second, egocentric primes interfere with the perception of angry target-face expressions, suggesting that this vantage-point, directly relevant to the viewer, perhaps engages processes involved in action preparation which may weaken the priority of affect processing.