
    MIXTURE FEATURE EXTRACTION BASED ON LOCAL BINARY PATTERN AND GREY-LEVEL CO-OCCURRENCE MATRIX TECHNIQUES FOR MOUTH EXPRESSION RECOGNITION

Recognizing facial emotions through pattern recognition remains a challenge for researchers. Such recognition generally draws on all facial features, but this study is limited to identifying emotion from a single facial region: the lips, a feature that can reveal a person's expression. Features are extracted from facial images using a combination of local binary pattern (LBP) and grey-level co-occurrence matrix (GLCM) methods, followed by a multiclass support vector machine classifier. The pipeline begins with image segmentation to isolate the mouth region. Experiments across several test configurations, with test splits of 10% to 40% of the data, showed recognition performance of up to 95%. These findings can be applied to expression recognition in online learning media to monitor the audience's condition directly.
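As a rough illustration of the pipeline above, the sketch below fuses a uniform-LBP histogram with GLCM statistics from a cropped mouth image and feeds the result to a multiclass SVM; the radius, angles, and kernel settings are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.svm import SVC

def lbp_glcm_features(mouth_gray):
    """Fuse an LBP histogram with GLCM statistics for one uint8 mouth crop."""
    # Uniform LBP with 8 neighbours at radius 1 -> 10 pattern bins
    lbp = local_binary_pattern(mouth_gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
    # GLCM statistics at distance 1 over four orientations (assumed settings)
    glcm = graycomatrix(mouth_gray, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p).ravel()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([hist] + props)

# Multiclass SVM on the fused features (SVC handles multiclass natively):
# X = np.stack([lbp_glcm_features(m) for m in mouth_crops])
# clf = SVC(kernel="rbf").fit(X, labels)
```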

    Neural Network-Based Expression Recognition System for Static Facial Images

Affective computing studies how to interpret, recognize, process, and simulate human affect, spanning computer science, psychology, and cognitive science. Humans express emotions in a variety of ways, such as body gestures, words, and vocal cues, but mainly through facial expressions. Non-verbal behavior is a significant component of communication, and facial expressions of emotion are its most important and complex signal. Facial expression recognition (FER) is an interesting and challenging task in artificial intelligence. The FER system in this study comprises three steps: preprocessing, feature extraction, and expression classification. The paper presents a comparative analysis of expression recognition based on a neural network (NN) with three feature extraction methods: Sobel edge, histogram of oriented gradients, and local binary pattern. The NN-based expression recognition system achieves accuracies of 95.82% and 97.68% on the JAFFE and CK+ datasets, respectively. The results show that edge features are the most effective features for recognizing human expressions in still images.
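The following sketch shows the three compared feature extractors using scikit-image equivalents, each feeding a small neural network; the concrete parameters and the MLP architecture are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from skimage.filters import sobel
from skimage.feature import hog, local_binary_pattern
from sklearn.neural_network import MLPClassifier

def extract(img_gray, method):
    """One feature vector per image for the three compared extractors."""
    if method == "sobel":        # edge-magnitude map, flattened
        return sobel(img_gray).ravel()
    if method == "hog":          # Histogram of Oriented Gradients
        return hog(img_gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))
    if method == "lbp":          # uniform-LBP histogram
        lbp = local_binary_pattern(img_gray, P=8, R=1, method="uniform")
        return np.histogram(lbp, bins=np.arange(11), density=True)[0]
    raise ValueError(method)

# One network per feature type, mirroring the comparative setup (sizes assumed):
# clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
# clf.fit(np.stack([extract(i, "sobel") for i in train_imgs]), train_labels)
```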

    Feature extraction comparison for facial expression recognition using adaptive extreme learning machine

Facial expression recognition is an important part of the field of affective computing. Automatic analysis of human facial expressions is a challenging problem with many applications. Most existing automated systems for facial expression analysis attempt to recognize a few prototypical emotional expressions such as anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. This paper compares feature extraction methods used to detect human facial expressions. The study compares the grey-level co-occurrence matrix, local binary pattern, and facial landmark (FL) methods on two facial expression datasets, namely the Japanese female facial expression (JFFE) dataset and extended Cohn-Kanade (CK+). In addition, we propose an enhancement of the extreme learning machine (ELM), adaptive ELM (aELM), which adaptively selects the best number of hidden neurons to reach its maximum performance. The proposed method slightly improves the performance of the basic ELM with the feature extraction methods mentioned above, obtaining maximum mean accuracy scores of 88.07% on the CK+ dataset and 83.12% on the JFFE dataset with FL feature extraction.
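A minimal ELM with a simple validation-based search over the hidden-layer size can illustrate the aELM idea; the search grid, sigmoid activation, and pseudo-inverse solution below are standard ELM ingredients, not the authors' exact scheme.

```python
import numpy as np

def elm_fit(X, T, n_hidden, rng):
    """Standard ELM: random hidden layer, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid activations
    beta = np.linalg.pinv(H) @ T                     # T: one-hot targets
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def adaptive_elm(X_tr, T_tr, X_val, y_val, grid=range(10, 201, 10), seed=0):
    """Pick the hidden-neuron count with the best validation accuracy."""
    rng = np.random.default_rng(seed)
    best = (-1.0, None, None)                        # (accuracy, n, model)
    for n in grid:                                   # assumed search grid
        model = elm_fit(X_tr, T_tr, n, rng)
        acc = np.mean(elm_predict(X_val, *model).argmax(axis=1) == y_val)
        if acc > best[0]:
            best = (acc, n, model)
    return best
```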

    FER Based on Fusion Features of CS-LSMP

Local feature descriptors play a fundamental role in facial expression recognition. This paper presents a new descriptor, the Center-Symmetric Local Signal Magnitude Pattern (CS-LSMP), for extracting texture features from facial images. Compared to conventional LBP-based operators, the CS-LSMP operator takes both signal and magnitude information of local regions into account. Additionally, because any single feature extraction method is limited, and to take full advantage of different features, the paper applies the CS-LSMP operator to Orientational Magnitude Feature Maps (OMFMs), Positive-and-Negative Magnitude Feature Maps (PNMFMs), Gabor Feature Maps (GFMs), and facial patches (eyebrows-eyes, mouths) to obtain fused features. Unlike HOG, which retains only horizontal and vertical magnitudes, this work generates the OMFMs over multiple orientations. Two distinct feature maps are built by dividing local magnitudes into two groups: positive and negative magnitude feature maps. The generated GFMs are also grouped to reduce computational complexity. Experiments on the JAFFE and CK+ facial expression datasets show that the proposed framework achieves significant improvement and outperforms some state-of-the-art methods.
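CS-LSMP builds on center-symmetric pixel comparisons; as a loose illustration of that underlying idea only, the sketch below implements the simpler CS-LBP operator (sign of opposing-neighbour differences), whereas the full CS-LSMP additionally encodes local magnitude information.

```python
import numpy as np

def cs_lbp(img, t=3):
    """4-bit center-symmetric LBP over a 3x3 neighbourhood (uint8 input)."""
    h, w = img.shape
    # Four opposing neighbour pairs around every interior pixel
    pairs = [(img[:-2, :-2],  img[2:, 2:]),     # NW vs SE
             (img[:-2, 1:-1], img[2:, 1:-1]),   # N  vs S
             (img[:-2, 2:],   img[2:, :-2]),    # NE vs SW
             (img[1:-1, 2:],  img[1:-1, :-2])]  # E  vs W
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        diff = a.astype(np.int16) - b.astype(np.int16)
        code |= (diff > t).astype(np.uint8) << bit  # t is an assumed threshold
    return code  # values in [0, 15]; histogram per region to form features
```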

    Facial Expression Recognition System

This thesis addresses the problem of facial expression recognition in the field of computer vision. First, the psychological background of the problem is presented. Then, the idea of a facial expression recognition system (FERS) is outlined and the requirements of such a system are specified. The FER system consists of three stages: face detection, feature extraction, and expression recognition. Methods proposed in the literature are reviewed for each stage. Finally, the design and implementation of my system are explained. The face detection algorithm used in the system is based on the work of Viola and Jones [13]. Expressions are described by appearance features obtained from texture encoded with Local Binary Patterns [32]. A support vector machine with an RBF kernel is used for classification. Two databases were used: the Facial Expressions and Emotion Database [34], which contains spontaneous emotions, and the Cohn-Kanade Database [35], with posed emotions. The system was trained on the two databases separately and achieves accuracies of 71% for spontaneous emotion recognition and 77% for posed emotion recognition.
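A hedged end-to-end sketch of this three-stage pipeline is given below: OpenCV's Haar-cascade (Viola-Jones) face detector, per-cell uniform-LBP histograms, and an RBF-kernel SVM; the crop size, cell grid, and SVM hyperparameters are illustrative assumptions, not the thesis settings.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Haar cascade shipped with OpenCV (Viola-Jones detector)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_lbp_features(gray, grid=(7, 7)):
    """Detect the face, then concatenate per-cell uniform-LBP histograms."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                      # first detection (assumption)
    face = cv2.resize(gray[y:y+h, x:x+w], (112, 112))
    lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
    cells = [np.histogram(cell, bins=np.arange(11), density=True)[0]
             for row in np.array_split(lbp, grid[0], axis=0)
             for cell in np.array_split(row, grid[1], axis=1)]
    return np.concatenate(cells)

# clf = SVC(kernel="rbf").fit(X_train, y_train)  # RBF-kernel SVM classifier
```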

Feature extraction and classification stage on facial expression: A review

Human facial expression analysis has become an important technology in recent years. As information technology and networks have grown, identification and authentication have become more frequent in people's daily lives, especially through biometric technology. Human facial recognition involves face detection, feature extraction, and classification. Many experiments have shown that there are various techniques for extracting facial features and classifying facial expressions. This paper reviews and analyzes optimization techniques for the feature extraction and classification stages of human facial expression recognition, comparing two feature extraction methods and one classification method. The first feature extraction technique is an optimization based on the K-Means algorithm, which helps to increase recognition accuracy. The second is an optimization using an improved Gradient Local Ternary Pattern (GLTP), which benefits computational efficiency. Lastly, an optimization for image classification using a three-staged support vector machine (SVM) is very helpful for increasing accuracy and eliminating errors. The modified GLTP is able to obtain an accuracy of 97%.
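As a sketch of the GLTP idea reviewed here, the code below computes a Sobel gradient-magnitude map and applies a local ternary pattern to it, yielding the usual positive and negative binary codes; the threshold value and the use of scikit-image's Sobel filter are assumptions, not the reviewed paper's exact formulation.

```python
import numpy as np
from skimage.filters import sobel

def gltp(img_gray, t=10):
    """Gradient Local Ternary Pattern sketch: LTP on a Sobel magnitude map."""
    g = sobel(img_gray.astype(float))   # gradient magnitudes (8-bit-range input assumed)
    c = g[1:-1, 1:-1]                   # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    pos = np.zeros_like(c, dtype=np.uint8)   # ternary value +1 bits
    neg = np.zeros_like(c, dtype=np.uint8)   # ternary value -1 bits
    for bit, (dy, dx) in enumerate(offsets):
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        pos |= (n > c + t).astype(np.uint8) << bit
        neg |= (n < c - t).astype(np.uint8) << bit
    return pos, neg  # histogram both codes and concatenate as features
```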

    Out-of-plane action unit recognition using recurrent neural networks

A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2015.
    The face is a fundamental tool to assist in interpersonal communication and interaction between people. Humans use facial expressions to consciously or subconsciously express their emotional states, such as anger or surprise. As humans, we are able to easily identify changes in facial expressions even in complicated scenarios, but the task of facial expression recognition and analysis is complex and challenging for a computer. The automatic analysis of facial expressions by computers has applications in several scientific fields such as psychology, neurology, pain assessment, lie detection, intelligent environments, psychiatry, and emotion and paralinguistic communication. We look at methods of facial expression recognition, and in particular, the recognition of the Facial Action Coding System's (FACS) Action Units (AUs). FACS encodes the movements of individual facial muscles from slight, instantaneous changes in facial appearance; contractions of specific facial muscles are related to a set of units called AUs. We make use of Speeded Up Robust Features (SURF) to extract keypoints from the face and use the SURF descriptors to create feature vectors. SURF provides smaller feature vectors than other commonly used feature extraction techniques, is comparable to or outperforms other methods with respect to distinctiveness, robustness, and repeatability, and is much faster than other feature detectors and descriptors. The SURF descriptor is scale and rotation invariant and is unaffected by small viewpoint or illumination changes. We use the SURF feature vectors to train a recurrent neural network (RNN) to recognize AUs from the Cohn-Kanade database. An RNN is able to handle temporal data received from image sequences in which an AU or combination of AUs develops from a neutral face. We recognize AUs because they provide a fine-grained means of measurement that is independent of age, ethnicity, gender, and differences in expression appearance. In addition to recognizing FACS AUs from the Cohn-Kanade database, we use our trained RNNs to recognize the development of pain in human subjects. We make use of the UNBC-McMaster pain database, which contains image sequences of people experiencing pain. In some cases, pain causes the face to move out-of-plane or with some degree of in-plane movement. The temporal processing ability of RNNs can assist in classifying AUs where the face is occluded or not facing frontally for some part of the sequence. Results are promising when tested on the Cohn-Kanade database, with higher overall recognition rates for upper face AUs than lower face AUs. Since keypoints are globally extracted from the face in our system, local feature extraction could provide improved recognition results in future work. We also see satisfactory recognition results when tested on samples with out-of-plane head movement, demonstrating the temporal processing ability of RNNs.
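The sketch below illustrates one plausible reading of this setup: per-frame SURF descriptors pooled into fixed-length vectors and a recurrent network classifying the frame sequence into AUs. Note that SURF requires an opencv-contrib build with non-free modules enabled, and the mean-pooling step and GRU (standing in for the dissertation's RNN) are assumptions.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

# SURF lives in opencv-contrib and may need a non-free-enabled build
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def frame_vector(gray):
    """Mean-pool the 64-D SURF descriptors of one frame (assumed pooling)."""
    _, desc = surf.detectAndCompute(gray, None)
    return np.zeros(64, np.float32) if desc is None else desc.mean(axis=0)

class AUClassifier(nn.Module):
    def __init__(self, n_aus, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_aus)
    def forward(self, seq):               # seq: (batch, frames, 64)
        _, h = self.rnn(seq)
        return self.head(h[-1])           # multi-label AU logits

# seq = torch.from_numpy(np.stack([frame_vector(f) for f in frames]))[None]
# logits = AUClassifier(n_aus=12)(seq)    # train with BCEWithLogitsLoss
```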

    A Machine Learning Approach for Expression Detection in Healthcare Monitoring Systems

Expression detection plays a vital role in determining a patient's condition in healthcare systems, helping monitoring teams respond swiftly in an emergency. Due to the lack of suitable methods, results are often compromised in unconstrained environments because of pose, scale, occlusion, and illumination variations in images of the patient's face. A novel patch-based multiple local binary pattern (LBP) feature extraction technique is proposed for analyzing human behavior using facial expression recognition. It consists of feature engineering based on three-patch LBP (TPLBP) and four-patch LBP (FPLBP). Image representations are encoded from local patch statistics using these descriptors: unlike pixel-based methods, TPLBP and FPLBP capture similarities between adjacent patches of pixels encoded as short bit strings. The coded images are transformed into the frequency domain using a discrete cosine transform (DCT), and the most discriminant features extracted from the coded DCT images are combined into a feature vector. A support vector machine (SVM), k-nearest neighbors (KNN), and naïve Bayes (NB) are used to classify facial expressions from the selected features. Extensive experimentation on the standard extended Cohn-Kanade (CK+) and Oulu–CASIA datasets demonstrates that the proposed methodology outperforms the techniques used for comparison.
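The later stages of this pipeline can be sketched as follows: a coded (e.g., TPLBP/FPLBP) image is transformed with a 2-D DCT, low-frequency coefficients are kept as discriminant features, and the three classifiers are compared; keeping the top-left coefficient block is an illustrative selection rule, not the paper's exact criterion.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

def dct_features(coded_img, k=16):
    """Keep the low-frequency k x k DCT block of a coded (LBP) image."""
    coeffs = dctn(coded_img.astype(float), norm="ortho")  # 2-D DCT
    return coeffs[:k, :k].ravel()        # assumed selection rule

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NB": GaussianNB(),
}
# for name, clf in classifiers.items():
#     clf.fit(X_train, y_train)
#     print(name, clf.score(X_test, y_test))
```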

    Feature Representation Learning with Adaptive Displacement Generation and Transformer Fusion for Micro-Expression Recognition

Micro-expressions are spontaneous, rapid, and subtle facial movements that can neither be forged nor suppressed. They are important nonverbal communication cues, but are transient and of low intensity, and thus difficult to recognize. Recently, deep learning methods using feature extraction and fusion have been developed for micro-expression (ME) recognition; however, feature learning targeted to ME characteristics and efficient feature fusion still require further study. To address these issues, we propose a novel framework, Feature Representation Learning with adaptive Displacement Generation and Transformer fusion (FRL-DGT), in which a convolutional Displacement Generation Module (DGM) trained with self-supervised learning extracts dynamic features from onset/apex frames targeted to the subsequent ME recognition task, and a well-designed Transformer fusion mechanism composed of three Transformer-based fusion modules (local and global fusion based on AU regions, plus full-face fusion) extracts multi-level informative features after the DGM for the final ME prediction. Extensive experiments with a solid leave-one-subject-out (LOSO) evaluation demonstrate the superiority of the proposed FRL-DGT over state-of-the-art methods.
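A loose sketch of Transformer-based fusion in the spirit of FRL-DGT's local/global/full-face modules appears below; the token layout, learnable class token, and single encoder layer are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Fuses local, global (AU-region) and full-face tokens with attention."""
    def __init__(self, dim=256, heads=4, n_classes=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learnable class token
        self.head = nn.Linear(dim, n_classes)

    def forward(self, local_t, global_t, face_t):
        # each input: (batch, n_tokens, dim) features from one fusion level
        tokens = torch.cat([self.cls.expand(local_t.size(0), -1, -1),
                            local_t, global_t, face_t], dim=1)
        return self.head(self.encoder(tokens)[:, 0])  # predict from class token

# logits = FusionBlock()(torch.randn(2, 8, 256),
#                        torch.randn(2, 4, 256),
#                        torch.randn(2, 1, 256))
```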