369 research outputs found

    An Analysis of Facial Expression Recognition Techniques

    In the present era of technology, we need applications that are easy to use and user-friendly, so that even people with specific disabilities can use them easily. Facial expression recognition plays a vital role, and poses significant challenges, in the computer vision and pattern recognition communities, and it has attracted considerable attention due to its potential applications in areas such as human-machine interaction, surveillance, robotics, driver safety, non-verbal communication, entertainment, health care and psychology. Facial expression recognition is also of major importance in face recognition for image understanding and analysis. Many algorithms have been implemented under different static (uniform background, identical poses, similar illumination) and dynamic (position variation, partial occlusion, orientation, varying lighting) conditions. In general, facial expression recognition consists of three main steps: face detection, then feature extraction, and finally classification. In this survey paper we discuss different types of facial expression recognition techniques, the methods they use and their performance measures.
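    As a concrete illustration of the three-step pipeline mentioned above (face detection, feature extraction, classification), the following Python sketch uses an OpenCV Haar-cascade detector, raw pixel features and an SVM; the detector choice, the 48x48 crop size and the `images`/`labels` variables are illustrative assumptions, not any particular surveyed method.

    # Minimal sketch of a generic FER pipeline: detect face -> extract features -> classify.
    # Assumes OpenCV and scikit-learn; `images` and `labels` are hypothetical training data.
    import cv2
    import numpy as np

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def extract_face_features(gray_image, size=(48, 48)):
        """Detect the largest face and return a flattened, normalized crop."""
        faces = detector.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
        crop = cv2.resize(gray_image[y:y + h, x:x + w], size)
        return crop.astype(np.float32).ravel() / 255.0      # simple pixel-intensity features

    # Hypothetical usage with scikit-learn:
    # from sklearn.svm import SVC
    # feats = [extract_face_features(img) for img in images]
    # clf = SVC(kernel="rbf").fit([f for f in feats if f is not None], labels)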

    Time-Efficient Hybrid Approach for Facial Expression Recognition

    Facial expression recognition is an emerging research area for improving human-computer interaction. This research plays a significant role in social communication, commercial enterprise, law enforcement, and other computer interactions. In this paper, we propose a time-efficient hybrid design for facial expression recognition that combines image pre-processing steps with different Convolutional Neural Network (CNN) structures, providing better accuracy and greatly improved training time. We predict the seven basic emotions of human faces: sadness, happiness, disgust, anger, fear, surprise and neutral. The model performs well on challenging facial expression recognition cases where the expressed emotion could be one of several with quite similar facial characteristics, such as anger, disgust, and sadness. The experiments were conducted across multiple databases and different facial orientations, and to the best of our knowledge the model achieved an accuracy of about 89.58% on the KDEF dataset, 100% on the JAFFE dataset and 71.975% on the combined (KDEF + JAFFE + SFEW) dataset across these scenarios. Performance evaluation was done with cross-validation techniques to avoid bias towards a specific set of images from a database.
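    As a rough illustration of a CNN stage of the kind described above, the sketch below builds a small seven-class Keras network; the layer sizes, the 48x48 grayscale input and the training settings are assumptions for illustration, not the authors' architecture.

    # Minimal sketch of a small CNN for seven-class facial expression recognition.
    # Architecture and hyperparameters are illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_fer_cnn(input_shape=(48, 48, 1), num_classes=7):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu", padding="same"),
            layers.GlobalAveragePooling2D(),
            layers.Dropout(0.5),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Hypothetical usage with pre-processed face crops and integer emotion labels:
    # model = build_fer_cnn()
    # model.fit(train_images, train_labels, validation_split=0.1, epochs=30)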

    Wavelet based approach for facial expression recognition

    Facial expression recognition is one of the most active fields of research, and many facial expression recognition methods have been developed and implemented. Neural networks (NNs) have the capability to undertake such pattern recognition tasks owing to their characteristics: they are capable of learning and generalization, non-linear mapping, and parallel computation. Backpropagation neural networks (BPNNs) are the most commonly used approach. In this study, BPNNs were used as classifiers to categorize facial expression images into seven classes of expressions: anger, disgust, fear, happiness, sadness, neutral and surprise. For feature extraction, three discrete wavelet transforms were used to decompose the images, namely the Haar wavelet, the Daubechies (4) wavelet and the Coiflet (1) wavelet. To analyze the proposed method, a facial expression recognition system was built. The proposed method was tested on static images from the JAFFE database.
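    The sketch below illustrates the general idea of wavelet feature extraction followed by a backpropagation network, assuming PyWavelets and scikit-learn; the two-level decomposition, the use of approximation coefficients only, and the hidden-layer size are illustrative choices, not the paper's exact configuration.

    # Minimal sketch: discrete wavelet decomposition as features, MLP (backprop) as classifier.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def wavelet_features(gray_image, wavelet="haar", level=2):
        """Decompose the image and use the approximation coefficients as a feature vector."""
        coeffs = pywt.wavedec2(gray_image, wavelet, level=level)
        approx = coeffs[0]                                   # low-frequency sub-band
        return approx.ravel() / (np.abs(approx).max() + 1e-8)

    # The three wavelets studied in the paper can be selected by name:
    # "haar", "db4" (Daubechies 4) and "coif1" (Coiflet 1).
    # X = np.array([wavelet_features(img, "db4") for img in face_images])
    # bpnn = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(X, labels)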

    FACE, GENDER AND RACE CLASSIFICATION USING MULTI-REGULARIZED FEATURES LEARNING

    This paper investigates a new approach for face, gender and race classification, called multi-regularized learning (MRL). This approach combines ideas from the recently proposed algorithms multi-stage learning (MSL) and multi-task feature learning (MTFL). In our approach, we first reduce the dimensionality of the training faces using PCA. Next, for a given test (probe) face, we use MRL to exploit the relationships among multiple shared stages generated by changing the regularization parameter. Our approach results in a convex optimization problem that controls the trade-off between fidelity to the data (training) and smoothness of the solution (probe). Our MRL algorithm is compared against different state-of-the-art methods on face recognition (FR), gender classification (GC) and race classification (RC) using different experimental protocols with the AR, LFW, FEI, Lab2 and Indian databases. Results show that our algorithm performs very competitively.
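    The sketch below illustrates only the shared preprocessing idea of the abstract: PCA projection of the faces followed by a sweep over the regularization parameter. It is a simplified stand-in built around a hypothetical `pca_regularization_sweep` helper and does not implement the MRL algorithm itself.

    # Minimal sketch: PCA dimensionality reduction, then one classifier per regularization value.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def pca_regularization_sweep(X, y, n_components=100, alphas=(0.01, 0.1, 1.0, 10.0)):
        """Project faces with PCA, then cross-validate one model per regularization strength."""
        pca = PCA(n_components=n_components).fit(X)
        Z = pca.transform(X)
        scores = {}
        for alpha in alphas:
            clf = LogisticRegression(C=1.0 / alpha, max_iter=1000)
            scores[alpha] = cross_val_score(clf, Z, y, cv=5).mean()
        return pca, scores

    # Hypothetical usage with vectorized training faces and class labels:
    # pca, scores = pca_regularization_sweep(train_faces, train_labels)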

    Feature Extraction by Deep Learning for Kansei (Affective) Estimation

    Hiroshima University, Doctor of Engineering.

    Combining local descriptors and classification methods for human emotion recognition.

    Master's degree. University of KwaZulu-Natal, Durban. Human Emotion Recognition occupies a very important place in artificial intelligence and has several applications, such as emotionally intelligent robots, driver fatigue monitoring, mood prediction, and many others. Facial Expression Recognition (FER) systems can recognize human emotions by extracting face image features and classifying them as one of several prototypic emotions. Local descriptors are good at encoding micro-patterns and capturing their distribution in a sub-region of an image. Moreover, dividing the face into sub-regions introduces information about micro-pattern locations, essential for developing robust facial expression features. Hence, local descriptors' efficiencies depend heavily on parameters such as the sub-region size and histogram length. However, the extraction parameters are seldom optimized in existing approaches. This dissertation reviews several local descriptors and classifiers, and experiments are conducted to improve the robustness and accuracy of existing FER methods. A study of the Histogram of Oriented Gradients (HOG) descriptor inspires this research to propose a new face registration algorithm. The approach uses contrast-limited histogram equalization to enhance the image, followed by binary thresholding and blob detection operations to rotate the face upright. Additionally, this research proposes a new method for optimized FER. The main idea behind the approach is to optimize the calculation of feature vectors by varying the extraction parameter values, producing several feature sets; the best extraction parameter values are selected by evaluating the classification performance of each feature set, as sketched below. The proposed approach is also implemented using different combinations of local descriptors and classification methods under the same experimental conditions. The results reveal that the proposed methods produced better performance than reported in previous studies, with an improvement of up to 2% over previous works. The results showed that HOG was the most effective local descriptor, while Support Vector Machines (SVM) and Multi-Layer Perceptron (MLP) were the best classifiers. Hence, the best combinations were HOG+SVM and HOG+MLP.
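    The sketch below illustrates the HOG + SVM pairing reported as strongest, with contrast-limited histogram equalization as a pre-processing step, assuming scikit-image and scikit-learn; the cell and block sizes are the kind of extraction parameters the dissertation proposes to optimize, and the exact values here are assumptions.

    # Minimal sketch: CLAHE enhancement, HOG descriptor, SVM classifier.
    import numpy as np
    from skimage import exposure
    from skimage.feature import hog
    from sklearn.svm import SVC

    def hog_features(gray_image, cell=(8, 8), block=(2, 2)):
        """Contrast-limited histogram equalization followed by a HOG descriptor of the face."""
        enhanced = exposure.equalize_adapthist(gray_image)
        return hog(enhanced, orientations=9,
                   pixels_per_cell=cell, cells_per_block=block)

    # Varying `cell` and `block` reproduces the idea of sweeping extraction parameters
    # and keeping the configuration with the best validation score.
    # X = np.array([hog_features(img) for img in face_images])
    # clf = SVC(kernel="linear").fit(X, labels)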

    NEURAL NETWORK CORRELATION BASED SIMILARITY EVALUATION WITH ZERNIKE MOMENTS FOR THE POSE-INVARIANT FACE RECOGNITION

    Human face recognition is among the best-known applications of pattern recognition for identification, and the development of face recognition systems is growing rapidly in industry and research organizations. Different parameters and methods are used for face recognition. In this research project, we discuss the different algorithms used for face recognition, namely Zernike Moments (ZMs) and correlation classification (CC), and compare them with the proposed algorithm Z_CC (Zernike with Correlation Classification). The angular information, or rotation, of the face is calculated using Zernike moments to obtain the degree or radian of face rotation from the frontal view. A robust combination of angle-invariant and scale-invariant features, based on Zernike moments and correlation classification, is proposed together with neural network classification. The experiments are performed on a variety of datasets; the multi-object dataset is assembled by collecting training samples with rotated faces. The Z_NN (Zernike with neural network) algorithm provides the best recognition rate for human face recognition, at 90%. In this algorithm, Zernike moments and correlation are used for global feature extraction, and the resulting features are then compared using a neural network.
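    The sketch below illustrates Zernike-moment features combined with a simple correlation-based matching rule, in the spirit of the Z_CC comparison; the use of the mahotas library, the radius/degree values and the nearest-correlation rule are illustrative assumptions, and the neural-network stage of Z_NN is not reproduced here.

    # Minimal sketch: rotation-invariant Zernike moment features + correlation matching.
    import numpy as np
    import mahotas

    def zernike_features(gray_image, radius=64, degree=8):
        """Rotation-invariant Zernike moment magnitudes of the face region."""
        return mahotas.features.zernike_moments(gray_image, radius, degree=degree)

    def correlation_classify(probe, gallery_features, gallery_labels):
        """Assign the label of the gallery face whose features correlate best with the probe."""
        scores = [np.corrcoef(probe, g)[0, 1] for g in gallery_features]
        return gallery_labels[int(np.argmax(scores))]

    # Hypothetical usage with cropped grayscale faces:
    # gallery = [zernike_features(img) for img in training_faces]
    # label = correlation_classify(zernike_features(test_face), gallery, training_labels)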