
    Analysis of techniques used to recognize and identify human emotions

    Facial expression is a major channel of non-verbal communication in day-to-day life. Statistical analyses suggest that only 7 percent of a message is conveyed through verbal communication, while 55 percent is transmitted by facial expression. Emotional expression has been a subject of physiological research since Darwin's work on emotional expression in the 19th century. According to psychological theory, human emotion is commonly classified into six basic emotions: happiness, fear, anger, surprise, disgust, and sadness. Facial expressions, together with the character of speech, play a foremost role in conveying these emotions. In the 1970s, researchers developed the Facial Action Coding System (FACS), a system based on the anatomy of the face, and since its development research in emotion recognition has progressed rapidly. This work is intended to give a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions. The results of this analysis will help identify suitable techniques, algorithms, and methodologies for future research directions. The paper presents an extensive analysis of the recognition techniques used to address the complexity of recognizing facial expressions, and will also help researchers and scholars choose appropriate techniques in the facial expression recognition domain.

    The Effect of Using Histogram Equalization and Discrete Cosine Transform on Facial Keypoint Detection

    This study aims to determine the effect of using Histogram Equalization and the Discrete Cosine Transform (DCT) on detecting facial keypoints, which can be applied to 3D facial reconstruction in face recognition. Four combinations of methods, comprising Histogram Equalization and the removal of low-frequency coefficients using the DCT, were tested with five feature detectors: SURF, Minimum Eigenvalue, Harris-Stephens, FAST, and BRISK. Test data were obtained from the Head Pose Image and ORL databases, and the results were evaluated using the F-score. The highest F-score for the Head Pose Image dataset is 0.140, achieved by combining DCT and Histogram Equalization with the SURF feature detector. The highest F-score for the ORL database is 0.33, achieved by combining DCT and Histogram Equalization with the BRISK feature detector.
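    The preprocessing combination described above can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration, not the authors' code: the function names and the size of the zeroed low-frequency DCT block (`cutoff`) are assumptions, since the abstract does not state them.

```python
import numpy as np
from scipy.fft import dctn, idctn

def hist_equalize(img):
    # Classic histogram equalization of a 2-D uint8 image
    # (assumes the image is not constant).
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

def remove_low_freq(img, cutoff=4):
    # Remove low-frequency content (e.g. illumination) by zeroing the
    # top-left cutoff x cutoff block of DCT coefficients, then inverting.
    # The exact block size is an assumption for illustration.
    coeffs = dctn(img.astype(float), norm="ortho")
    coeffs[:cutoff, :cutoff] = 0.0
    return idctn(coeffs, norm="ortho")
```

    The filtered image can then be handed to any of the five keypoint detectors (SURF, Minimum Eigenvalue, Harris-Stephens, FAST, BRISK) in place of the raw input.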

    Local feature extraction based facial emotion recognition: a survey

    Notwithstanding recent technological advances, the identification of facial and emotional expressions remains one of the greatest challenges scientists face. The human face is generally treated as a composition of textures arranged in micro-patterns. There has been a tremendous increase in the use of local binary pattern (LBP) based texture algorithms, which have proved essential for a variety of tasks and for extracting key attributes from an image. Over the years, many LBP variants have been proposed in the literature; what is still missing is a thorough and comprehensive analysis of their individual performance. This work fills that gap by performing a large-scale performance evaluation of 46 recent state-of-the-art LBP variants for facial expression recognition. Extensive experimental results on the well-known, challenging benchmark KDEF, JAFFE, CK and MUG databases, taken under different facial expression conditions, indicate that a number of the evaluated LBP-like methods achieve promising results that are better than or competitive with several recent state-of-the-art facial recognition systems. Recognition rates of 100%, 98.57%, 95.92% and 100% are reached on the CK, JAFFE, KDEF and MUG databases, respectively.
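    The basic LBP operator that all of these variants build on can be sketched as follows. This is a minimal NumPy version of the original 3x3 LBP (compare each pixel's eight neighbours to the centre and histogram the resulting 8-bit codes), not any particular variant evaluated in the survey; the bit ordering is a common convention, not prescribed by the source.

```python
import numpy as np

def lbp_histogram(img):
    # Basic 3x3 LBP: threshold the 8 neighbours of each interior pixel
    # against the centre, pack the bits into an 8-bit code, and return
    # the normalised 256-bin histogram of codes as a texture descriptor.
    c = img[1:-1, 1:-1].astype(int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx].astype(int)
        code |= (nb >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()
```

    The surveyed variants differ mainly in the sampling geometry, the thresholding rule, and how codes are grouped into histogram bins.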

    Moving towards object recognition with deep learning for autonomous driving applications

    Object recognition and pedestrian detection are of crucial importance to autonomous driving applications. Deep learning based methods have exhibited very large improvements in accuracy and fast decision-making in real-time applications, thanks to CUDA support. In this paper, we propose two Convolutional Neural Network (CNN) architectures with different numbers of layers. We extract features from the proposed CNN, a CNN with the AlexNet architecture, and a Bag of Visual Words (BoW) approach using SURF, HOG and k-means. Linear SVM classifiers are trained on these features. In the experiments, we carry out object recognition and pedestrian detection tasks on the benchmark Caltech 101 and Caltech Pedestrian Detection datasets.
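    The Bag of Visual Words branch of the pipeline can be sketched as follows: cluster local descriptors (SURF or HOG patches) into a k-means codebook, then represent each image as a normalised histogram of nearest-word assignments. This is a minimal NumPy sketch under assumed names; descriptors are assumed to arrive as row vectors, and the farthest-point initialisation is an illustrative choice, not the paper's.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Build a visual-word codebook: farthest-point initialisation
    # followed by standard Lloyd iterations.
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d2.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def bow_encode(descriptors, codebook):
    # Assign each local descriptor to its nearest visual word and
    # return the normalised word histogram for the image.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

    The resulting histograms (like the CNN features) are then fed to a linear SVM for classification.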

    Spontaneous Subtle Expression Detection and Recognition based on Facial Strain

    Optical strain is an extension of optical flow that is capable of quantifying subtle changes on faces and representing minute facial motion intensities at the pixel level. This is computationally essential for the relatively new field of spontaneous micro-expressions, where subtle expressions can be technically challenging to pinpoint. In this paper, we present a novel method for detecting and recognizing micro-expressions that utilizes facial optical strain magnitudes to construct optical strain features and optical strain weighted features. The two sets of features are then concatenated to form the resultant feature histogram. Experiments were performed on the CASME II and SMIC databases. On both databases we demonstrate the usefulness of optical strain information and, more importantly, that our best approaches outperform the original baseline results for both the detection and recognition tasks. A comparison of the proposed method with other existing spatio-temporal feature extraction approaches is also presented. Comment: 21 pages (including references), single-column format, accepted to Signal Processing: Image Communication journal.
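    Optical strain is the symmetric part of the optical-flow gradient, eps = 0.5 * (grad F + grad F^T), and its per-pixel norm gives the strain magnitude the paper builds its features from. A minimal NumPy sketch (assuming the flow field has already been computed by some optical-flow method) is:

```python
import numpy as np

def optical_strain_magnitude(u, v):
    # u, v: horizontal and vertical optical-flow components (2-D arrays).
    # The strain tensor is the symmetric part of the flow gradient;
    # its per-pixel Frobenius norm is the optical strain magnitude.
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    exx = du_dx
    eyy = dv_dy
    exy = 0.5 * (du_dy + dv_dx)          # shear term appears twice in the tensor
    return np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)
```

    A rigid translation of the whole face produces zero strain everywhere, which is why strain highlights deformation (subtle expressions) rather than head motion.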

    Moving Learning Machines Towards Fast Real-Time Applications: A High-Speed FPGA-based Implementation of the OS-ELM Training Algorithm

    Currently, a number of emerging online learning applications handle data streams in real time. The On-line Sequential Extreme Learning Machine (OS-ELM) has been successfully used in real-time condition prediction applications because of its good generalization performance at an extreme learning speed, but the number of trainings per second (the training frequency) achieved in these continuous learning applications has to be further increased. This paper proposes a performance-optimized implementation of the OS-ELM training algorithm for real-time applications. In this setting, the natural way of feeding the training of the neural network is one-by-one, i.e., training the network on each new incoming input vector. Applying this restriction, the computational needs are drastically reduced. An FPGA-based implementation of the tailored OS-ELM algorithm is used to analyze, in a parameterized way, the level of optimization achieved. We observed that the tailored algorithm reduces the number of clock cycles consumed by the training execution to approximately 1% of the original. This enables high sequential training rates, such as a sequential training frequency of 14 kHz for an SLFN with 40 hidden neurons, or 180 Hz for an SLFN with 500 hidden neurons. In practice, the proposed implementation computes the training almost 100 times faster, or more, than other implementations in the literature. Moreover, the clock cycle count follows a quadratic complexity O(N²), with N the number of hidden neurons, and is only weakly influenced by the number of input neurons. However, it shows a pronounced sensitivity to data type precision, even on small problems, which forces the use of double-precision floating-point data types to avoid finite-precision arithmetic effects.
    In addition, distributed memory was found to be the limiting resource, so current FPGA devices can support OS-ELM-based on-chip learning with up to 500 hidden neurons. In conclusion, the proposed hardware implementation of the OS-ELM offers great possibilities for on-chip learning in portable systems and real-time applications where frequent and fast training is required.
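    The one-by-one OS-ELM update is a rank-1 recursive least-squares step, which is what makes the per-sample training so cheap. The sketch below is a minimal double-precision NumPy version to show the algorithm, not the FPGA implementation; the sigmoid hidden layer, weight ranges, and large-diagonal initialisation of P are illustrative assumptions.

```python
import numpy as np

class OSELM:
    # One-by-one OS-ELM for an SLFN: random fixed input weights,
    # sigmoid hidden layer, output weights trained by recursive
    # least squares (double precision, as the text recommends).
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1, 1, (n_in, n_hidden))
        self.b = rng.uniform(-1, 1, n_hidden)
        self.beta = np.zeros((n_hidden, n_out))
        self.P = np.eye(n_hidden) * 1e4   # large P0 approximates (H0^T H0)^-1

    def _hidden(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.W + self.b)))

    def partial_fit(self, x, t):
        # Rank-1 RLS update for one incoming sample (x, t):
        #   P <- P - (P h^T h P) / (1 + h P h^T)
        #   beta <- beta + P h^T (t - h beta)
        h = self._hidden(x)[None, :]               # 1 x L hidden row
        Ph = self.P @ h.T                          # L x 1
        self.P -= (Ph @ Ph.T) / (1.0 + (h @ Ph).item())
        self.beta += self.P @ h.T @ (t[None, :] - h @ self.beta)

    def predict(self, x):
        return self._hidden(x) @ self.beta
```

    Because each update involves only matrix-vector products of size N, the cost per sample is O(N²) in the number of hidden neurons, matching the quadratic clock-cycle complexity reported above.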

    A HUMAN EMOTION RECOGNITION APPROACH USING THE EXTREME LEARNING MACHINE

    Human emotion recognition has been a challenging issue in the field of human-computer interaction. To form a more natural interaction between human and computer, the computer should be able to discern and respond to human emotions. In this paper, an approach for recognizing human emotion is proposed. The approach uses a Haar classifier to detect the eyes, eyebrows, and mouth on the face, and Gabor wavelets to extract features from these facial attributes. Before classification, the feature dimensionality is reduced with PCA. The approach then employs SLFNs trained with the Extreme Learning Machine (ELM) to classify the features. In the experiments, the approach is tested in two cases, face personalization and face generalization, with ten subjects expressing the six basic emotions and a neutral state. The performance of ELM is evaluated by comparing it with k-NN and Support Vector Machine (SVM) classifiers. In the face generalization case, ELM reaches a recognition rate of 93.36%, k-NN 87.20%, and SVM 85.8%. In the face personalization case, ELM obtains 30.41%, k-NN 25.41%, and SVM 14.06%.
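    The PCA step used to reduce the Gabor feature dimensionality before classification can be sketched as follows. This is a generic SVD-based PCA in NumPy, with function name and component count as free illustrative choices; it is not tied to the authors' specific feature sizes.

```python
import numpy as np

def pca_reduce(X, n_components):
    # X: (n_samples, n_features) matrix of Gabor feature vectors.
    # Centre the data and project onto the top principal components
    # (right singular vectors of the centred matrix).
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, components, mean
```

    The projected vectors, rather than the raw Gabor responses, are what the ELM (or k-NN/SVM baselines) would classify; a new sample is reduced with the same `components` and `mean`.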

    SMEConvNet: A Convolutional Neural Network for Spotting Spontaneous Facial Micro-Expression from Long Videos

    Micro-expressions are subtle and involuntary facial expressions that may reveal the hidden emotions of human beings. Spotting a micro-expression means locating the moment when the micro-expression happens, which is a primary step for micro-expression recognition. Previous work on micro-expression spotting focused on spotting micro-expressions in short videos using hand-crafted features. In this paper, we present a methodology for spotting micro-expressions in long videos. Specifically, a new convolutional neural network named SMEConvNet (Spotting Micro-Expression Convolutional Network) was designed to extract features from video clips, the first time deep learning has been used in micro-expression spotting. A feature matrix processing method was then proposed for spotting the apex frame in a long video; it uses a sliding window and takes the characteristics of micro-expressions into account when searching for the apex frame. Experimental results demonstrate that the proposed method achieves better performance than existing state-of-the-art methods.
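    The sliding-window search over the per-frame feature matrix can be sketched as follows. This is a deliberately simplified stand-in, not the paper's method: it scores each frame by its feature distance from the first (onset) frame and smooths the scores with a mean window before taking the argmax, whereas SMEConvNet's feature-matrix processing is more elaborate.

```python
import numpy as np

def spot_apex(features, window=5):
    # features: (n_frames, d) matrix of per-frame feature vectors
    # (in the paper these come from the CNN; here they are generic).
    # Score frames by distance from the first frame, smooth with a
    # sliding mean window, and return the peak as the apex frame index.
    scores = np.linalg.norm(features - features[0], axis=1)
    kernel = np.ones(window) / window
    smoothed = np.convolve(scores, kernel, mode="same")
    return int(smoothed.argmax())
```

    Smoothing over a window matched to typical micro-expression duration suppresses single-frame noise spikes that would otherwise be mistaken for the apex.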