
    Neural Network-Based Expression Recognition System for Static Facial Images

    Affective computing is a field spanning computer science, psychology, and cognitive science that studies how to interpret, recognize, process, and simulate human affect. Humans express their emotions in a variety of ways, such as body gestures, words, vocal cues, and, above all, facial expressions. Non-verbal behavior is a significant component of communication, and facial expressions of emotion are its most important and complex signal. Facial Expression Recognition (FER) is an interesting and challenging task in artificial intelligence. The FER system in this study consists of three steps: preprocessing, feature extraction, and expression classification. The paper presents a comparative analysis of expression recognition based on a Neural Network (NN) with three feature extraction methods: Sobel edge, Histogram of Oriented Gradients, and Local Binary Patterns. The NN-based expression recognition system achieves accuracies of 95.82% and 97.68% on the JAFFE and CK+ datasets, respectively. The results show that the edge features are the most effective for recognizing human expressions in still images.
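
    The pipeline described above lends itself to a compact illustration. The following Python sketch shows one plausible way to compute the three feature types (Sobel edge magnitudes, HOG descriptors, and uniform LBP histograms) and feed one of them to a small neural network classifier; the image size, network width, and training details are assumptions, not the paper's actual settings.

```python
# Illustrative sketch only: dataset handling, face crop size, and hyper-parameters
# are assumptions, not the paper's published configuration.
import cv2
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.neural_network import MLPClassifier

def sobel_features(gray):
    """Edge-magnitude features from Sobel gradients."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy).ravel()

def hog_features(gray):
    """Histogram of Oriented Gradients descriptor."""
    return hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def lbp_features(gray, points=8, radius=1):
    """Uniform Local Binary Pattern histogram."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def train_fer(face_crops, labels):
    """Train an MLP on one feature type (HOG here); face_crops are assumed
    to be 48x48 grayscale face images, labels the expression classes."""
    X = np.array([hog_features(img) for img in face_crops])
    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
    clf.fit(X, labels)
    return clf
```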

    Artificial Intelligence Tools for Facial Expression Analysis.

    Inner emotions show visibly upon the human face and are understood as a basic guide to an individual’s inner world. It is, therefore, possible to determine a person’s attitudes, and the effects of others’ behaviour on their deeper feelings, by examining facial expressions. In real-world applications, machines that interact with people need strong facial expression recognition, which holds advantages for varied applications in affective computing, advanced human-computer interaction, security, stress and depression analysis, robotic systems, and machine learning. This thesis starts by proposing a benchmark of dynamic versus static methods for facial Action Unit (AU) detection. An AU activation is a set of local facial muscle movements that occur in unison to constitute a natural facial expression event. Detecting AUs automatically can provide explicit benefits since it considers both static and dynamic facial features. For this research, AU occurrence detection was conducted by extracting static and dynamic features, using both conventional hand-crafted and deep-learned representations, from each frame of a video. This confirmed the superior performance of a pretrained model. Next, temporal modelling was investigated to detect the underlying temporal variation phases in dynamic sequences using supervised and unsupervised methods. During these processes, stacking dynamic features on top of static ones proved important for encoding deep features that capture temporal information when the spatial and temporal schemes are combined. The study also found that fusing spatial and temporal features provides more long-term temporal pattern information. Moreover, we hypothesised that using an unsupervised method would enable the learning of invariant information from dynamic textures. Recently, fresh cutting-edge developments have been created by approaches based on Generative Adversarial Networks (GANs). In the second section of this thesis, we propose a model based on an unsupervised DCGAN for facial feature extraction and classification, used to generate facial expression images under different arbitrary poses (frontal, multi-view, and in the wild) and to recognize emotion categories and AUs, in an attempt to resolve the problem of recognising the seven static classes of emotion in the wild. Thorough cross-database experiments demonstrate that this approach can improve generalization. Additionally, we showed that the features learnt by the DCGAN process are poorly suited to encoding facial expressions when observed under multiple views, or when trained from a limited number of positive examples. Finally, this research focuses on disentangling identity from expression for facial expression recognition. A novel technique was implemented for emotion recognition from a single monocular image. A large-scale dataset (Face vid) was created from facial image videos rich in variations and distribution of facial dynamics, appearance, identities, expressions, and 3D poses. This dataset was used to train a DCNN (ResNet) to regress the expression parameters of a 3D Morphable Model jointly with a back-end classifier.
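
    One of the core ideas above, re-using an unsupervised DCGAN's discriminator as a feature extractor for expression classification, can be sketched concretely. The following PyTorch fragment is a minimal illustration under assumed settings (64x64 grayscale crops, seven emotion classes); it is not the thesis' actual architecture or training procedure.

```python
# Minimal sketch: DCGAN discriminator trunk re-used as a frozen feature extractor,
# with a small classification head for the seven basic emotions (assumed settings).
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, nc=1, ndf=64):
        super().__init__()
        self.features = nn.Sequential(                     # 64x64 input -> 4x4 feature map
            nn.Conv2d(nc, ndf, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1), nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1), nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1), nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
        )
        self.real_fake = nn.Conv2d(ndf * 8, 1, 4, 1, 0)    # adversarial (real/fake) head

    def forward(self, x):
        return self.real_fake(self.features(x)).view(-1)

class ExpressionHead(nn.Module):
    """After adversarial pre-training, freeze the trunk and classify expressions."""
    def __init__(self, trunk, n_classes=7, ndf=64):
        super().__init__()
        self.trunk = trunk
        self.fc = nn.Linear(ndf * 8 * 4 * 4, n_classes)

    def forward(self, x):
        with torch.no_grad():                              # trunk stays frozen
            f = self.trunk(x)
        return self.fc(f.flatten(1))

faces = torch.randn(8, 1, 64, 64)                          # dummy batch of face crops
head = ExpressionHead(Discriminator().features)
logits = head(faces)                                       # shape (8, 7): class scores
```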

    Automatic emotional state detection using facial expression dynamic in videos

    In this paper, an automatic emotion detection system is built that enables a computer or machine to detect emotional states from facial expressions in human-computer communication. Firstly, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, GEMEP_FERA and AVEC2013, and achieves satisfactory performance compared with the provided baseline results. With this emotional state detection capability, a machine can read its user's facial expression automatically. The technique can be integrated into applications such as smart robots, interactive games, and smart surveillance systems.
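
    The abstract does not specify which dynamic motion features or learners were used, so the sketch below stands in with a common choice: per-frame dense optical flow statistics pooled over a clip, followed by a support vector regressor for a continuous affect dimension. All parameter values are assumptions.

```python
# Hedged sketch of clip-level motion features for continuous emotion prediction;
# the paper's actual features and learners may differ.
import cv2
import numpy as np
from sklearn.svm import SVR

def clip_motion_features(video_path):
    """Pool simple dense optical flow statistics over an entire clip."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    stats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        stats.append([mag.mean(), mag.std(), mag.max()])   # per-frame motion statistics
        prev = gray
    cap.release()
    return np.array(stats).mean(axis=0)                    # pool over the clip

# X: one feature vector per training clip, y: continuous labels (e.g. arousal scores).
# reg = SVR(kernel="rbf").fit(X, y)
```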

    Spontaneous Subtle Expression Detection and Recognition based on Facial Strain

    Optical strain is an extension of optical flow that is capable of quantifying subtle changes on faces and representing minute facial motion intensities at the pixel level. This is computationally essential for the relatively new field of spontaneous micro-expressions, where subtle expressions can be technically challenging to pinpoint. In this paper, we present a novel method for detecting and recognizing micro-expressions by utilizing facial optical strain magnitudes to construct optical strain features and optical strain weighted features. The two sets of features are then concatenated to form the resultant feature histogram. Experiments were performed on the CASME II and SMIC databases. We demonstrate on both databases the usefulness of optical strain information and, more importantly, that our best approaches are able to outperform the original baseline results for both detection and recognition tasks. A comparison of the proposed method with other existing spatio-temporal feature extraction approaches is also presented. Comment: 21 pages (including references), single column format, accepted to Signal Processing: Image Communication journal.
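
    The optical strain idea can be stated concretely: given a dense optical flow field (u, v) between consecutive frames, the strain tensor is the symmetric part of the flow gradient, and its magnitude highlights subtle facial motion. The sketch below follows that standard formulation; the paper's exact normalization and the weighting scheme used to build the feature histograms may differ.

```python
# Sketch of optical strain magnitude from a dense flow field (standard formulation;
# the paper's exact feature construction may differ).
import numpy as np

def optical_strain_magnitude(u, v):
    """u, v: 2-D arrays of horizontal/vertical flow components for one frame pair."""
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    e_xx = du_dx                          # normal strain along x
    e_yy = dv_dy                          # normal strain along y
    e_xy = 0.5 * (du_dy + dv_dx)          # shear strain (symmetric)
    return np.sqrt(e_xx**2 + e_yy**2 + 2.0 * e_xy**2)

# Per-frame strain maps can then be pooled into histograms ("optical strain features")
# or used to weight other descriptors, as the paper proposes.
```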

    FMX (EEPIS FACIAL EXPRESSION MECHANISM EXPERIMENT): FACIAL EXPRESSION RECOGNITION USING A BACKPROPAGATION NEURAL NETWORK

    In the near future, it is expected that robots will be able to interact with humans. Communication takes many forms: not only spoken words, but also body language, one example of which is facial expression. In human communication, facial expressions are constantly used to show emotions, whether happy, sad, angry, shocked, disappointed, or even relaxed. This final project focuses on building a robot consisting only of a head that can produce a variety of facial expressions like a human being. The humanoid face robot is divided into several subsystems: an image processing subsystem, a hardware subsystem, and a controller subsystem. In the image processing subsystem, a webcam is used to acquire images that are processed by a computer; the program is written with the Microsoft Visual C compiler and uses the Open Source Computer Vision Library (OpenCV). The image processing subsystem recognizes human facial expressions: image processing reveals the pattern of an object, and a Backpropagation Neural Network is used to recognize that pattern. The hardware subsystem is the humanoid robot face itself. The controller subsystem consists of a single ATMega128 microcontroller and a camera that can capture images at a distance of 50 to 120 cm. The robot operates as follows. Images are captured by the webcam and processed by the computer to obtain the human facial expression. The result is sent to the controller subsystem via serial communication, and the microcontroller then commands the hardware to make that facial expression. The outcome of this final project is that all of the subsystems can be integrated into a robot that responds to human expressions. The method used is simple but appears quite capable of recognizing human facial expressions. Keywords: OpenCV, Backpropagation Neural Network, Humanoid Robot
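
    The processing loop described above (webcam capture, expression classification, serial command to the microcontroller) can be outlined as follows. The original project used the Microsoft Visual C compiler with OpenCV; this is a hypothetical Python rendering, and the serial port name, baud rate, label set, and the pre-trained classifier file are placeholders, not part of the original project.

```python
# Hypothetical Python outline of the pipeline; the original used Visual C + OpenCV.
# Port name, baud rate, label set, and the trained classifier file are placeholders.
import cv2
import joblib
import serial

LABELS = ["happy", "sad", "angry", "shocked", "disappointed", "relaxed"]
clf = joblib.load("expression_backprop_nn.joblib")   # assumed: trained backpropagation NN

cam = cv2.VideoCapture(0)                            # webcam image acquisition
link = serial.Serial("/dev/ttyUSB0", 9600)           # serial link to the ATMega128
face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_det.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (32, 32)).ravel() / 255.0
        label = int(clf.predict([crop])[0])          # expression index
        link.write(bytes([label]))                   # controller drives the face hardware
        print(LABELS[label])
```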