42 research outputs found

    IMPROVING CNN FEATURES FOR FACIAL EXPRESSION RECOGNITION

    Abstract: Facial expression recognition is one of the challenging tasks in computer vision. In this paper, we analyzed and improved the performance of both handcrafted features and deep features extracted by a Convolutional Neural Network (CNN). Eigenfaces, HOG, and Dense-SIFT were used as handcrafted features. Additionally, we developed features based on the distances between facial landmarks and on SIFT descriptors around the centroids of the facial landmarks, leading to better performance than Dense-SIFT. We achieved 68.34% accuracy with a CNN model trained from scratch. By combining CNN features with handcrafted features, we achieved 69.54% test accuracy. Keywords: neural network, facial expression recognition, handcrafted feature
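The abstract does not spell out the fusion mechanism; a common choice, sketched below with illustrative (not reported) feature dimensions, is simple concatenation of the deep and handcrafted vectors before feeding a classifier:

```python
import numpy as np

# Hypothetical feature vectors for one face image; the dimensions are
# illustrative, not taken from the paper.
cnn_features = np.random.rand(256)   # deep features from a CNN's penultimate layer
hog_features = np.random.rand(144)   # handcrafted HOG descriptor
sift_features = np.random.rand(128)  # SIFT descriptor around a landmark centroid

# Fusion by simple concatenation: the combined vector would feed a classifier
# (e.g. an SVM or a softmax layer) trained on expression labels.
fused = np.concatenate([cnn_features, hog_features, sift_features])
print(fused.shape)  # (528,)
```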

    Development of Facial Expression Classifier using Neural Networks

    A person's emotional and mental well-being, together with their age, sex, and race, can be readily read from their face. Facial expressions play a crucial role in day-to-day social interactions, and an individual's emotional state and behavioural manner can be interpreted from them. Facial expression classification is an evolving, demanding, and intriguing problem in computer vision, with potential applications in robotics, behavioural science, human-computer interaction, video games, etc. It assists in building more intelligent systems with a better ability to interpret human emotions. In this paper, a facial expression classifier based on Convolutional Neural Networks (CNNs) is proposed. CNNs are biologically inspired variants of multi-layer perceptron (MLP) networks, and they use an architecture particularly well suited to classifying images. Detection of facial expression can be enhanced by
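As a minimal illustration of the convolution operation such a CNN builds on (the toy patch and kernel below are hypothetical, not drawn from the paper):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 horizontal-gradient (Sobel) kernel applied to a toy 5x5 "face patch"
# whose intensity ramps linearly left to right.
patch = np.arange(25, dtype=float).reshape(5, 5)
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
print(conv2d(patch, sobel_x))  # constant 8.0 response: uniform slope of 1
```

A trained CNN learns many such kernels from data instead of fixing them by hand.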

    Fusing dynamic deep learned features and handcrafted features for facial expression recognition

    The automated recognition of facial expressions has been actively researched due to its wide-ranging applications. Recent advances in deep learning have improved the performance of facial expression recognition (FER) methods. In this paper, we propose a framework that combines discriminative features learned using convolutional neural networks with handcrafted shape- and appearance-based features to further improve the robustness and accuracy of FER. In addition, texture information is extracted from facial patches to enhance the discriminative power of the extracted features. By encoding shape, appearance, and deep dynamic information, the proposed framework achieves high performance and outperforms state-of-the-art FER methods on the CK+ dataset.

    An end-to-end deep neural network for facial emotion classification

    Facial emotional expression is a nonverbal communication medium in human-human communication, and facial expression recognition (FER) is a significantly challenging task in computer vision. With the advent of deep neural networks, facial expression recognition has transitioned from lab-controlled settings to more natural environments. However, deep neural networks (DNNs) tend to overfit the data and are biased towards the specific categorical distribution of the training set: the number of samples per category is heavily imbalanced, and the overall number of samples falls far short of representing all emotions. In this paper, we propose an end-to-end convolutional self-attention framework for classifying facial emotions. The convolutional neural network (CNN) layers capture the spatial features in a given frame, and a convolutional self-attention mechanism is applied to obtain spatiotemporal features and perform context modelling. The AffectNet database, whose large number of in-the-wild image samples makes it very challenging, is used to validate the framework. The results show a 30% improvement in accuracy over the CNN baseline.
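The abstract does not publish its attention equations; the sketch below shows standard scaled dot-product self-attention over a short sequence of frame features, with random matrices standing in for learned projection weights:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of feature vectors."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Numerically stable softmax over positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v   # each position is a weighted mix of all positions

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # 4 positions, 8-dim CNN features each
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (4, 8)
```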

    A Novel Biometric Key Security System with Clustering and Convolutional Neural Network for WSN

    Developments in wireless communication technologies pave the way for expanding applications and enhancing security in Wireless Sensor Networks (WSNs), where sensor nodes communicate within the same or different clusters. In this work, a novel biometric-key-based security system with an optimized Convolutional Neural Network is proposed to distinguish authorized users from intruders accessing network data and resources. Texture features are extracted from biometrics (fingerprint, retina, and facial expression) to produce a biometric key, which is combined with a pseudorandom function to produce a secured private key for each user. Adaptive Possibilistic C-Means Clustering and Kernel-based Fuzzy C-Means Clustering are each applied to group the sensor nodes into clusters based on the distance between the cluster head and cluster members. A group key obtained from a fuzzy membership function over prime numbers is employed for packet transfer among groups. The three proposed key security schemes are the Fingerprint Key based Security System, the Retina Key based Security System, and the Multibiometric Key based Security System with a neural network for Wireless Sensor Networks. Results obtained from the MATLAB simulator indicate that the multibiometric system with kernel clustering is highly secure: compared to other methods, simulation time is 9% lower, energy consumption is diminished by 20%, delay is reduced by 2%, attack detection rate is improved by 5%, packet delivery ratio increases by 6%, packet loss ratio decreases by 27%, accuracy is enhanced by 2%, and precision is 1% better.
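The abstract leaves the key-derivation details unspecified; the sketch below illustrates the general idea using SHA-256 and HMAC as stand-ins for the unstated hash and pseudorandom function, with invented feature bytes and nonce:

```python
import hashlib
import hmac

# Hypothetical digest of texture features extracted from a user's biometrics
# (fingerprint, retina, facial expression); a real system would quantise
# actual feature vectors rather than hash a placeholder string.
biometric_features = b"fingerprint|retina|face-texture-features"
biometric_key = hashlib.sha256(biometric_features).digest()

# Combine the biometric key with a pseudorandom function (HMAC-SHA256 here,
# as one concrete PRF) keyed per session to derive the user's private key.
session_nonce = b"node-17-session-0001"
private_key = hmac.new(biometric_key, session_nonce, hashlib.sha256).digest()
print(len(private_key))  # 32
```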

    A Review and Analysis of Approaches and Practical Application Areas of Human Emotion Recognition

    Human emotions are complex and multifaceted, making them difficult to quantify and analyze. As technology advances, however, researchers are exploring artificial intelligence to better understand and classify human emotions. In particular, neural networks are becoming increasingly popular for emotion recognition and analysis because of their ability to learn and adapt from large datasets. Objective. This study aims to review and analyze different approaches to, and practical applications of, recognizing human emotions using neural networks. In particular, it examines the different types of neural networks used for emotion recognition, data collection methods, and practical applications of emotion recognition in various fields. The study also aims to identify the limitations and issues associated with emotion recognition using neural networks. Methods. The study used a comprehensive review of relevant literature, including scholarly articles, conference proceedings, and books, focusing on recent research. The collected information was analyzed to identify the different types of neural networks used for emotion recognition and the data collection methods. Results. The literature review revealed several approaches to emotion recognition using neural networks, including convolutional, recurrent, and hybrid neural networks. Practical applications are found in a variety of fields, including marketing, health care, and education. The review also identified limitations and challenges, including dataset bias and the need for more diverse and representative datasets. Conclusion. This review and analysis highlights the potential benefits and challenges associated with human emotion recognition technology based on neural networks. The results can be used to guide future research aimed at improving the accuracy and applicability of emotion recognition in various fields.

    Artificial Intelligence Tools for Facial Expression Analysis.

    Inner emotions show visibly upon the human face and are understood as a basic guide to an individual's inner world. It is therefore possible to determine a person's attitudes, and the effects of others' behaviour on their deeper feelings, by examining facial expressions. In real-world applications, machines that interact with people need strong facial expression recognition. This recognition holds advantages for varied applications in affective computing, advanced human-computer interaction, security, stress and depression analysis, robotic systems, and machine learning. This thesis starts by proposing a benchmark of dynamic versus static methods for facial Action Unit (AU) detection. An AU activation is a set of local, individual facial muscle movements that occur in unison, constituting a natural facial expression event. Detecting AUs automatically can provide explicit benefits since it considers both static and dynamic facial features. For this research, AU occurrence detection was conducted by extracting features (static and dynamic), in both nominal hand-crafted and deep learning representations, from each static image of a video. This confirmed the superior ability of pretrained models, which deliver a leap in performance. Next, temporal modelling was investigated to detect the underlying temporal variation phases in dynamic sequences using supervised and unsupervised methods. During these processes, the importance of stacking dynamic features on top of static ones was discovered when encoding deep features for learning temporal information, combining the spatial and temporal schemes simultaneously. The study also found that fusing spatial and temporal features yields more long-term temporal pattern information. Moreover, we hypothesised that using an unsupervised method would enable the learning of invariant information from dynamic textures.
    Recently, approaches based on Generative Adversarial Networks (GANs) have produced fresh cutting-edge developments. In the second section of this thesis, we propose a model based on an unsupervised DCGAN for facial feature extraction and classification, to achieve the following: the creation of facial expression images under different arbitrary poses (frontal, multi-view, and in the wild), and the recognition of emotion categories and AUs, in an attempt to resolve the problem of recognising the seven static emotion classes in the wild. Thorough cross-database experimentation demonstrates that this approach can improve generalization. Additionally, we showed that the features learnt by the DCGAN are poorly suited to encoding facial expressions when observed under multiple views, or when trained from a limited number of positive examples. Finally, this research focuses on disentangling identity from expression for facial expression recognition. A novel technique was implemented for emotion recognition from a single monocular image. A large-scale dataset (Face vid) was created from facial image videos rich in variations and distribution of facial dynamics, appearance, identities, expressions, and 3D poses. This dataset was used to train a DCNN (ResNet) to regress the expression parameters of a 3D Morphable Model jointly with a back-end classifier.

    Facial expression recognition and intensity estimation.

    Doctoral Degree. University of KwaZulu-Natal, Durban. Facial expression is one of the profound non-verbal channels through which a person's emotional state is inferred from the deformation or movement of face components when facial muscles are activated. Facial Expression Recognition (FER) is one of the relevant research fields in Computer Vision (CV) and Human-Computer Interaction (HCI). Its applications include, but are not limited to, robotics, gaming, medicine, education, security, and marketing. FER carries a wealth of information, and categorising that information into primary emotion states only limits its performance. This thesis investigates an approach that simultaneously predicts the emotional state of facial expression images and the corresponding degree of intensity. The task also extends to resolving FER's ambiguous nature and annotation inconsistencies with a label distribution learning method that considers correlation among the data. We first proposed a multi-label approach for FER and its intensity estimation using advanced machine learning techniques; according to our findings, this approach had not previously been considered for emotion and intensity estimation in the field. The approach used problem transformation to present FER as a multilabel task, such that every facial expression image has unique emotion information alongside the corresponding degree of intensity at which the emotion is displayed. A Convolutional Neural Network (CNN) with a sigmoid function at the final layer is the model's classifier. The model, termed ML-CNN (Multilabel Convolutional Neural Network), successfully achieves concurrent prediction of emotion and intensity. ML-CNN's predictions are challenged by overfitting and by intraclass and interclass variations.
    We employ the Visual Geometry Group 16 (VGG-16) pretrained network to resolve the overfitting challenge, and an aggregation of island loss and binary cross-entropy loss to minimise the effect of intraclass and interclass variations. The enhanced ML-CNN model shows promising results and outperforms other standard multilabel algorithms. Finally, we address data annotation inconsistency and ambiguity in FER data using Isomap manifold learning with Graph Convolutional Networks (GCNs). The GCN uses the distance along the Isomap manifold as the edge weight, which appropriately models the similarity between adjacent nodes for emotion prediction. The proposed method produces promising results in comparison with state-of-the-art methods. The author's list of publications is on page xi of this thesis.
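A minimal numpy sketch of the multilabel output head the thesis describes: sigmoid activations per output unit scored with binary cross-entropy. The 11-unit layout (7 emotion units plus 4 intensity units) and all values below are assumptions for illustration, not the thesis's actual configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Hypothetical final-layer logits for one image: units 0-6 are emotion
# classes, units 7-10 encode the intensity level.
logits = np.array([2.1, -3.0, -2.5, -1.9, -4.0, -2.2, -3.1, -2.0, 2.4, -1.8, -2.6])
probs = sigmoid(logits)

# Multilabel target: one emotion AND one intensity are active at once.
target = np.zeros(11)
target[0] = 1.0   # e.g. the first emotion class ...
target[8] = 1.0   # ... displayed at the second intensity level
loss = binary_cross_entropy(target, probs)
print(np.where(probs > 0.5)[0])  # [0 8]: emotion and intensity predicted jointly
```

This joint activation is what lets a single forward pass predict emotion and intensity concurrently, unlike a softmax head, which forces exactly one active class.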