93,153 research outputs found

    Effectiveness of Using Artificial Intelligence for Early Child Development Screening

    This study presents a novel approach to recognizing emotions in infants using machine learning models. To address the lack of infant-specific datasets, a custom dataset of infants' faces was created by extracting images from the AffectNet dataset. The dataset was then used to train various machine learning models with different parameters. The best-performing model was evaluated on the City Infant Faces dataset. The proposed deep learning model achieved an accuracy of 94.63% in recognizing positive, negative, and neutral facial expressions. These results provide a benchmark for the performance of machine learning models in infant emotion recognition and suggest potential applications in developing emotion-sensitive technologies for infants. This study fills a gap in the literature on emotion recognition, which has largely focused on adults or children, and highlights the importance of developing infant-specific datasets and evaluating different parameters to achieve accurate results.
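
    As an illustration of the kind of three-class model the study describes, the sketch below shows a small facial-expression CNN for positive, negative, and neutral labels. It is written in PyTorch as an assumption; the input size, layer widths, and training setup are not taken from the paper.

# Minimal sketch of a 3-class facial-expression CNN (positive/negative/neutral),
# assuming 48x48 grayscale face crops; the paper's exact architecture and
# training setup are not specified here.
import torch
import torch.nn as nn

class InfantExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = InfantExpressionCNN()
logits = model(torch.randn(8, 1, 48, 48))   # dummy batch of 8 face crops
print(logits.shape)                         # torch.Size([8, 3])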

    Emotion Recognition using AutoEncoders and Convolutional Neural Networks

    Emotions demonstrate people's reactions to certain stimuli, and facial expression analysis is often used to identify the emotion being expressed. Machine learning algorithms combined with artificial intelligence techniques have been developed to detect expressions found in multimedia elements, including videos and pictures. Advanced methods to achieve this include the use of deep learning algorithms. The aim of this paper is to analyze the performance of a Convolutional Neural Network that uses AutoEncoder units for emotion recognition in human faces. The combination of these two deep learning techniques boosts the performance of the classification system. 8,000 facial expressions from the Radboud Faces Database were used during this research for both training and testing. The results showed that five of the eight analyzed emotions reached accuracy rates above 90%.
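
    A rough sketch of the AutoEncoder-plus-CNN idea is given below, written in PyTorch as an assumption: a convolutional autoencoder is first trained for reconstruction, and its encoder is then reused as the feature extractor of an eight-class emotion classifier. The 64x64 input and all layer sizes are illustrative, not the paper's.

# Sketch: convolutional autoencoder whose encoder is reused for classification.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: train the autoencoder on face images with a reconstruction loss (MSE).
# Stage 2: reuse the learned encoder as the front end of the emotion classifier.
ae = ConvAutoencoder()
classifier = nn.Sequential(
    ae.encoder,                        # (pre)trained convolutional features
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, 8),                 # 8 emotion classes
)
print(classifier(torch.randn(4, 1, 64, 64)).shape)  # torch.Size([4, 8])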

    Emotion Recognition Based on Deep Learning with Autoencoder

    Facial expression is one way of expressing emotions, and face emotion recognition is one of the important and major fields of research in computer vision. It remains a unique and challenging research area because it can be combined with a variety of methods, one of which is deep learning. Deep learning is popular in this research area because it can process large amounts of data and automatically learn features from raw data such as face images. Of its several methods, this study uses the convolutional neural network (CNN). The study also uses a convolutional autoencoder (CAE) to explore the advantages it may offer over previous approaches. CAEs are typically used for image reconstruction and image de-noising, but here we explore using a CAE for classification together with a CNN: the input data are first processed by the CAE and then classified by the CNN. The face emotion recognition model uses the Karolinska Directed Emotional Faces (KDEF) dataset of 4,900 images, split into two groups: 80% for training and 20% for testing. The KDEF data consist of 7 emotion classes captured from 5 angles for 70 different people. The test results showed an accuracy of 81.77%.
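
    The two-stage CAE-plus-CNN procedure described above might look roughly like the following sketch, with deliberately tiny stand-in modules, dummy data in place of KDEF, the stated 80/20 split, and placeholder hyperparameters (PyTorch is an assumption).

# Stage 1: train a convolutional autoencoder with a reconstruction loss.
# Stage 2: train a classifier on top of the (here frozen) encoder output.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

# Dummy stand-in for KDEF: 4,900 images, 7 emotion labels, 80/20 split.
images = torch.rand(4900, 1, 64, 64)
labels = torch.randint(0, 7, (4900,))
train_set, test_set = random_split(TensorDataset(images, labels), [3920, 980])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

encoder = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU())            # 64 -> 32
decoder = nn.Sequential(nn.ConvTranspose2d(16, 1, 2, 2), nn.Sigmoid())   # 32 -> 64
cnn_classifier = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 7))

# Stage 1: reconstruction pretraining of the CAE.
ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
for x, _ in train_loader:
    ae_opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    ae_opt.step()

# Stage 2: supervised classification on the CAE-encoded inputs.
clf_opt = torch.optim.Adam(cnn_classifier.parameters())
for x, y in train_loader:
    clf_opt.zero_grad()
    loss = nn.functional.cross_entropy(cnn_classifier(encoder(x).detach()), y)
    loss.backward()
    clf_opt.step()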

    EmoNets: Multimodal deep learning approaches for emotion recognition in video

    The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means based "bag-of-mouths" model which extracts visual features around the mouth region, and a relational autoencoder which addresses spatio-temporal aspects of videos. We explore multiple methods for combining the cues from these modalities into one common classifier, which achieves considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67% on the 2014 dataset.
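
    A simple way to combine cues from several modality-specific classifiers is late fusion of their class-probability outputs. The sketch below uses fixed, hypothetical weights and hypothetical per-modality probabilities; it does not reproduce the actual fusion strategies explored for EmotiW.

# Illustrative late fusion: weighted average of per-modality class probabilities.
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def fuse_predictions(modality_probs: dict, weights: dict) -> np.ndarray:
    """Weighted average of per-modality probability vectors over 7 emotions."""
    total = sum(weights.values())
    return sum(weights[m] * p for m, p in modality_probs.items()) / total

# Hypothetical outputs from the face CNN, audio DBN, and bag-of-mouths model.
probs = {
    "face_cnn":      np.array([0.05, 0.02, 0.03, 0.70, 0.05, 0.10, 0.05]),
    "audio_dbn":     np.array([0.10, 0.05, 0.05, 0.50, 0.10, 0.10, 0.10]),
    "bag_of_mouths": np.array([0.08, 0.04, 0.08, 0.55, 0.05, 0.10, 0.10]),
}
weights = {"face_cnn": 0.5, "audio_dbn": 0.3, "bag_of_mouths": 0.2}

fused = fuse_predictions(probs, weights)
print(EMOTIONS[int(np.argmax(fused))])   # -> "happy"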

    Using Photorealistic Face Synthesis and Domain Adaptation to Improve Facial Expression Analysis

    Synthesizing realistic faces across domains to learn deep models has attracted increasing attention in facial expression analysis, as it helps to improve expression recognition accuracy despite the small number of real training images available. However, learning from synthetic face images can be problematic due to the distribution discrepancy between low-quality synthetic images and real face images, and may not achieve the desired performance when the learned model is applied to real-world scenarios. To this end, we propose a new attribute-guided face image synthesis method that performs translation between multiple image domains using a single model. In addition, we adopt the proposed model to learn from synthetic faces by matching the feature distributions between different domains while preserving each domain's characteristics. We evaluate the effectiveness of the proposed approach on several face datasets in terms of generating realistic face images, and demonstrate that expression recognition performance can be enhanced by our face synthesis model. Moreover, we conduct experiments on a near-infrared dataset containing facial expression videos of drivers to assess the performance on in-the-wild data for driver emotion recognition. (Comment: 8 pages, 8 figures, 5 tables, accepted by FG 2019. arXiv admin note: substantial text overlap with arXiv:1905.0028)
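
    One common way to match feature distributions between domains is to penalize a maximum mean discrepancy (MMD) between real and synthetic features from a shared encoder. The sketch below illustrates that general idea only; it is not claimed to be the loss used in this paper, and the feature dimensions and weighting are assumptions.

# Sketch of an MMD penalty between real and synthetic feature batches.
import torch

def gaussian_mmd(feat_a: torch.Tensor, feat_b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased MMD^2 estimate between two batches of feature vectors."""
    def kernel(x, y):
        dists = torch.cdist(x, y) ** 2
        return torch.exp(-dists / (2 * sigma ** 2))
    return kernel(feat_a, feat_a).mean() + kernel(feat_b, feat_b).mean() - 2 * kernel(feat_a, feat_b).mean()

# Hypothetical usage: features of real and synthetic faces from a shared encoder.
real_feats = torch.randn(32, 256)
synth_feats = torch.randn(32, 256) + 0.5       # deliberately shifted distribution
domain_loss = gaussian_mmd(real_feats, synth_feats)
# total_loss = classification_loss + lambda_mmd * domain_loss   (lambda_mmd is a tunable weight)
print(domain_loss.item())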

    A hybrid deep learning neural approach for emotion recognition from facial expressions for socially assistive robots

    © 2018, The Natural Computing Applications Forum. We have recently seen significant advancements in the development of robotic machines designed to assist people in their daily lives. Socially assistive robots are now able to perform a number of tasks autonomously and without human supervision. However, if these robots are to be accepted by human users, there is a need to focus on forms of human–robot interaction that such users find acceptable. In this paper, we extend our previous work, originally presented in Ruiz-Garcia et al. (in: Engineering applications of neural networks: 17th international conference, EANN 2016, Aberdeen, UK, September 2–5, 2016, proceedings, pp 79–93, 2016. https://doi.org/10.1007/978-3-319-44188-7_6), to provide emotion recognition from human facial expressions for application on a real-time robot. We expand on the previous work by presenting a new hybrid deep learning emotion recognition model and preliminary results obtained with this model for real-time emotion recognition performed by our humanoid robot. The hybrid emotion recognition model combines a deep Convolutional Neural Network (CNN) for self-learnt feature extraction and a Support Vector Machine (SVM) for emotion classification. Compared to more complex approaches that use more layers in the convolutional model, this hybrid deep learning model produces a state-of-the-art classification rate of 96.26% when tested on the Karolinska Directed Emotional Faces dataset (Lundqvist et al. in The Karolinska Directed Emotional Faces - KDEF, 1998), and offers similar performance on unseen data when tested on the Extended Cohn–Kanade dataset (Lucey et al. in: Proceedings of the third international workshop on CVPR for human communicative behaviour analysis (CVPR4HB 2010), San Francisco, USA, pp 94–101, 2010). This architecture also takes advantage of batch normalisation (Ioffe and Szegedy in Batch normalization: accelerating deep network training by reducing internal covariate shift. http://arxiv.org/abs/1502.03167, 2015) for fast learning from a smaller number of training samples. A comparison between Gabor filters and CNN for feature extraction, and between SVM and multilayer perceptron for classification, is also provided.
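
    The hybrid pipeline, a CNN for self-learnt feature extraction followed by an SVM for emotion classification, can be sketched as follows. The layer sizes and the random stand-in data are illustrative, and the feature extractor here is untrained, whereas in practice the CNN would be trained before its features are passed to the SVM.

# Sketch: CNN (with batch normalisation) extracts features, an SVM classifies them.
import torch
import torch.nn as nn
from sklearn.svm import SVC

feature_extractor = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 64-dim feature vector per face
)

# Hypothetical training data: 200 face crops with 7 emotion labels.
faces = torch.rand(200, 1, 64, 64)
labels = torch.randint(0, 7, (200,)).numpy()

with torch.no_grad():
    feats = feature_extractor(faces).numpy()        # self-learnt CNN features

svm = SVC(kernel="rbf")                             # final emotion classifier
svm.fit(feats, labels)
print(svm.predict(feats[:5]))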

    Non-acted multi-view audio-visual dyadic Interactions. Project master thesis: multi-modal local and recurrent non-verbal emotion recognition in dyadic scenarios

    Final thesis of the Master in Fundamentals of Data Science, Faculty of Mathematics, Universitat de Barcelona, Year: 2019, Advisors: Sergio Escalera Guerrero and Cristina Palmero. In particular, this master thesis focuses on the development of a baseline emotion recognition system in a dyadic environment, using raw and handcrafted audio features and cropped faces from the videos. This system is analyzed at frame and utterance level, with and without temporal information. For this reason, an exhaustive study of the state of the art in emotion recognition techniques has been conducted, paying particular attention to deep learning techniques for emotion recognition. While studying the state of the art from the theoretical point of view, a dataset consisting of videos of sessions of dyadic interactions between individuals in different scenarios was recorded. Different attributes were captured and labelled from these videos: body pose, hand pose, emotion, age, gender, etc. Once the architectures for emotion recognition had been trained with another dataset, a proof of concept was carried out with this new database in order to draw conclusions. In addition, this database can help future systems achieve better results. A large number of experiments with audio and video are performed to create the emotion recognition system. The IEMOCAP database is used to perform the training and evaluation experiments of the emotion recognition system. Once the audio and video models are trained separately with two different architectures, a fusion of both methods is performed. In this work, the importance of preprocessing the data (i.e. face detection, analysis window length, handcrafted features, etc.) and of choosing the correct parameters for the architectures (i.e. network depth, fusion, etc.) has been demonstrated and studied, while some experiments are performed with recurrent models to study the influence of temporal information in spatiotemporal utterance-level recognition of emotion. Finally, the conclusions drawn throughout this work are presented, as well as possible lines of future work, including new systems for emotion recognition and experiments with the database recorded in this work.
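
    An utterance-level recurrent fusion of frame-level audio and video features might be sketched as below. The feature dimensions, the single LSTM, and the four-class setup (a common choice in IEMOCAP experiments) are assumptions rather than the thesis's exact architecture.

# Sketch: concatenate per-frame video and audio features, run an LSTM,
# and classify the utterance from the last hidden state.
import torch
import torch.nn as nn

class RecurrentAVFusion(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, hidden=256, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(video_dim + audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video_seq, audio_seq):
        # video_seq: (batch, time, video_dim), audio_seq: (batch, time, audio_dim)
        fused = torch.cat([video_seq, audio_seq], dim=-1)
        _, (h_n, _) = self.lstm(fused)
        return self.head(h_n[-1])                   # utterance-level emotion logits

model = RecurrentAVFusion()
logits = model(torch.randn(2, 50, 512), torch.randn(2, 50, 128))
print(logits.shape)                                 # torch.Size([2, 4])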