444 research outputs found

    A practical implementation of a deep neural network for facial emotion recognition

    People's emotions are rarely put into words; far more often, they are expressed through other cues. The key to intuiting another's feelings lies in the ability to read nonverbal channels: tone of voice, gesture, facial expression, and the like. Facial expressions are used by humans to convey various types of meaning in a variety of contexts. The range of meanings extends from basic, probably innate, social-emotional concepts such as "surprise" to complex, culture-specific concepts such as "neglect". The range of contexts in which humans use facial expressions extends from responses to events in the environment to specific linguistic constructs in sign languages. In this paper, we use an artificial neural network to classify each image into seven facial emotion classes. The model is trained on the FER+ image database, which we assume is large and diverse enough to indicate which model parameters are generally preferable. The overall results show that the CNN model is efficient enough to classify images according to emotional state, even in real time.
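
    As a rough illustration of the kind of model described above, the sketch below defines a small CNN for classifying 48x48 grayscale face images (the FER+ format) into seven emotion classes. The layer sizes are assumptions chosen for illustration, not the architecture used in the paper.

```python
# Illustrative sketch only: a small CNN for 7-class facial emotion
# classification on 48x48 grayscale images (the FER+ image format).
# The layer sizes are assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 24x24 -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),          # logits for 7 emotions
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: a batch of 48x48 grayscale face crops -> 7 emotion logits.
logits = EmotionCNN()(torch.randn(8, 1, 48, 48))
print(logits.shape)  # torch.Size([8, 7])
```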

    Automatic Speech Emotion Recognition Using Machine Learning

    This chapter presents a comparative study of speech emotion recognition (SER) systems. Theoretical definitions, the categorization of affective states, and the modalities of emotion expression are presented. To achieve this study, an SER system based on different classifiers and different feature extraction methods is developed. Mel-frequency cepstrum coefficients (MFCC) and modulation spectral (MS) features are extracted from the speech signals and used to train different classifiers. Feature selection (FS) is applied in order to seek the most relevant feature subset. Several machine learning paradigms are used for the emotion classification task. A recurrent neural network (RNN) classifier is used first to classify seven emotions. Its performance is later compared to multivariate linear regression (MLR) and support vector machine (SVM) techniques, which are widely used in the field of emotion recognition for spoken audio signals. The Berlin and Spanish databases are used as the experimental data sets. This study shows that, for the Berlin database, all classifiers achieve an accuracy of 83% when speaker normalization (SN) and feature selection are applied to the features. For the Spanish database, the best accuracy (94%) is achieved by the RNN classifier without SN and with FS.
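
    A minimal sketch of one of the pipelines described above, MFCC features fed to an SVM, is shown below. The file names, labels, and sampling rate are placeholders; the modulation spectral features, feature selection, MLR, and RNN classifiers covered in the chapter are not reproduced here.

```python
# Illustrative sketch only: MFCC features + an SVM classifier, one of the
# pipelines described in the abstract. File paths and labels are placeholders;
# the chapter also uses modulation spectral features, MLR, and RNN classifiers.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def mfcc_vector(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load a speech file and summarize its MFCCs as a fixed-length vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training data: lists of wav paths and emotion labels.
train_files = ["anger_01.wav", "joy_01.wav"]   # placeholders
train_labels = ["anger", "joy"]

X = np.stack([mfcc_vector(f) for f in train_files])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, train_labels)
print(clf.predict(np.stack([mfcc_vector("test_utterance.wav")])))
```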

    Customer’s spontaneous facial expression recognition

    In the field of consumer science, customer facial expression is often categorized as either negative or positive. A customer who shows negative emotion toward a specific product most likely rejects the product, while a customer with positive emotion is more likely to purchase it. To observe customer emotion, many researchers have studied different perspectives and methodologies to obtain high-accuracy results. A convolutional neural network (CNN) is used to recognize customers' spontaneous facial expressions. This paper aims to recognize customers' spontaneous expressions while the customer observes certain products. We have developed a customer service system using a CNN that is trained to detect three types of facial expression, i.e. happy, sad, and neutral. Facial features are extracted together with histogram-of-gradient features over a sliding window. The results are then compared with existing works, showing an average success rate of 82.9%.
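
    The sketch below illustrates the histogram-of-oriented-gradients (HOG) plus sliding-window feature extraction mentioned above, with a linear SVM standing in as a placeholder classifier; the paper itself feeds such features into a CNN, and the window geometry and data here are assumptions.

```python
# Illustrative sketch only: HOG features extracted over a sliding window on a
# face image. Window size and step are assumptions; a linear SVM stands in as
# a placeholder for the CNN used in the paper.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WINDOW, STEP = 64, 32  # assumed sliding-window geometry

def sliding_window_hog(gray: np.ndarray):
    """Yield (row, col, hog_vector) for each window over a grayscale image."""
    h, w = gray.shape
    for r in range(0, h - WINDOW + 1, STEP):
        for c in range(0, w - WINDOW + 1, STEP):
            patch = gray[r:r + WINDOW, c:c + WINDOW]
            feat = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), block_norm="L2-Hys")
            yield r, c, feat

# Usage with random arrays standing in for face crops labelled
# happy / sad / neutral (placeholders, not the paper's dataset).
rng = np.random.default_rng(0)
faces = rng.random((6, 128, 128))
labels = ["happy", "sad", "neutral"] * 2
X = [np.concatenate([f for _, _, f in sliding_window_hog(img)]) for img in faces]
clf = LinearSVC().fit(X, labels)
print(clf.predict(X[:1]))
```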

    A Framework for Students Profile Detection

    Some of the biggest problems facing Higher Education Institutions are student drop-out and academic disengagement. Physical or psychological disabilities, socio-economic or academic marginalization, and emotional and affective problems are some of the factors that can lead to them. These problems are worsened by the shortage of educational resources that could bridge the communication gap between the faculty staff and the affective needs of these students. This dissertation focuses on the development of a framework capable of collecting analytic data on an array of emotions, affects, and behaviours, acquired either through human observation, such as by a teacher in a classroom or a psychologist, or through electronic sensors and automatic analysis software, such as eye-tracking devices, emotion detection through facial expression recognition software, automatic gait and posture detection, and others. The framework provides guidance for compiling the gathered data into an ontology, enabling the extraction of patterns and outliers via machine learning, which assists in profiling students in critical situations such as disengagement, attention deficit, drop-out, and other sociological issues. Consequently, it is possible to set real-time alerts when these profile conditions are detected, so that appropriate experts can verify the situation and employ effective procedures. The goal is that, by providing insightful real-time cognitive data and facilitating the profiling of students' problems, a faster personalized response to help the student is enabled, allowing improvements in academic performance.
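
    As a rough sketch of the outlier-extraction step described above, the example below flags atypical student profiles with an IsolationForest. The feature names and contamination rate are hypothetical, and this is only one possible outlier-detection choice, not the dissertation's ontology-backed pipeline.

```python
# Illustrative sketch only: flagging outlier student profiles from aggregated
# affective/behavioural indicators. Feature names are hypothetical and
# IsolationForest is one possible outlier-detection choice, not the
# dissertation's own method.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: students; columns: hypothetical aggregated indicators such as
# [attention_score, negative_affect_ratio, absence_rate].
profiles = np.array([
    [0.82, 0.10, 0.05],
    [0.78, 0.15, 0.08],
    [0.80, 0.12, 0.04],
    [0.25, 0.70, 0.40],   # a disengaged-looking profile
])

detector = IsolationForest(contamination=0.25, random_state=0).fit(profiles)
flags = detector.predict(profiles)        # -1 marks an outlier profile
for student, flag in zip(profiles, flags):
    if flag == -1:
        print("alert: atypical profile", student)
```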

    Emotion-aware cross-modal domain adaptation in video sequences


    Human Activity Recognition in a Car with Embedded Devices

    Detection and prediction of drowsiness is key for the implementation of intelligent vehicles aimed at preventing highway crashes. There are several approaches to such a solution. In this paper, the computer vision approach is analysed, where embedded devices (e.g. video cameras) are used along with machine learning and pattern recognition techniques to implement suitable solutions for detecting driver fatigue. Most of the research on computer vision systems has focused on the analysis of blinks; this is a notable solution when it is combined with additional patterns, such as yawning or head motion, for the recognition of drowsiness. The first step in this approach is face recognition, where the AdaBoost algorithm shows accurate results for feature extraction, whereas for the detection of drowsiness, data-driven classifiers such as the Support Vector Machine (SVM) yield remarkable results. One underlying component for implementing computer vision technology for the detection of drowsiness is a database of spontaneous images coded with the Facial Action Coding System (FACS), on which the classifier can be trained. This paper introduces a straightforward prototype for the detection of drowsiness, where the Viola-Jones method is used for face recognition and a cascade classifier is used to detect a contiguous sequence of closed eyes, which is considered an indicator of drowsiness.
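
    A minimal sketch of such a prototype is shown below, using OpenCV's Haar cascades for face and eye detection and flagging drowsiness when no open eyes are found over a run of consecutive frames. The frame threshold and camera index are assumptions, not values from the paper.

```python
# Illustrative sketch only: Viola-Jones face detection plus a Haar eye cascade,
# flagging drowsiness when no open eyes are found in N consecutive frames.
# The threshold and camera index are assumptions, not values from the paper.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

CLOSED_FRAMES_THRESHOLD = 15  # assumed number of consecutive eyeless frames
closed_count = 0

cap = cv2.VideoCapture(0)  # embedded camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    eyes_found = False
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        if len(eye_cascade.detectMultiScale(roi)) > 0:
            eyes_found = True
    closed_count = 0 if eyes_found else closed_count + 1
    if closed_count >= CLOSED_FRAMES_THRESHOLD:
        print("drowsiness warning: eyes closed over a contiguous sequence of frames")
        closed_count = 0
cap.release()
```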

    Analysis of facial expressions in children: Experiments based on the DB Child Affective Facial Expression (CAFE)

    Analysis of facial expressions in children 2 to 8 years old, and identification of emotions.