12 research outputs found

    Using Deep Convolutional Neural Network for Emotion Detection on a Physiological Signals Dataset (AMIGOS)

    Get PDF
    Recommender systems have traditionally been based on context and content; the technological challenge now arises of making personalized recommendations based on the user's emotional state, inferred from physiological signals obtained from devices or sensors. This paper applies a deep learning approach, using a deep convolutional neural network on a dataset of physiological signals (electrocardiogram and galvanic skin response), in this case the AMIGOS dataset. Emotions are detected by correlating these physiological signals with the arousal and valence data of the dataset to classify a person's affective state. In addition, an application for emotion recognition based on classic machine learning algorithms is proposed to extract features of the physiological signals in the time, frequency, and non-linear domains. This application uses a convolutional neural network for automatic feature extraction from the physiological signals and makes the emotion prediction through fully connected network layers. Experimental results on the AMIGOS dataset show that the proposed method achieves better classification accuracy for the emotional states than that originally obtained by the authors of the dataset. This research project is financed by the Government of Colombia, Colciencias, and the Governorate of Boyacá.
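    The pipeline this abstract describes (a convolutional network extracting features from raw physiological signals, followed by fully connected layers for emotion prediction) can be sketched in miniature with NumPy. The filter and layer weights below are random placeholders rather than trained parameters, the signal is synthetic rather than AMIGOS data, and the four output classes are assumed to stand for the arousal/valence quadrants.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution of a signal with a bank of kernels."""
    return np.stack([np.convolve(x, w[::-1], mode="valid") for w in kernels])

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical input: 10 s of ECG sampled at 128 Hz (synthetic here).
signal = rng.standard_normal(1280)

kernels = rng.standard_normal((8, 16)) * 0.1   # 8 "learned" filters (random stand-ins)
W = rng.standard_normal((4, 8)) * 0.1          # dense layer: 8 features -> 4 classes
b = np.zeros(4)

feat = relu(conv1d(signal, kernels)).mean(axis=1)   # global average pooling over time
probs = softmax(W @ feat + b)                        # assumed arousal/valence quadrants
```

    In a trained model the kernels and dense weights would come from backpropagation; the global average pooling step is one common way to collapse the time axis into a fixed-size feature vector.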

    Emotion Recognition with Machine Learning Using EEG Signals

    Full text link
    In this research, an emotion recognition system is developed based on the valence/arousal model using electroencephalography (EEG) signals. EEG signals are decomposed into the gamma, beta, alpha, and theta frequency bands using the discrete wavelet transform (DWT), and spectral features are extracted from each frequency band. Principal component analysis (PCA) is applied to the extracted features, preserving the same dimensionality, as a transform that makes the features mutually uncorrelated. Support vector machine (SVM), K-nearest neighbor (KNN), and artificial neural network (ANN) classifiers are used to classify emotional states. The cross-validated SVM with a radial basis function (RBF) kernel, using extracted features of 10 EEG channels, performs with 91.3% accuracy for arousal and 91.1% accuracy for valence, both in the beta frequency band. Our approach shows better performance compared to existing algorithms applied to the "DEAP" dataset.
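    The band decomposition step can be illustrated with a minimal DWT sketch. The abstract does not name a mother wavelet, so a Haar wavelet is assumed here for brevity; for EEG sampled at 128 Hz, the successive detail coefficients correspond roughly to the gamma (32–64 Hz), beta (16–32 Hz), alpha (8–16 Hz), and theta (4–8 Hz) bands, from which a simple power feature is computed.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform."""
    x = x[: len(x) // 2 * 2]                 # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)     # approximation (low-pass half)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # detail (high-pass half)
    return a, d

def band_powers(eeg):
    """Mean power of successive DWT detail bands, assuming fs = 128 Hz:
    D1 ~ gamma (32-64 Hz), D2 ~ beta (16-32 Hz),
    D3 ~ alpha (8-16 Hz), D4 ~ theta (4-8 Hz)."""
    powers = {}
    a = eeg
    for name in ("gamma", "beta", "alpha", "theta"):
        a, d = haar_dwt(a)
        powers[name] = np.mean(d ** 2)
    return powers

rng = np.random.default_rng(1)
eeg = rng.standard_normal(1280)   # 10 s of one synthetic EEG channel at 128 Hz
powers = band_powers(eeg)
```

    In the paper's full pipeline these spectral features would then be decorrelated with PCA and fed to the SVM/KNN/ANN classifiers.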

    Emoticon Generation, Expression Recognition, and Gender Classification Using Deep Learning in Real-Time

    Get PDF
    Images play an increasingly important role in identifying a person's gender and emotional state in today's digital environment, but there are still methodological hurdles to overcome. Image processing using deep learning algorithms is the way to go. Our study's overarching goal is to find ways to bridge the communication gap through the use of emoticons based on the emotions conveyed in photographs and snapshots. We have used the Keras framework to implement a deep learning algorithm called a Convolutional Neural Network (CNN) and evaluated it using TensorFlow to predict gender. The goal is to create a new dataset of pictures free of noise and then use those images as inputs to the CNN. The algorithm is expected to yield more trustworthy gender identification through increased accuracy. We have implemented an LSTM-RNN (long short-term memory recurrent neural network) for emotion identification and facial expression detection. Feature selection is the most crucial step, since it will ultimately aid in emoticon generation.
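    The LSTM-RNN mentioned for emotion identification can be reduced to its core recurrence. The sketch below is a single NumPy LSTM cell stepped over a sequence; the weights are random stand-ins, not the authors' trained Keras model, and the input dimension and sequence length are arbitrary assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(2)
D, H, T = 6, 8, 20                         # feature dim, hidden size, sequence length
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for _ in range(T):                         # step over a sequence of frame features
    h, c = lstm_step(rng.standard_normal(D), h, c, W, U, b)
```

    The final hidden state `h` is what a classification head would consume to predict the expressed emotion.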

    Classification of Physiological Signals for Emotion Recognition using IoT

    Get PDF
    Emotion recognition has gained huge popularity nowadays. Physiological signals provide an appropriate way to detect human emotion with the help of IoT. In this paper, a novel system is proposed that is capable of determining emotional status from physiological parameters, including the design specification and software implementation of the system. This system may have vivid uses in medicine (especially for emotionally challenged people), smart homes, etc. The physiological parameters to be measured include heart rate (HR), galvanic skin response (GSR), skin temperature, etc. To construct the proposed system, the measured physiological parameters were fed to neural networks, which classify the data into various emotional states, mainly anger, happiness, sadness, and joy. This work recognized the correlation between human emotions and changes in physiological parameters with respect to emotion.
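    The classification step (a small neural network mapping HR, GSR, and skin temperature to a handful of emotion classes) can be sketched as a two-layer MLP forward pass. The weights are random placeholders, the input values are illustrative rather than real sensor readings, and in practice the features would be standardised before use.

```python
import numpy as np

LABELS = ("anger", "happy", "sad", "joy")

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(features, W1, b1, W2, b2):
    """Two-layer MLP mapping physiological features to emotion probabilities."""
    hidden = np.tanh(W1 @ features + b1)
    return softmax(W2 @ hidden + b2)

rng = np.random.default_rng(3)
W1 = rng.standard_normal((5, 3)) * 0.5   # 3 inputs: HR, GSR, skin temperature
b1 = np.zeros(5)
W2 = rng.standard_normal((4, 5)) * 0.5   # 4 outputs: one per emotion class
b2 = np.zeros(4)

x = np.array([0.4, -1.2, 0.7])           # hypothetical standardised reading
probs = classify(x, W1, b1, W2, b2)
pred = LABELS[int(np.argmax(probs))]
```

    In the deployed IoT system, `x` would be filled from the sensor board each sampling interval and the trained weights would replace the random ones above.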

    Sentiment analysis in non-fixed length audios using a Fully Convolutional Neural Network

    Get PDF
    In this work, a sentiment analysis method that is capable of accepting audio of any length, without the length being fixed a priori, is proposed. The Mel spectrogram and Mel Frequency Cepstral Coefficients are used as audio description methods, and a Fully Convolutional Neural Network architecture is proposed as a classifier. The results have been validated using three well-known datasets: EMODB, RAVDESS, and TESS. The results obtained were promising, outperforming state-of-the-art methods. Also, because the proposed method admits audios of any size, it allows sentiment analysis to be made in near real time, which is very interesting for a wide range of fields such as call centers, medical consultations, or financial brokers.
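    The key property claimed (accepting audio of any length) comes from combining a time-frequency representation with global pooling, so the classifier's output size never depends on the input duration. The sketch below uses a plain log-magnitude STFT as a stand-in for the paper's Mel spectrogram (no Mel filterbank is applied), with hypothetical frame sizes; it only demonstrates the fixed-size-output mechanism.

```python
import numpy as np

def log_spectrogram(audio, n_fft=256, hop=128):
    """Log-magnitude STFT as a stand-in for a Mel spectrogram."""
    frames = [audio[i:i + n_fft] for i in range(0, len(audio) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames) * np.hanning(n_fft), axis=1))
    return np.log1p(spec)                 # shape: (n_frames, n_fft // 2 + 1)

def fixed_size_features(audio):
    """Global average pooling over time: any duration maps to one vector."""
    return log_spectrogram(audio).mean(axis=0)

rng = np.random.default_rng(4)
short = fixed_size_features(rng.standard_normal(4000))    # ~0.25 s at 16 kHz
long_ = fixed_size_features(rng.standard_normal(48000))   # ~3 s at 16 kHz
```

    Both calls return a vector of the same dimensionality, which is what lets a fully convolutional classifier handle non-fixed-length audio.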

    A user-centered approach for detecting emotions with low-cost sensors

    Get PDF
    Detecting emotions is very useful in many fields, from healthcare to human-computer interaction. In this paper, we propose an iterative user-centered methodology for supporting the development of an emotion detection system based on low-cost sensors. Artificial Intelligence techniques have been adopted for emotion classification. Different kinds of Machine Learning classifiers have been experimentally trained on the users' biometric data, such as heart rate, movement, and audio. The system has been developed in two iterations and, at the end of each, the performance of the classifiers (MLP, CNN, LSTM, Bidirectional-LSTM, and Decision Tree) has been compared. After the experiment, the SAM questionnaire is administered to evaluate the user's affective state when using the system. In the first experiment we gathered data from 47 participants; in the second, an improved version of the system was trained and validated by 107 people. The emotional analysis conducted at the end of each iteration suggests that reducing the device's invasiveness may affect user perceptions and also improve classification performance.

    CorrNet: Fine-grained emotion recognition for video watching using wearable physiological sensors

    Get PDF
    Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (a fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1 and 4 s result in the highest recognition accuracies; (2) accuracies of laboratory-grade and wearable sensors are comparable, even under low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
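    The two feature families the abstract names can be sketched concretely: signals are cut into fixed-length instances, simple statistics are computed inside each instance (intra-modality features), and each instance is correlated against the stimulus-wide pattern (a correlation-based feature). The segment length, sampling rate, and choice of statistics below are illustrative assumptions, not CorrNet's actual architecture, which is a learned network.

```python
import numpy as np

def segment(signal, fs, seg_s=2.0):
    """Split a signal into fixed-length instances (fine-grained segments)."""
    n = int(fs * seg_s)
    k = len(signal) // n
    return signal[: k * n].reshape(k, n)

def intra_features(inst):
    """Simple intra-modality features per instance: mean, std, range."""
    return np.stack([inst.mean(axis=1), inst.std(axis=1),
                     np.ptp(inst, axis=1)], axis=1)

def correlation_features(inst):
    """Correlation of each instance with the stimulus-wide mean instance."""
    ref = inst.mean(axis=0)
    return np.array([np.corrcoef(row, ref)[0, 1] for row in inst])

rng = np.random.default_rng(5)
eda = rng.standard_normal(4 * 60)          # 60 s of synthetic EDA at 4 Hz
inst = segment(eda, fs=4, seg_s=2.0)       # 30 instances of 8 samples each
feats = np.hstack([intra_features(inst),
                   correlation_features(inst)[:, None]])
```

    Each row of `feats` would then receive its own high/low V-A label, which is what makes the recognition fine-grained rather than one label per video.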