Emotion recognition using cvxEDA-based features

Abstract

The MAHNOB-HCI database provides baselines for several modalities, but not for all of them. To date, there is no baseline for valence and arousal recognition using the EDA signal. Because EDA is one of the important signals in affect recognition, a baseline accuracy for this signal is needed. Applying cvxEDA, an EDA analysis tool based on convex optimization, to the GSR signals yielded the phasic component, the tonic component, and the sudomotor nerve activity (SMNA) phasic driver. Two sets of features were extracted: features from the stimulated stage only, and that set augmented with the ratios of the stimulated-stage features to the relaxation-stage features. Using kNN to solve the 3-class problem, the best accuracies under the subject-dependent scenario were 74.6 ± 3.8% and 77.3 ± 3.6% for valence and arousal respectively, while the subject-independent scenario yielded 75.5 ± 7.7% and 77.8 ± 8.0%. Validation using leave-one-out (LOO) cross-validation gave 75.2% and 77.7% for valence and arousal respectively. The cvxEDA method looks promising for extracting features from EDA, as the results exceeded the best results in the original database baseline. Future work includes using other feature extraction methods, improving accuracy through supervised dimensionality reduction, and applying other classifiers.
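
As an illustration of the pipeline described in the abstract, the sketch below decomposes an EDA trial with the cvxEDA reference implementation, builds the two feature sets (stimulated-stage statistics, and those statistics plus their ratios to relaxation-stage statistics), and evaluates a kNN classifier with leave-one-out cross-validation. The module name cvxEDA, the pooled statistics, and the neighbour count are assumptions made for illustration, not the paper's exact feature set or parameters.

    # Minimal sketch, assuming the reference Python implementation of cvxEDA
    # exposes cvxEDA(y, delta) returning phasic, SMNA driver, and tonic
    # components; the pooled statistics below are illustrative placeholders.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def component_features(eda, fs=128.0):
        """Decompose one (z-normalised) EDA trial and pool simple statistics
        from the phasic, SMNA-driver, and tonic components."""
        from cvxEDA import cvxEDA            # assumed module/function name
        r, p, t, *_ = cvxEDA(eda, 1.0 / fs)  # phasic, SMNA driver, tonic
        feats = []
        for comp in (r, p, t):
            comp = np.asarray(comp).ravel()
            feats += [comp.mean(), comp.std(), comp.max(), comp.min()]
        return np.array(feats)

    def stimulated_plus_ratio(stim_eda, relax_eda, fs=128.0):
        """Second feature set: stimulated-stage features plus their ratios to
        the relaxation-stage features (small epsilon avoids division by zero)."""
        f_stim = component_features(stim_eda, fs)
        f_relax = component_features(relax_eda, fs)
        return np.concatenate([f_stim, f_stim / (f_relax + 1e-8)])

    def loo_accuracy(X, y, k=5):
        """3-class valence or arousal recognition with kNN, validated with
        leave-one-out cross-validation as in the abstract."""
        clf = KNeighborsClassifier(n_neighbors=k)
        return cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()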
