
    Multi-scale Entropy and Multiclass Fisher’s Linear Discriminant for Emotion Recognition Based on Multimodal Signal

    Emotion recognition from physiological signals has been a frequently discussed topic among researchers and practitioners over the past decade. However, the use of SpO2 and pulse rate signals for emotion recognition remains very limited, and reported results have shown low accuracy, owing to the low complexity of the characteristics of SpO2 and pulse rate signals. Therefore, this study proposes Multiscale Entropy for feature extraction and Multiclass Fisher’s Linear Discriminant Analysis for dimensionality reduction of these physiological signals, to improve emotion recognition accuracy in elderly subjects. The dimensionality reduction process was organized into three experimental schemes: using only SpO2 signals, using only pulse rate signals, and using multimodal signals (the concatenated feature vectors of the SpO2 and pulse rate signals). Each scheme was then classified into three emotion classes (happy, sad, and angry) using Support Vector Machine and Linear Discriminant Analysis classifiers. The results showed that the Support Vector Machine with the third scheme achieved the best performance, with an accuracy of 95.24%, an improvement of more than 22% over previous work.
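    A minimal sketch of the pipeline this abstract describes, assuming a standard coarse-grained sample-entropy formulation and scikit-learn's LDA and SVM implementations; the scale count, tolerance, and variable names (X_spo2, X_pulse, y) are illustrative assumptions, not details taken from the paper.

    ```python
    # Minimal sketch (not the authors' code) of the described pipeline:
    # multiscale entropy features -> multiclass Fisher's LDA projection -> SVM classifier.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    def sample_entropy(x, m=2, r=None):
        """Sample entropy of a 1-D signal (embedding dimension m, tolerance r)."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)
        def pairs_within_r(dim):
            t = np.array([x[i:i + dim] for i in range(len(x) - dim)])
            d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)  # Chebyshev distance
            return (np.sum(d <= r) - len(t)) / 2                       # exclude self-matches
        b, a = pairs_within_r(m), pairs_within_r(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, max_scale=10, m=2):
        """Coarse-grain the signal at scales 1..max_scale and compute sample entropy at each."""
        x = np.asarray(x, dtype=float)
        feats = []
        for tau in range(1, max_scale + 1):
            n = len(x) // tau
            coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
            feats.append(sample_entropy(coarse, m=m))
        return np.array(feats)

    # X_spo2, X_pulse: lists of raw signals (one per recording); y: labels (0=happy, 1=sad, 2=angry)
    # Third (multimodal) scheme: concatenate the two entropy feature vectors per recording.
    # X = np.hstack([np.vstack([multiscale_entropy(s) for s in X_spo2]),
    #                np.vstack([multiscale_entropy(s) for s in X_pulse])])
    # model = make_pipeline(LinearDiscriminantAnalysis(n_components=2),  # multiclass Fisher projection
    #                       SVC(kernel="rbf"))
    # print(cross_val_score(model, X, y, cv=5).mean())
    ```

    In the third (multimodal) scheme, the SpO2 and pulse-rate entropy vectors are simply concatenated before the Fisher projection, which matches the scheme the abstract reports as performing best.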

    Feature Space Augmentation: Improving Prediction Accuracy of Classical Problems in Cognitive Science and Computer Vision

    Prediction accuracy in many classical problems across multiple domains has risen since computational tools such as multi-layer neural networks and complex machine learning algorithms became widely accessible to the research community. In this research, we take a step back and examine the feature space in two problems from very different domains, and show that novel augmentation of the feature space yields higher performance. Emotion Recognition in Adults from a Control Group: The objective is to quantify the emotional state of an individual at any time using data collected by wearable sensors. We define the emotional state as a mixture of amusement, anger, disgust, fear, sadness, anxiety, and neutral, each with its own level at any time. The resulting model predicts an individual’s dominant state and generates an emotional spectrum, a 1x7 vector indicating the level of each of these states. We present an iterative learning framework that adapts the feature space to an individual’s emotion perception and predicts the emotional state using that individual-specific feature space. Hybrid Feature Space for Image Classification: The objective is to improve the accuracy of existing image recognition by leveraging text features extracted from the images. As humans, we perceive objects using colors, dimensions, geometry, and any textual information we can gather; current image recognition algorithms rely exclusively on the first three and ignore the textual information. This study develops and tests an approach that trains a classifier on a hybrid text-based feature space, achieving accuracy comparable to state-of-the-art CNNs while being significantly less expensive computationally. Moreover, when combined with CNNs, the approach yields a statistically significant boost in accuracy. Both models are validated using cross-validation and holdout validation and are evaluated against the state of the art.
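    A hedged sketch of the hybrid text-based feature space idea, assuming per-image text is already available from an OCR step and that CNN embeddings are precomputed; the function name hybrid_features and the choice of TF-IDF plus a linear SVM are illustrative assumptions, not the authors' implementation.

    ```python
    # Hedged sketch of a hybrid text-plus-image feature space (illustrative, not the authors' code).
    import numpy as np
    from scipy.sparse import hstack
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    def hybrid_features(texts, cnn_feats):
        """Concatenate TF-IDF features of per-image text with CNN embeddings."""
        # Vectoriser is fitted on all texts here for brevity; a pipeline would avoid fold leakage.
        text_feats = TfidfVectorizer(min_df=2).fit_transform(texts)   # (n_images, vocab)
        return hstack([text_feats, np.asarray(cnn_feats)])            # (n_images, vocab + d)

    # texts: OCR'd string per image; cnn_feats: (n_images, d) embeddings from a pretrained CNN; y: labels
    # X = hybrid_features(texts, cnn_feats)
    # clf = LinearSVC()
    # print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy, as in the abstract
    ```

    Concatenating the two views lets a single classifier exploit textual cues that a purely visual model ignores, which is the kind of combination the abstract credits for the accuracy boost over CNN-only features.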