4 research outputs found

    User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition in Federated Learning

    Many existing privacy-enhanced speech emotion recognition (SER) frameworks focus on perturbing the original speech data through adversarial training within a centralized machine learning setup. However, this privacy protection scheme can fail, since the adversary can still access the perturbed data. In recent years, distributed learning algorithms, especially federated learning (FL), have gained popularity as a way to protect privacy in machine learning applications. While FL offers a natural safeguard by keeping data on local devices, prior work has shown that privacy attacks, such as attribute inference attacks, are still achievable against SER systems trained using FL. In this work, we evaluate user-level differential privacy (UDP) for mitigating the privacy leakage of SER systems in FL. UDP provides theoretical privacy guarantees governed by the parameters ε and δ. Our results show that UDP can effectively reduce attribute information leakage while preserving the utility of the SER system when the adversary has access to a single model update. However, the efficacy of UDP degrades when the FL system leaks more model updates to the adversary. The code to reproduce our results is publicly available at https://github.com/usc-sail/fed-ser-leakage
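    The user-level DP mechanism referred to above is typically realised by clipping each user's model update and adding calibrated Gaussian noise before aggregation. The sketch below is a minimal illustration of that recipe, not the implementation in the linked repository; the function names, the ε ≤ 1 Gaussian-mechanism calibration, and the assumption that each user contributes one clipped update per round are choices made here for illustration only.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a user's model update to an L2 norm of at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def gaussian_sigma(sensitivity, epsilon, delta):
    """Classic Gaussian-mechanism noise scale for (epsilon, delta)-DP
    (valid for epsilon <= 1)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def aggregate_with_udp(client_updates, clip_norm=1.0, epsilon=0.5, delta=1e-5, seed=0):
    """Server-side step: clip every user's update, average them, and add
    Gaussian noise scaled to the per-user sensitivity clip_norm / n."""
    rng = np.random.default_rng(seed)
    n = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean_update = np.mean(clipped, axis=0)
    sigma = gaussian_sigma(clip_norm / n, epsilon, delta)
    return mean_update + rng.normal(0.0, sigma, size=mean_update.shape)

# Example: ten users, each contributing a 4-dimensional "model update".
updates = [np.random.default_rng(i).normal(size=4) for i in range(10)]
print(aggregate_with_udp(updates))
```

    A single noised aggregate masks any individual user's contribution; releasing many rounds of updates consumes the privacy budget, which is consistent with the abstract's observation that UDP weakens when more model updates leak to the adversary.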

    Emotion recognition in public speaking scenarios utilising an LSTM-RNN approach with attention


    Analysis of learners' emotions in e-learning environments based on cognitive sciences

    The present study aimed to examine students' emotions in e-learning classes through facial expressions and to investigate the influence of different instructional methods on students' emotional responses. In this study, we examined the facial expressions of 17 undergraduate students using three different methods of presenting educational content (PowerPoint, video, and Kahoot) in online classes and analyzed the data with FaceReader software. The findings demonstrated that students experienced a range of positive and negative emotions under the different methods of content delivery. Furthermore, comparing the three methods revealed that the Kahoot method elicited the highest average level of positive emotions among students. This difference can be attributed to the visual attractiveness and interactive nature of the Kahoot environment. Additionally, this study highlights that simply incorporating multimedia materials, such as PowerPoint presentations and videos, is not sufficient to enhance effectiveness and cultivate positive emotions in e-learning. While multimedia materials serve as supportive tools and enhance visualization, interaction at various levels (with content, the teacher, peers, etc.) is necessary. Nevertheless, the significance of this research lies in the innovative application of a tool for analyzing emotions in online learning classrooms, thereby improving the measurement of genuine and objective emotional responses in e-learning environments.

    On the Recognition of Emotion from Physiological Data

    This work encompasses several objectives, but is primarily concerned with an experiment in which 33 participants were shown 32 slides in order to create 'weakly induced emotions'. Recordings of the participants' physiological state were taken, along with self-reports of their emotional state. We then used an assortment of classifiers to predict emotional state from the recorded physiological signals, a process known as Physiological Pattern Recognition (PPR). We investigated techniques for recording, processing, and extracting features from six different physiological signals: Electrocardiogram (ECG), Blood Volume Pulse (BVP), Galvanic Skin Response (GSR), Electromyography (EMG) of the corrugator muscle, skin temperature of the finger, and respiratory rate. We improved on the state of PPR emotion detection by detecting 9 different weakly induced emotional states at nearly 65% accuracy, an improvement in the number of states readily detectable. The work presents many investigations into numerical feature extraction from physiological signals and includes a chapter dedicated to collating and trialing facial electromyography techniques. We also created a hardware device to collect participants' self-reported emotional states, which led to several improvements in the experimental procedure.
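    As a rough illustration of the kind of numerical feature extraction and classification pipeline described above, the sketch below computes simple per-window statistics from a synthetic signal and cross-validates an off-the-shelf classifier. It is not the thesis' pipeline; the feature set, the k-nearest-neighbour classifier, and the random stand-in data are assumptions for illustration only.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal):
    """Simple statistical features of one physiological signal window
    (e.g. GSR or finger skin temperature recorded while a slide is shown)."""
    diff = np.diff(signal)
    return np.array([signal.mean(), signal.std(), signal.min(), signal.max(),
                     diff.mean(), diff.std()])

# Synthetic stand-in data: 33 participants x 32 slides, one 256-sample signal
# window per slide, and one of 9 weakly induced emotion labels per window.
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(size=256)) for _ in range(33 * 32)])
y = rng.integers(0, 9, size=33 * 32)

# Cross-validated accuracy of a k-NN classifier on these features.
clf = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(clf, X, y, cv=5).mean())
```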