Linguistic and Gender Variation in Speech Emotion Recognition using Spectral Features
This work explores the effect of gender- and linguistic-based vocal variations
on the accuracy of emotive expression classification. Emotive expressions are
considered from the perspective of spectral features in speech (Mel-frequency
Cepstral Coefficients, Mel spectrogram, Spectral Contrast). Emotions are
considered from the perspective of Basic Emotion Theory. A convolutional neural
network is utilised to classify emotive expressions in emotive audio datasets
in English, German, and Italian. Vocal variations in spectral features are
assessed by (i) a comparative analysis identifying suitable spectral features,
(ii) classification performance on mono-, multi- and cross-lingual emotive
data, and (iii) an empirical evaluation of a machine learning model assessing
the effects of gender and linguistic variation on classification accuracy. The
results showed that spectral features provide a potential avenue for improving
the accuracy of emotive expression classification. Additionally, classification
accuracy was high within mono- and cross-lingual emotive data, but poor in
multi-lingual data. Similarly, there were differences in
classification accuracy between gender populations. These results demonstrate
the importance of accounting for population differences to enable accurate
speech emotion recognition.

Comment: Presented at AICS 2021 Conference - Machine Learning for Time Series
section. Published in CEUR Vol-3105: http://ceur-ws.org/Vol-3105/paper34.pdf.
This publication has emanated from research supported in part by a grant from
Science Foundation Ireland under grant number 18/CRT/6222. Associated source
code: https://github.com/ZacDair/SER_Platform_AICS. 12 pages, 5 figures.
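For illustration, here is a minimal sketch of how the three spectral features
named above can be extracted with librosa; the file name, sample rate and
parameter values are placeholder assumptions, not the configuration from the
authors' released code.

import librosa
import numpy as np

# Load a speech sample; the path and sample rate are hypothetical.
y, sr = librosa.load("speech_sample.wav", sr=22050)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)            # MFCCs
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)  # Mel spectrogram
contrast = librosa.feature.spectral_contrast(y=y, sr=sr)      # Spectral Contrast

# Time-average each feature and stack into one fixed-length vector, a common
# way to prepare input for a downstream classifier (the paper itself feeds a
# convolutional neural network, so its inputs may instead be 2D spectrograms).
features = np.concatenate([
    mfcc.mean(axis=1),
    librosa.power_to_db(mel).mean(axis=1),
    contrast.mean(axis=1),
])
print(features.shape)  # (40 + 128 + 7,)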
Classification of Stress via Ambulatory ECG and GSR Data
In healthcare, detecting stress and enabling individuals to monitor their
mental health and wellbeing are challenging tasks. Advancements in wearable technology
now enable continuous physiological data collection. This data can provide
insights into mental health and behavioural states through psychophysiological
analysis. However, automated analysis is required to provide timely results due
to the quantity of data collected. Machine learning has shown efficacy in
providing an automated classification of physiological data for health
applications in controlled laboratory environments. Ambulatory uncontrolled
environments, however, provide additional challenges requiring further
modelling to overcome. This work empirically assesses several approaches
utilising machine learning classifiers to detect stress using physiological
data recorded in an ambulatory setting with self-reported stress annotations. A
subset of the training portion of the SMILE dataset enables the evaluation of
approaches before submission. The optimal stress detection approach achieves
90.77% classification accuracy, an F1-score of 91.24, sensitivity of 90.42 and
specificity of 91.08, utilising an ExtraTrees classifier and feature imputation
methods.
Meanwhile, accuracy on the challenge data is much lower at 59.23% (submission
#54 from BEaTS-MTU, username ZacDair). The cause of the performance disparity
is explored in this work.

Comment: Associated code to enable reproducible experimental work:
https://github.com/ZacDair/EMBC_Release. SMILE dataset provided by the
Computational Wellbeing Group (COMPWELL):
https://compwell.rice.edu/workshops/embc2022/dataset -
https://compwell.rice.edu
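For illustration, here is a minimal sketch of the kind of pipeline described
above: feature imputation followed by an ExtraTrees classifier in
scikit-learn. The data, imputation strategy and hyperparameters are
placeholder assumptions, not those of the submitted system.

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the physiological feature matrix and
# self-reported stress labels; roughly 10% of values are made missing
# to mimic sensor dropout in ambulatory recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
X[rng.random(X.shape) < 0.1] = np.nan
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    SimpleImputer(strategy="mean"),                     # impute missing values
    ExtraTreesClassifier(n_estimators=200, random_state=0),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")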
Variance in Classifying Affective State via Electrocardiogram and Photoplethysmography
Advances in wearable technology have significantly increased the sensitivity
and accuracy of devices for recording physiological signals. Commercial
off-the-shelf wearable devices can gather large quantities of physiological
data unobtrusively. This enables momentary assessments of human physiology,
which provide valuable insights into an individual's health and psychological
state. Leveraging these insights provides significant benefits for
human-to-computer interaction and personalised healthcare. This work
contributes an analysis of the variance in features representative of
affective states extracted from electrocardiogram and photoplethysmography
signals; it then identifies the cardiac measures most descriptive of affective
states in both signals, providing insights into signal-specific and
emotion-specific cardiac measures; finally, it establishes baseline performance
for automated affective state detection from physiological signals.

Comment: Associated source code: https://github.com/ZacDair/Emo_Phys_Eva
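For illustration, here is a minimal sketch of a per-feature analysis of
variance across affective states, in the spirit of the analysis described
above. The cardiac measure names, affective states and data are hypothetical
placeholders, not the paper's dataset or method.

import numpy as np
import pandas as pd
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
states = ["neutral", "amusement", "stress"]
measures = ["mean_hr", "rmssd", "sdnn"]  # common ECG/PPG-derived measures

# Hypothetical table of cardiac measures labelled with affective state.
df = pd.DataFrame({"state": rng.choice(states, size=300),
                   **{m: rng.normal(size=300) for m in measures}})

# One-way ANOVA per measure: which cardiac measures vary most with state?
for m in measures:
    groups = [df.loc[df["state"] == s, m] for s in states]
    stat, p = f_oneway(*groups)
    print(f"{m}: F={stat:.2f}, p={p:.3f}")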