1,298 research outputs found

    Audio Features Affected by Music Expressiveness

    Within a Music Information Retrieval perspective, the goal of the study presented here is to investigate the impact of the musician's affective intention on sound features, namely when the musician intentionally tries to convey emotional content via expressiveness. A preliminary experiment was performed involving 10 tuba players. The recordings were analysed by extracting a variety of features, which were subsequently evaluated by combining classic statistical techniques and machine learning. Results are reported and discussed.
    Comment: Submitted to the ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016), Pisa, Italy, July 17-21, 2016
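
    As a rough illustration of this kind of pipeline, the sketch below summarises a few spectral features per recording and cross-validates a classifier on the affective-intention labels. The abstract does not name the authors' toolchain; librosa, scikit-learn, the synthetic stand-in signals, and the label names are all assumptions for illustration.

    ```python
    # Hypothetical sketch: summarise frame-level audio features per recording
    # and test whether a classifier separates expressive from neutral takes.
    # librosa and scikit-learn are assumptions; the paper names no tools.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def extract_features(y, sr):
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
        rms = librosa.feature.rms(y=y)
        # Summarise each frame-level feature by its mean and standard deviation.
        return np.hstack([np.hstack([f.mean(axis=1), f.std(axis=1)])
                          for f in (mfcc, centroid, rms)])

    # Placeholder corpus: synthetic signals stand in for the tuba recordings
    # (in practice: y, sr = librosa.load(path) for each take).
    rng = np.random.default_rng(0)
    sr = 22050
    X = np.array([extract_features(rng.normal(size=sr), sr) for _ in range(20)])
    labels = np.array(["neutral", "expressive"] * 10)  # hypothetical labels

    # Cross-validated accuracy as a first check of how informative the features are.
    print(cross_val_score(RandomForestClassifier(n_estimators=200), X, labels, cv=5).mean())
    ```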

    Affective Man-Machine Interface: Unveiling human emotions through biosignals

    As has been known for centuries, humans exhibit an electrical profile. This profile is altered by various psychological and physiological processes, which can be measured through biosignals, e.g., electromyography (EMG) and electrodermal activity (EDA). These biosignals can reveal our emotions and, as such, can serve as an advanced man-machine interface (MMI) for empathic consumer products. However, such an MMI requires the correct classification of biosignals into emotion classes. This chapter starts with an introduction to biosignals for emotion detection. Next, a state-of-the-art review of automatic emotion classification is presented, along with guidelines for affective MMI. Subsequently, a study is presented that explores the use of EDA and three facial EMG signals to determine neutral, positive, negative, and mixed emotions, using recordings of 21 people. A range of techniques was tested, resulting in a generic framework for automated emotion classification with up to 61.31% correct classification of the four emotion classes, without the need for personal profiles. Among various other directives for future research, the results emphasize the need for parallel processing of multiple biosignals.
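
    A minimal sketch of the kind of generic (person-independent) classification framework the chapter describes might look as follows. The feature values, the SVM choice, and the grouped cross-validation are assumptions, not the chapter's actual pipeline; the grouping simply ensures the model is always tested on participants it has never seen, which is what "without personal profiles" requires.

    ```python
    # Illustrative sketch (not the chapter's exact pipeline): classify four
    # emotion classes from EDA and facial-EMG feature vectors without
    # per-person profiles, i.e. train and test on different participants.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(210, 8))      # placeholder: summary stats of EDA + 3 EMG channels
    y = rng.integers(0, 4, size=210)   # neutral / positive / negative / mixed
    subjects = np.repeat(np.arange(21), 10)  # 21 participants, 10 trials each

    # GroupKFold keeps each participant's trials in a single fold, so the
    # classifier is always evaluated on people it has never seen (generic model).
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=subjects)
    print(scores.mean())
    ```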

    Speech emotion classification using SVM and MLP on prosodic and voice quality features

    In this paper, a comparison of emotion classification by the Support Vector Machine (SVM) and the Multi-Layer Perceptron (MLP) neural network, using prosodic and voice quality features extracted from the Berlin Emotional Database, is reported. The features were extracted using the PRAAT tools, while the WEKA tool was used for classification. Different parameter settings were tried for both SVM and MLP to obtain an optimized emotion classification. The results show that MLP outperforms SVM in overall emotion classification performance, although training the SVM was much faster. The overall accuracy was 76.82% for SVM and 78.69% for MLP. Sadness was the emotion best recognized by MLP, with an accuracy of 89.0%, while anger was the emotion best recognized by SVM, with an accuracy of 87.4%. The emotions most often confused in the MLP classification were happiness and fear, while for SVM they were disgust and fear.
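
    The paper's experiments were run in WEKA on PRAAT-extracted features; a rough scikit-learn analogue of the comparison might look like the sketch below, which fits both classifiers on the same feature table and reports accuracy alongside training time. The synthetic data and model hyperparameters are placeholders, not the paper's settings.

    ```python
    # Rough scikit-learn analogue of the paper's WEKA comparison: fit an SVM
    # and an MLP on the same prosodic/voice-quality feature table and compare
    # accuracy and training time. Feature extraction (done with PRAAT in the
    # paper) is assumed to have produced X and y already.
    import time
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(535, 20))    # placeholder for Berlin EmoDB features
    y = rng.integers(0, 7, size=535)  # the Berlin corpus covers 7 emotions

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    for name, model in [("SVM", SVC(kernel="rbf", C=1.0)),
                        ("MLP", MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000))]:
        clf = make_pipeline(StandardScaler(), model)
        t0 = time.perf_counter()
        clf.fit(X_tr, y_tr)
        print(name, f"acc={clf.score(X_te, y_te):.3f}",
              f"train_time={time.perf_counter() - t0:.2f}s")
    ```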

    Automatic Recognition of Emotional States From Human Speeches

    Speech emotion recognition based on SVM and KNN classifications fusion

    Recognizing emotion in speech is one of the most active research topics in speech processing and human-computer interaction. Despite a wide range of studies in this area, there is still a large gap between the natural feelings of humans and what a computer can perceive. In general, a speech emotion recognition system can be divided into three main stages: feature extraction, feature selection, and classification. In this paper, fundamental frequency (F0), energy (E), zero-crossing rate (ZCR), and Fourier parameter (FP) features, and various combinations of them, are extracted from the speech signal. The principal component analysis (PCA) algorithm is then used to reduce the number of features. To evaluate system performance, the classification of each emotional state is performed by fusing support vector machine (SVM) and K-nearest neighbour (KNN) classifiers. For comparison, similar experiments were performed on emotional speech in German and English, and significant results were obtained.
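
    A sketch of the described pipeline (features, then PCA, then SVM/KNN fusion) is given below. The abstract does not specify the fusion rule, so soft voting over the two classifiers' class probabilities is assumed, and the feature matrix is a synthetic placeholder.

    ```python
    # Sketch of the pipeline described above: concatenated features -> PCA ->
    # fused SVM/KNN decision. The abstract gives no fusion rule; soft voting
    # over class probabilities is one plausible choice.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import VotingClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 60))   # placeholder: F0 + energy + ZCR + Fourier features
    y = rng.integers(0, 6, size=400)

    fusion = VotingClassifier(
        estimators=[("svm", SVC(kernel="rbf", probability=True)),
                    ("knn", KNeighborsClassifier(n_neighbors=5))],
        voting="soft")  # average the two classifiers' class probabilities

    pipe = make_pipeline(StandardScaler(), PCA(n_components=20), fusion)
    print(cross_val_score(pipe, X, y, cv=5).mean())
    ```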

    Recognition of Emotions using Energy Based Bimodal Information Fusion and Correlation

    Multi-sensor information fusion is a rapidly developing research area that forms the backbone of numerous essential technologies such as intelligent robotic control, sensor networks, and video and image processing. In this paper, we develop a novel technique to analyze and correlate human emotions expressed in voice tone and facial expression. Audio and video streams are captured to populate bimodal data sets for sensing the emotions expressed in voice tone and facial expression, respectively. An energy-based mapping is performed to overcome the inherent heterogeneity of the recorded bimodal signals. The fusion process uses the sampled and mapped energy signals of both modalities' data streams and then recognizes the overall emotional state using a Support Vector Machine (SVM) classifier, with an accuracy of 93.06%.
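
    The sketch below illustrates one plausible reading of the energy-based mapping: each modality's stream is reduced to a normalised short-term energy envelope on a common time grid, the two envelopes are concatenated, and an SVM classifies the fused vector. Window sizes, the resampling step, and fusion by concatenation are assumptions; the paper's exact mapping is not given in the abstract.

    ```python
    # Illustrative sketch of energy-based bimodal fusion: map each modality's
    # stream to a normalised short-term energy signal so the heterogeneous
    # audio and video data share a scale, then fuse and classify with an SVM.
    import numpy as np
    from sklearn.svm import SVC

    def short_term_energy(signal, win=256):
        # Mean squared amplitude per non-overlapping window.
        n = len(signal) // win
        frames = signal[:n * win].reshape(n, win)
        return (frames ** 2).mean(axis=1)

    def normalise(e):
        return (e - e.min()) / (e.max() - e.min() + 1e-12)

    def fuse(audio, video_motion, n_samples=50):
        # Resample both energy envelopes to a common length and concatenate.
        ea = normalise(short_term_energy(audio))
        ev = normalise(short_term_energy(video_motion, win=4))
        grid = np.linspace(0, 1, n_samples)
        ea_r = np.interp(grid, np.linspace(0, 1, len(ea)), ea)
        ev_r = np.interp(grid, np.linspace(0, 1, len(ev)), ev)
        return np.concatenate([ea_r, ev_r])

    rng = np.random.default_rng(0)
    # Placeholder clips: audio waveform + per-frame facial motion magnitude.
    X = np.array([fuse(rng.normal(size=16000), rng.normal(size=120)) for _ in range(100)])
    y = rng.integers(0, 4, size=100)  # placeholder emotion labels

    clf = SVC(kernel="rbf").fit(X[:80], y[:80])
    print("held-out accuracy:", clf.score(X[80:], y[80:]))
    ```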