
    Affective analysis of customer service calls

    This paper presents an affective and acoustic-prosodic analysis of a call-center corpus (700 phone calls with corresponding customer satisfaction levels). Our main goal is to understand how customer satisfaction correlates with the acoustic-prosodic and affective information (emotions and personality traits) of the interactions. A subset of 30 calls was manually annotated with emotions (frustrated vs. neutral) and personality traits (Big Five model). Results on automatic satisfaction prediction from acoustic-prosodic features show a number of very informative linguistic knowledge-based features, especially pitch and energy ranges. The affective analysis also provides encouraging results, relating low/high satisfaction levels to the presence/absence of customer frustration. Concerning personality, customers tend to express signs of anxiety and nervousness, while agents are generally perceived as extroverted and open.
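
    As a rough illustration of the pitch- and energy-range features the abstract highlights as informative, the sketch below computes both per call. It assumes librosa; the function name and parameter choices are illustrative, not taken from the paper.

```python
# Sketch: pitch range and energy range for one call (assumes librosa;
# f0 search bounds and other parameters are illustrative assumptions).
import numpy as np
import librosa

def prosodic_ranges(wav_path, sr=16000):
    y, sr = librosa.load(wav_path, sr=sr)
    # Fundamental frequency via probabilistic YIN; NaN where unvoiced.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]
    # Short-time energy via RMS.
    rms = librosa.feature.rms(y=y)[0]
    return {
        "pitch_range_hz": float(f0.max() - f0.min()) if f0.size else 0.0,
        "energy_range": float(rms.max() - rms.min()),
    }
```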

    Determination of Formant Features in Czech and Slovak for GMM Emotional Speech Classifier

    The paper is aimed at the determination of formant features (FF), which describe vocal tract characteristics. It comprises an analysis of the first three formant positions together with their bandwidths and the formant tilts. Subsequently, a statistical evaluation and comparison of the FF was performed. This experiment was realized with speech material in the form of sentences by male and female speakers expressing four emotional states (joy, sadness, anger, and a neutral state) in the Czech and Slovak languages. The statistical distribution of the analyzed formant frequencies and formant tilts shows good differentiation between neutral and emotional styles for both voices. In contrast, the values of the formant 3-dB bandwidths show no correlation with the type of speaking style or the type of voice. These spectral parameters, together with the values of other speech characteristics, were used in the feature vector for the Gaussian mixture model (GMM) emotional speech style classifier that is currently under development. The overall mean classification error rate is about 18%, and the best obtained error rate is 5%, for the sadness style of the female voice. These values are acceptable in this first stage of development of the GMM classifier, which should be used for evaluating synthetic speech quality after applied voice conversion and emotional speech style transformation.
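
    The abstract does not name its analysis toolchain; the sketch below shows how formant positions and 3-dB bandwidths of the kind described could be extracted with the praat-parselmouth bindings, and how a per-emotion GMM classifier could score them with scikit-learn. All names and parameters here are illustrative assumptions.

```python
# Sketch: F1-F3 positions and bandwidths per analysis frame, plus a
# per-emotion GMM classifier (assumes praat-parselmouth and scikit-learn).
import numpy as np
import parselmouth
from sklearn.mixture import GaussianMixture

def formant_features(wav_path, step=0.01):
    snd = parselmouth.Sound(wav_path)
    formants = snd.to_formant_burg()  # Burg-method formant tracking
    times = np.arange(0, snd.duration, step)
    feats = []
    for t in times:
        row = []
        for n in (1, 2, 3):  # formant position and 3-dB bandwidth
            row.append(formants.get_value_at_time(n, t))
            row.append(formants.get_bandwidth_at_time(n, t))
        feats.append(row)
    return np.array(feats)

def fit_gmm_per_emotion(features_by_emotion, n_components=8):
    # One GMM per emotional style, fitted on that style's frames.
    return {
        emo: GaussianMixture(n_components=n_components).fit(X)
        for emo, X in features_by_emotion.items()
    }

def classify(models, X):
    # Pick the emotion whose GMM gives the highest mean log-likelihood.
    return max(models, key=lambda emo: models[emo].score(X))
```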

    A Review on Speech Emotion Recognition

    Emotion recognition from audio signals is a recent research topic in human-computer interaction. Demand has risen for richer communication interfaces between humans and digital media. Many researchers are working to improve recognition accuracy, but there is still no complete system that can recognize emotions from speech. To make human-machine interaction more natural, the computer should be able to recognize emotional states in the same way a human does. The efficiency of an emotion recognition system depends on the type of features extracted and the classifier used to detect emotions. Some fundamental emotions are: happy, angry, sad, depressed, bored, anxious, fearful, and nervous. The speech signals are preprocessed and analyzed using various techniques. In feature extraction, the parameters used to form a feature vector include fundamental frequency, pitch contour, formants, and duration (pause-length ratio). These features are then classified into different emotions. This work studies speech emotion classification, addressing three important aspects of the design of a speech emotion recognition system: the choice of suitable features for speech representation, the design of an appropriate classification scheme, and the proper preparation of an emotional speech database for evaluating system performance.
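
    One of the duration features mentioned above, the pause-length ratio, can be sketched as silence time over total time. The sketch below assumes librosa; the energy threshold is an assumption, not a value from the paper.

```python
# Sketch: pause-length ratio = fraction of the signal that is silent
# (assumes librosa; top_db threshold is an illustrative assumption).
import librosa

def pause_length_ratio(wav_path, top_db=30):
    y, sr = librosa.load(wav_path, sr=None)
    # Non-silent intervals relative to the signal's peak, in samples.
    intervals = librosa.effects.split(y, top_db=top_db)
    voiced = sum(end - start for start, end in intervals)
    return 1.0 - voiced / len(y)
```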

    PEMO: A New Validated Dataset for Punjabi Speech Emotion Detection

    This research work presents a new validated dataset for Punjabi, the Punjabi Emotional Speech Database (PEMO), developed to assess the ability of both computers and humans to recognize emotions in speech. PEMO includes speech samples from about 60 speakers aged between 20 and 45 years, covering four fundamental emotions: anger, sadness, happiness, and neutral. To create the data, Punjabi films were retrieved from multimedia websites such as YouTube. The movies were processed and segmented into utterances with the PRAAT software. The database contains 22,000 natural utterances, equivalent to 12 hours and 35 minutes of speech taken from online Punjabi movies and web series. Three annotators, all with thorough knowledge of the Punjabi language, categorized the emotional content of the utterances; a label agreed upon by all annotators becomes the final label for the utterance. The data is used to study the expression of emotions in Punjabi speech.
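
    The labelling rule described above (an utterance keeps a label only when all annotators assign it) can be sketched as follows; the function and label names are illustrative.

```python
# Sketch: full-agreement consensus labelling, as described in the abstract.
def consensus_label(annotations):
    """annotations: the labels assigned by the three annotators."""
    return annotations[0] if len(set(annotations)) == 1 else None

labels = [consensus_label(a) for a in [
    ["anger", "anger", "anger"],    # kept as "anger"
    ["happy", "neutral", "happy"],  # discarded: no full agreement
]]
```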

    Speech emotion recognition based on SVM and KNN classifications fusion

    Recognizing emotion in speech is one of the most active research topics in speech processing and human-computer interaction. Despite a wide range of studies in this area, there is still a large gap between the natural feelings of humans and their perception by computers. In general, an emotion recognition system for speech can be divided into three main stages: feature extraction, feature selection, and classification. In this paper, fundamental frequency (F0), energy (E), zero-crossing rate (ZCR), Fourier parameters (FP), and various combinations of them are extracted from the data vector; the principal component analysis (PCA) algorithm is then used to reduce the number of features. To evaluate system performance, classification of each emotional state is performed by fusing support vector machine (SVM) and K-nearest neighbor (KNN) classifiers. For comparison, similar experiments were performed on emotional speech in German and English, and significant results were obtained.
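
    The abstract does not spell out how the SVM and KNN decisions are fused; one plausible reading is soft voting over their predicted probabilities, sketched below with scikit-learn. All parameter values here are assumptions, not the authors' settings.

```python
# Sketch: PCA for feature reduction, then SVM + KNN fused by soft voting
# (assumes scikit-learn; the fusion rule is an assumption, see above).
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),          # keep 95% of the variance
    VotingClassifier(
        estimators=[
            ("svm", SVC(kernel="rbf", probability=True)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="soft",               # average predicted probabilities
    ),
)
# Usage: clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```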