
    Classification of Emotional Speech of Children Using Probabilistic Neural Network

    Child emotions are highly flexible and overlapping, and recognition becomes difficult when a single emotion conveys multiple kinds of information. We analyze the relevance and importance of the features and use that information to design the classifier architecture. Designing a system that recognizes children's emotions with reasonable accuracy, particularly with a reduced feature set, remains a challenge. In this paper, a probabilistic neural network (PNN) is designed for this classification task. The PNN trains quickly using continuous class probability density functions and classifies well even with a reduced feature set. LP_VQC and pH vectors are used as the classifier features, and the PNN classifier is designed around them. Four emotions, angry, bored, sad, and happy, are considered in this work, collected from children in three languages: English, Hindi, and Odia. The results show remarkable classification accuracy for these emotion classes and have been verified on the standard EMO-DB database for validation.
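    To make the classification idea concrete, the sketch below implements a minimal Parzen-window PNN in NumPy: each class score is the mean of Gaussian kernels centred on the stored training patterns of that class, and the class with the highest score is returned. The smoothing parameter, the toy data, and the feature dimensionality are illustrative assumptions; they stand in for the paper's LP_VQC and pH feature vectors rather than reproducing them.

```python
import numpy as np

class PNN:
    """Minimal Parzen-window PNN sketch (equal priors, shared kernel width)."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # spread of the Gaussian kernels

    def fit(self, X, y):
        # Store training patterns grouped by emotion class.
        self.classes_ = np.unique(y)
        self.patterns_ = {c: X[y == c] for c in self.classes_}
        return self

    def predict(self, X):
        preds = []
        for x in X:
            scores = []
            for c in self.classes_:
                # Mean of Gaussian kernels centred on each stored pattern:
                # proportional to the class-conditional density estimate.
                d2 = np.sum((self.patterns_[c] - x) ** 2, axis=1)
                scores.append(np.mean(np.exp(-d2 / (2 * self.sigma ** 2))))
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)

# Toy usage: random vectors stand in for LP_VQC/pH features of four emotions.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 8))
y_train = np.array(["angry", "bored", "sad", "happy"] * 10)
clf = PNN(sigma=0.5).fit(X_train, y_train)
print(clf.predict(rng.normal(size=(3, 8))))
```

    Because the only "training" is storing the patterns, fitting is essentially instantaneous, which is the fast-training property the abstract refers to; the cost is paid at prediction time instead.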

    Cooperative Learning and its Application to Emotion Recognition from Speech

    In this paper, we propose a novel method for highly efficient exploitation of unlabeled data: Cooperative Learning. Our approach combines Active Learning and Semi-Supervised Learning techniques with the aim of reducing the costly effort of human annotation. The core idea of Cooperative Learning is to share the labeling work between human and machine efficiently: instances predicted with insufficient confidence are subject to human labeling, while those predicted with high confidence are machine labeled. We conducted various test runs on two emotion recognition tasks with a variable number of initial supervised training instances and two different feature sets. The results show that Cooperative Learning consistently outperforms the individual Active and Semi-Supervised Learning techniques in all test cases. In particular, we show that our method based on the combination of Active Learning and Co-Training matches the performance of a model trained on the whole training set while using 75% fewer labeled instances. Our method therefore efficiently and robustly reduces the need for human annotation.
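    The loop below is a schematic, single-view sketch of the confidence split described above: per round, predictions above a confidence threshold are accepted as machine labels (the semi-supervised part), while a small batch of the least confident instances is sent to a human annotator (the active part). The classifier, the threshold value, and the query_human() stub are illustrative assumptions; the paper's actual method additionally uses Co-Training over two feature sets, which this sketch omits.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_human(indices, y_oracle):
    # Stand-in for manual annotation: return the gold labels.
    return y_oracle[indices]

def cooperative_learning(X_lab, y_lab, X_unlab, y_unlab_oracle,
                         threshold=0.9, batch=5, rounds=5):
    X, y = X_lab.copy(), y_lab.copy()
    pool = np.arange(len(X_unlab))          # indices still unlabeled
    for _ in range(rounds):
        if pool.size == 0:
            break
        model = LogisticRegression(max_iter=1000).fit(X, y)
        proba = model.predict_proba(X_unlab[pool])
        conf = proba.max(axis=1)
        confident = conf >= threshold

        # Machine labels the confidently predicted instances ...
        mach_idx = pool[confident]
        mach_y = model.classes_[proba[confident].argmax(axis=1)]

        # ... and a small batch of the least confident goes to the human.
        uncertain = pool[~confident]
        order = np.argsort(conf[~confident])
        hum_idx = uncertain[order[:batch]]
        hum_y = query_human(hum_idx, y_unlab_oracle)

        # Grow the labeled set and shrink the pool.
        X = np.vstack([X, X_unlab[mach_idx], X_unlab[hum_idx]])
        y = np.concatenate([y, mach_y, hum_y])
        pool = np.setdiff1d(pool, np.concatenate([mach_idx, hum_idx]))
    return LogisticRegression(max_iter=1000).fit(X, y)
```

    Raising the threshold shifts more of the labeling work to the human annotator; lowering it relies more on machine labels at the risk of propagating its own errors, which is the trade-off the confidence split is meant to balance.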