9 research outputs found

    Classification of Emotional Speech of Children Using Probabilistic Neural Network

    Children's emotions are highly variable and overlapping, and recognition becomes difficult when a single emotion conveys multiple kinds of information. We analyze the relevance and importance of the acoustic features and use that information to design the classifier architecture. Designing a system that recognizes children's emotions with reasonable accuracy is still a challenge, especially with a reduced feature set. In this paper, a Probabilistic Neural Network (PNN) is designed for this classification task. The PNN trains quickly, models continuous class probability density functions, and classifies well even with a reduced feature set. LP_VQC and pH vectors are used as the features for the classifier, and the PNN is designed around them. Emotions such as anger, boredom, sadness, and happiness are considered in this work, collected from children in three different languages: English, Hindi, and Odia. Results show remarkable classification accuracy for these emotion classes, and the approach has been verified on the standard EMO-DB database to validate the result.
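The PNN decision rule the abstract relies on can be sketched compactly: each class's probability density is estimated with a Gaussian Parzen window over that class's training samples, and the test sample is assigned to the class with the highest density. This is a toy illustration only, not the paper's implementation; the LP_VQC/pH feature extraction is omitted and the data below is invented.

```python
import math

def pnn_predict(train, labels, x, sigma=0.5):
    # Minimal Probabilistic Neural Network: estimate each class's
    # probability density with a Gaussian Parzen window over its
    # training samples, then pick the class with the highest density
    # (a Bayes decision under equal priors).
    best_class, best_density = None, -1.0
    for c in sorted(set(labels)):
        members = [p for p, lab in zip(train, labels) if lab == c]
        density = sum(
            math.exp(-sum((pi - xi) ** 2 for pi, xi in zip(p, x))
                     / (2 * sigma ** 2))
            for p in members
        ) / len(members)
        if density > best_density:
            best_class, best_density = c, density
    return best_class
```

The smoothing parameter `sigma` controls how far each training sample's influence spreads; in practice it would be tuned on held-out data.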

    An Overview of Limited Literature on Diagnosis and Treatment of Deaf and Hard of Hearing Individuals With Pediatric Bipolar Disorder

    The current literature indicates that children and adolescents with bipolar disorder and severe mood dysregulation find it more difficult to make decisions, recognize facial displays of emotion, etc. (McClure et al., 2005; Rich, Grimley, Schmajuk, Blair, Blair, & Leibenluft, 2008; Kim et al., 2013). While treatment with this population is unclear (Miklowitz et al., 2013; Miklowitz et al., 2014), there is even less literature pertaining to treatment of d/Deaf and hard of hearing (DHH) children who have bipolar disorder. An additional challenge for DHH individuals with bipolar disorder is access to treatment (McClure et al., 2005; Rich, Grimley, Schmajuk, Blair, Blair, & Leibenluft, 2008; Kim et al., 2013). However, there is limited data suggesting that there are ways of delivering services that may best meet the needs of this diverse and underserved population (Waxmonsky et al., 2013). Effective, evidence-based treatment for DHH children and adolescents with bipolar disorder needs further investigation. Recognizing the need for social-emotional support of DHH children with bipolar disorder, the following review of the literature highlights the need for informed evaluation, diagnosis, and treatment of DHH individuals with pediatric bipolar disorder.

    Finger Vein Recognition Using Principle Component Analysis and Adaptive k-Nearest Centroid Neighbor Classifier

    The k-nearest centroid neighbor (kNCN) classifier is a non-parametric classifier that provides a powerful decision based on the geometrical surrounding neighborhood. The main challenge in kNCN is its slow classification time, since all training samples are used to find each nearest centroid neighbor. In this work, an adaptive k-nearest centroid neighbor (akNCN) classifier is proposed as an improvement to kNCN. Two new rules are introduced to adaptively select the neighborhood size of the test sample: 1) the neighborhood size k is adapted to j if the centroid distance of the j-th nearest centroid neighbor is greater than a predefined boundary; 2) there is no need to look for further nearest centroid neighbors if the maximum number of samples of the same class is already found among the j nearest centroid neighbors, so the neighborhood size is adaptively changed to j. Experimental results on the Finger Vein USM (FV-USM) image database are promising: the classification time of the akNCN classifier is significantly reduced, to 51.56% of that of its closest competitors, kNCN and limited-kNCN. It also outperforms its competitors by achieving the best reduction ratio of 12.92% while maintaining the classification accuracy.
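The two adaptive stopping rules can be sketched as follows. Neighbors are chosen greedily so that the centroid of the selected set stays as close as possible to the test sample (the kNCN idea), and the loop stops early when the centroid distance exceeds a predefined boundary (rule 1) or when one class already holds an unbeatable majority of the k votes (rule 2). This is a simplified reading of the paper's rules, on invented data, not the authors' implementation.

```python
import math

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def aknn_centroid_predict(train, labels, x, k=5, boundary=2.0):
    # Adaptive k-nearest-centroid-neighbor sketch: greedily grow the
    # neighborhood, stopping early via the two adaptive rules.
    remaining = list(range(len(train)))
    chosen, votes = [], {}
    for _ in range(k):
        best_i, best_d = None, float("inf")
        for i in remaining:
            d = dist(centroid([train[j] for j in chosen] + [train[i]]), x)
            if d < best_d:
                best_i, best_d = i, d
        if chosen and best_d > boundary:      # rule 1: centroid drifted too far
            break
        chosen.append(best_i)
        remaining.remove(best_i)
        votes[labels[best_i]] = votes.get(labels[best_i], 0) + 1
        if votes[labels[best_i]] > k // 2:    # rule 2: majority already decided
            break
    return max(votes, key=votes.get)
```

Because the loop can terminate after j < k neighbors, the expensive centroid search is skipped for the remaining steps, which is where the claimed classification-time savings come from.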

    2D respiratory sound analysis to detect lung abnormalities

    In this paper, we analyze deep visual features from 2D data representations of respiratory sound to detect evidence of lung abnormalities. The primary motivation is that visual cues are more informative for decision-making than the raw 1D lung-sound signal. Early detection and prompt treatment are essential for possible future respiratory disorders, and respiratory sound is a proven biomarker. In contrast to state-of-the-art approaches, we aim at understanding/analyzing visual features using Convolutional Neural Network (CNN)-tailored deep learning models, considering 2D representations such as the spectrogram, Mel-frequency cepstral coefficients (MFCC), spectral centroid, and spectral roll-off. In our experiments on the publicly available respiratory sound database ICBHI 2017 (5.5 hours of recordings containing 6898 respiratory cycles from 126 subjects), we obtained the highest performance, an area under the curve (AUC) of 0.79, from spectrograms with a pre-trained deep learning model (VGG16), as opposed to an AUC of 0.48 from the raw data. Our study showed that 2D data representations can help to understand/analyze lung abnormalities better than 1D data, and our findings are contrasted with those of earlier studies. For generality, we also compared MFCC-based image data against raw data to determine which produced superior results.
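The central preprocessing step, turning a 1D sound signal into a 2D time-frequency image that a CNN can consume, can be sketched with a naive windowed DFT. Real pipelines would use an FFT library; this toy version uses only the standard library, and the frame length and hop size are illustrative choices, not the paper's settings.

```python
import cmath
import math

def spectrogram(signal, frame_len=64, hop=32):
    # Naive magnitude spectrogram: slice the 1D signal into
    # overlapping Hann-windowed frames and take a DFT of each,
    # yielding a (num_frames x num_bins) time-frequency image.
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + n]
                 * (0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1)))
                 for n in range(frame_len)]
        spectrum = []
        for k in range(frame_len // 2 + 1):          # keep non-negative bins
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                    for n in range(frame_len))
            spectrum.append(abs(s))
        frames.append(spectrum)
    return frames
```

Stacking the per-frame spectra row by row gives the "image" that is then fed to a pre-trained CNN such as VGG16 after resizing and channel replication.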

    c

    In this article, we describe and interpret a set of acoustic and linguistic features that characterise emotional/emotion-related user states, confined to the one database processed: four classes in a German corpus of children interacting with a pet robot. To this end, we collected a very large feature vector consisting of more than 4000 features extracted at different sites. We performed extensive feature selection (Sequential Forward Floating Search) for seven acoustic and four linguistic types of features, ending up with a small number of 'most important' features which we try to interpret by discussing the impact of different feature and extraction types. We establish different measures of impact and discuss the mutual influence of acoustics and linguistics.
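The Sequential Forward Floating Search mentioned above alternates a greedy forward step with a conditional backward step. A minimal sketch, with a caller-supplied `score` criterion standing in for the classifier-based evaluation the paper would use:

```python
def sffs(features, score, target_size):
    # Sequential Forward Floating Search sketch: greedily add the
    # feature that most improves the score, then conditionally drop
    # previously chosen features whose removal improves it further
    # (the "floating" backward step), until target_size is reached.
    selected = []
    while len(selected) < target_size:
        # forward step: add the best remaining feature
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # floating backward step: never drop the feature just added,
        # which is the classic guard against add/drop cycles
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected[:-1]):
                reduced = [g for g in selected if g != f]
                if score(reduced) > score(selected):
                    selected = reduced
                    improved = True
                    break
    return selected
```

With 4000+ candidate features, each `score` call would itself be a cross-validated classifier run, which is why such selections are computationally expensive.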

    Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge

    More than a decade has passed since research on automatic recognition of emotion from speech became a new field of research alongside its 'big brothers', speech and speaker recognition. This article attempts to provide a short overview of where we are today, how we got there, and what this can tell us about where to go next and how we might get there. In the first part, we address the basic phenomenon, reflecting on the last fifteen years and commenting on databases, modelling and annotation, the unit of analysis, and prototypicality. We then shift to automatic processing, including discussions of features, classification, robustness, evaluation, and implementation and system integration. From there we move to the first comparative challenge on emotion recognition from speech, the INTERSPEECH 2009 Emotion Challenge, organised by (some of) the authors, covering the Challenge's database, Sub-Challenges, participants and their approaches, the winners, and the fusion of results, as well as the actual lessons learnt, before we finally address the ever-lasting problems and promising future attempts. (C) 2011 Elsevier B.V. All rights reserved. Schuller B., Batliner A., Steidl S., Seppi D., ''Recognising realistic emotions and affect in speech: state of the art and lessons learnt from the first challenge'', Speech Communication, vol. 53, no. 9-10, pp. 1062-1087, November 2011.