
    Automated Voice Pathology Discrimination from Continuous Speech Benefits from Analysis by Phonetic Context

    In contrast to previous studies that look only at discriminating pathological voice from normal voice, in this study we focus on the discrimination between cases of spasmodic dysphonia (SD) and vocal fold palsy (VP) using automated analysis of speech recordings. The hypothesis is that discrimination will be enhanced by studying continuous speech, since the different pathologies are likely to have different effects in different phonetic contexts. We collected audio recordings of isolated vowels and of a read passage from 60 patients diagnosed with SD (N=38) or VP (N=22). Baseline classifiers on features extracted from the recordings taken as a whole gave a cross-validated unweighted average recall of up to 75% for discriminating the two pathologies. We used an automated method to divide the read passage into phone-labelled regions and built classifiers for each phone. Results show that the discriminability of the pathologies varied with phonetic context, as predicted. Since different phone contexts provide different information about the pathologies, classification is improved by fusing the phone predictions, achieving a classification accuracy of 83%. The work has implications for the differential diagnosis of voice pathologies and contributes to a better understanding of their impact on speech.
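    To make the per-phone classification and late-fusion idea above concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes phone-labelled feature vectors are already available, trains one classifier per phone label, and fuses the per-phone posterior probabilities by simple averaging. All names and the choice of logistic regression are illustrative.

    # Minimal sketch of per-phone classification with late fusion
    # (illustrative only; feature extraction and phone alignment are assumed done).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_per_phone(features_by_phone):
        """features_by_phone: {phone: (X, y)} with X of shape (n, d) and y in {0, 1}."""
        return {phone: LogisticRegression(max_iter=1000).fit(X, y)
                for phone, (X, y) in features_by_phone.items()}

    def fuse_predictions(models, recording_by_phone):
        """recording_by_phone: {phone: X}, features of one recording grouped by phone.
        Averages the per-phone posterior probabilities of the pathological class."""
        scores = [models[p].predict_proba(X)[:, 1].mean()
                  for p, X in recording_by_phone.items() if p in models]
        return float(np.mean(scores))  # fused score in [0, 1]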

    Automated voice pathology discrimination from audio recordings benefits from phonetic analysis of continuous speech

    In this paper we evaluate the hypothesis that automated methods for diagnosis of voice disorders from speech recordings would benefit from contextual information found in continuous speech. Rather than basing a diagnosis on how disorders affect the average acoustic properties of the speech signal, the idea is to exploit the possibility that different disorders will cause different acoustic changes within different phonetic contexts. Any differences in the pattern of effects across contexts would then provide additional information for discrimination of pathologies. We evaluate this approach using two complementary studies: the first uses a short phrase which is automatically annotated using a phonetic transcription, the second uses a long reading passage which is automatically annotated from text. The first study uses a single sentence recorded from 597 speakers in the Saarbrücken Voice Database to discriminate structural from neurogenic disorders. The results show that discrimination performance for these broad pathology classes improves from 59% to 67% unweighted average recall when classifiers are trained for each phone label and the results fused. Although the phonetic contexts improved discrimination, the overall sensitivity and specificity of the method seem insufficient for clinical application. We hypothesise that this is because of the limited contexts in the speech audio and the heterogeneous nature of the disorders. In the second study we address these issues by processing recordings of a long reading passage obtained from clinical recordings of 60 speakers with either Spasmodic Dysphonia or Vocal Fold Paralysis. We show that discrimination performance increases from 80% to 87% unweighted average recall if classifiers are trained for each phone-labelled region and predictions fused. We also show that the sensitivity and specificity of a diagnostic test with this performance is similar to other diagnostic procedures in clinical use. In conclusion, the studies confirm that the exploitation of contextual differences in the way disorders affect speech improves automated diagnostic performance, and that automated methods for phonetic annotation of reading passages are robust enough to extract useful diagnostic information.
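    Both studies report unweighted average recall (UAR) together with sensitivity and specificity. A minimal sketch of how these figures can be computed from fused binary decisions is shown below, assuming scikit-learn; the label encoding and the example arrays are purely illustrative.

    # Sketch: unweighted average recall (UAR), sensitivity and specificity
    # from binary labels and fused predictions (assumed already available).
    import numpy as np
    from sklearn.metrics import recall_score, confusion_matrix

    y_true = np.array([0, 0, 1, 1, 1, 0])   # e.g. 0 = structural, 1 = neurogenic (illustrative)
    y_pred = np.array([0, 1, 1, 1, 0, 0])   # fused per-phone decisions (illustrative)

    uar = recall_score(y_true, y_pred, average="macro")     # mean of per-class recalls
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"UAR={uar:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")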

    Identification of voice pathologies in an elderly population

    Ageing is associated with an increased risk of developing diseases, including a greater predisposition to conditions such as sepsis. With ageing, human voices also undergo a natural degradation, gauged by alterations in hoarseness, breathiness, articulatory ability, and speaking rate. Perceptual evaluation is widely used to assess speech and voice impairments despite its high subjectivity. This dissertation proposes a new method for detecting and identifying voice pathologies by exploring acoustic parameters of continuous speech signals in the elderly population. Additionally, a study of the influence of gender and age on the performance of voice pathology detection systems is conducted. The study included 44 subjects older than 60 years, with the pathologies Dysphonia, Functional Dysphonia, and Spasmodic Dysphonia. From the dataset created with these settings, two gender-dependent subsets were derived, one with only female samples and the other with only male samples. The system developed used three feature selection methods and five machine learning algorithms to classify the voice signal according to the presence of pathology. The binary classification, which consisted of voice pathology detection, reached an accuracy of 85.1% ± 5.1% for the dataset without gender division, 83.7% ± 7.0% for the male dataset, and 87.4% ± 4.2% for the female dataset. The multiclass classification, which consisted of identifying the specific pathology, reached an accuracy of 69.0% ± 5.1% for the dataset without gender division, 63.7% ± 5.4% for the male dataset, and 80.6% ± 8.1% for the female dataset. The results revealed that features describing fluency are important and discriminating in these types of systems, and that Random Forest was the most effective machine learning algorithm for both binary and multiclass classification. The proposed model proves promising for detecting pathological voices and identifying the underlying pathology in an elderly population, with improved performance when a gender division is performed.
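    A minimal sketch of a comparable pipeline, assuming a precomputed acoustic feature matrix, is given below. It combines one feature-selection method with a Random Forest classifier under cross-validation; the concrete configuration (SelectKBest with an F-test, 200 trees, 5-fold CV) and the placeholder data are illustrative and not the dissertation's exact setup.

    # Sketch: feature selection + Random Forest with cross-validation
    # on a precomputed acoustic feature matrix (illustrative configuration).
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X = np.random.rand(44, 120)              # placeholder: 44 speakers x 120 features
    y = np.random.randint(0, 2, size=44)     # placeholder: pathology present / absent

    pipeline = Pipeline([
        ("select", SelectKBest(f_classif, k=20)),
        ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ])
    scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
    print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")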

    Characterization of Healthy and Pathological Voice Through Measures Based on Nonlinear Dynamics

    In this paper, we propose to quantify the quality of recorded voice through objective nonlinear measures. Quantification of speech signal quality has traditionally been carried out with linear techniques, since the classical model of voice production is a linear approximation. Nevertheless, nonlinear behaviors in the voice production process have been shown. This paper studies the usefulness of six nonlinear chaotic measures based on nonlinear dynamics theory in the discrimination between two levels of voice quality: healthy and pathological. The studied measures are the first- and second-order Rényi entropies, the correlation entropy, and the correlation dimension, obtained from the speech signal in the phase-space domain. The values of the first minimum of the mutual information function and the Shannon entropy were also studied. Two databases were used to assess the usefulness of the measures: a multi-quality database composed of four levels of voice quality (healthy voice and three levels of pathological voice), and a commercial database (MEEI Voice Disorders) composed of two levels of voice quality (healthy and pathological voices). A classifier based on standard neural networks was implemented in order to evaluate the proposed measures. Global success rates of 82.47% (multi-quality database) and 99.69% (commercial database) were obtained.
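    Two of the quantities mentioned above can be sketched directly from a sampled signal: the first minimum of the auto mutual information function (commonly used to choose the embedding delay) and the correlation sum at a given radius, which underlies the correlation dimension. The code below is a simplified illustration, not the authors' implementation; the histogram bin count, embedding dimension, delay and radius are arbitrary example values.

    # Sketch: first minimum of the auto mutual information function and the
    # correlation sum of a delay embedding (building block of the correlation dimension).
    import numpy as np

    def mutual_information(x, y, bins=32):
        """Histogram-based mutual information (in nats) between two samples."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

    def first_mi_minimum(signal, max_lag=100):
        """Lag of the first local minimum of the auto mutual information function."""
        mi = [mutual_information(signal[:-lag], signal[lag:]) for lag in range(1, max_lag)]
        for i in range(1, len(mi) - 1):
            if mi[i] < mi[i - 1] and mi[i] < mi[i + 1]:
                return i + 1
        return int(np.argmin(mi)) + 1

    def correlation_sum(signal, dim=3, delay=5, radius=0.1):
        """Fraction of embedded point pairs closer than 'radius' (correlation sum C(r))."""
        n = len(signal) - (dim - 1) * delay
        emb = np.stack([signal[i * delay:i * delay + n] for i in range(dim)], axis=1)
        dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        mask = ~np.eye(n, dtype=bool)                  # exclude self-pairs
        return float((dists[mask] < radius).mean())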

    A survey on perceived speaker traits: personality, likability, pathology, and the first challenge

    The INTERSPEECH 2012 Speaker Trait Challenge aimed at a unified test-bed for perceived speaker traits – the first challenge of this kind: personality in the five OCEAN personality dimensions, likability of speakers, and intelligibility of pathologic speakers. In the present article, we give a brief overview of the state of the art in these three fields of research and describe the three sub-challenges in terms of the challenge conditions, the baseline results provided by the organisers, and a new openSMILE feature set, which has been used for computing the baselines and which has been provided to the participants. Furthermore, we summarise the approaches and the results presented by the participants to show the various techniques that are currently applied to solve these classification tasks.
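    Baselines of this kind are typically linear-kernel support vector machines trained on large vectors of acoustic functionals. The sketch below assumes the functionals have already been extracted into a matrix and shows such a classifier evaluated with UAR; the dimensionality, complexity value and random data are placeholders, not the official challenge configuration.

    # Sketch: linear-kernel SVM on precomputed acoustic functionals,
    # evaluated with unweighted average recall (illustrative values only).
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import recall_score

    X_train = np.random.rand(200, 6000)      # placeholder: openSMILE-style functionals
    y_train = np.random.randint(0, 2, 200)   # placeholder trait labels
    X_test = np.random.rand(50, 6000)
    y_test = np.random.randint(0, 2, 50)

    model = make_pipeline(StandardScaler(), LinearSVC(C=1e-3, max_iter=10000))
    model.fit(X_train, y_train)
    uar = recall_score(y_test, model.predict(X_test), average="macro")
    print(f"UAR: {uar:.3f}")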

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The MAVEBA Workshop is held on a biennial basis, and its proceedings collect the scientific papers presented as both oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images as a support to the clinical diagnosis and classification of vocal pathologies.

    Infant Cry Signal Processing, Analysis, and Classification with Artificial Neural Networks

    As a special type of speech and environmental sound, infant cry has been a growing research area over the past two decades, covering infant cry reason classification, pathological infant cry identification, and infant cry detection. In this dissertation, we build a new dataset, explore new feature extraction methods, and propose novel classification approaches to improve infant cry classification accuracy and identify diseases by learning from infant cry signals. We propose a method that generates weighted prosodic features combined with acoustic features for a deep learning model to improve the performance of asphyxiated infant cry identification. The combined feature matrix captures the diversity of variations within infant cries, and the result outperforms all other related studies on asphyxiated infant cry classification. We propose a non-invasive, fast method of using infant cry signals with convolutional neural network (CNN) based age classification to diagnose abnormality of infant vocal tract development as early as four months of age. Experiments discover the pattern and tendency of vocal tract changes and predict abnormality of the infant vocal tract by classifying the cry signals into a younger age category. We propose an approach of generating a hybrid feature set and using prior knowledge in a multi-stage CNN model for robust infant sound classification. The dominant and auxiliary features within the set help enlarge the coverage while keeping a good resolution for modeling the diversity of variations within infant sound, and the experimental results give encouraging improvements on two related databases. We propose an approach of graph convolutional networks (GCN) with transfer learning for robust infant cry reason classification. Non-fully-connected graphs based on the similarities among the relevant nodes are built to consider the short-term and long-term effects of infant cry signals related to inner-class and inter-class messages. With as little as 20% of the training data labeled, our model outperforms the CNN model trained with 80% labeled data in both supervised and semi-supervised settings. Lastly, we apply mel-spectrogram decomposition to infant cry classification and propose a fusion method to further improve classification performance.
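    The mel-spectrogram-plus-CNN pipeline that recurs in this work can be sketched as follows, assuming librosa and PyTorch. The feature extraction parameters and the tiny network below are illustrative only and do not reproduce the dissertation's architectures.

    # Sketch: mel-spectrogram input to a small CNN classifier (illustrative architecture).
    import librosa
    import numpy as np
    import torch
    import torch.nn as nn

    def melspec(path, sr=16000, n_mels=64):
        """Log-mel spectrogram of an audio file, shape (n_mels, frames)."""
        y, sr = librosa.load(path, sr=sr)
        m = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        return librosa.power_to_db(m, ref=np.max)

    class CryCNN(nn.Module):
        def __init__(self, n_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):                  # x: (batch, 1, n_mels, frames)
            return self.classifier(self.features(x).flatten(1))

    # Usage with a dummy input (hypothetical 5-class task):
    logits = CryCNN(5)(torch.randn(1, 1, 64, 200))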

    Models and analysis of vocal emissions for biomedical applications: 5th International Workshop: December 13-15, 2007, Firenze, Italy

    The MAVEBA Workshop is held on a biennial basis, and its proceedings collect the scientific papers presented as both oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images as a support to the clinical diagnosis and classification of vocal pathologies. The Workshop has the sponsorship of: Ente Cassa Risparmio di Firenze, COST Action 2103, Biomedical Signal Processing and Control Journal (Elsevier Eds.), and the IEEE Biomedical Engineering Soc. Special issues of international journals have been, and will be, published collecting selected papers from the conference.

    Supervised CNN strategies for optical image segmentation and classification in interventional medicine

    The analysis of interventional images is a topic of high interest for the medical-image analysis community. Such analysis may provide interventional-medicine professionals with both decision support and context awareness, with the final goal of improving patient safety. The aim of this chapter is to give an overview of some of the most recent approaches (up to 2018) in the field, with a focus on Convolutional Neural Networks (CNNs) for both segmentation and classification tasks. For each approach, summary tables report the dataset used, the anatomical region involved, and the performance achieved. Benefits and disadvantages of each approach are highlighted and discussed. Available datasets for algorithm training and testing, and commonly used performance metrics, are summarized to offer a source of information for researchers approaching the field of interventional-image analysis. The advancements in deep learning for medical-image analysis increasingly involve the interventional-medicine field. However, these advancements are undeniably slower than in other fields (e.g. preoperative-image analysis), and considerable work still needs to be done to provide clinicians with all possible support during interventional-medicine procedures.
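    Among the commonly used performance metrics summarized in chapters of this kind, the Dice coefficient is a standard choice for reporting CNN segmentation quality. A minimal sketch of its computation from binary masks follows; the example masks are arbitrary.

    # Sketch: Dice coefficient between a predicted and a ground-truth binary mask,
    # one of the metrics commonly reported for CNN-based segmentation.
    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        """pred, target: boolean or {0, 1} arrays of the same shape."""
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Example: a 4-pixel prediction overlapping a 6-pixel ground truth on 4 pixels.
    pred = np.zeros((4, 4), dtype=int)
    pred[1:3, 1:3] = 1
    target = np.zeros((4, 4), dtype=int)
    target[1:4, 1:3] = 1
    print(f"Dice = {dice_coefficient(pred, target):.3f}")   # 0.800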