14 research outputs found

    Voice pathologies: the most common features and classification tools

    Speech pathologies are quite common in society; however, the existing exams are invasive, making them uncomfortable for patients, and their outcome depends on the experience of the clinician who performs the assessment. Hence the need to develop non-invasive methods that allow objective and efficient analysis. With this need in mind, this work identifies the most promising features and classifiers. The selected features are jitter, shimmer, HNR, LPC, PLP, and MFCC, and the selected classifiers are CNN, RNN, and LSTM. The study ultimately aims to develop a medical decision-support device; this article already presents the system interface.
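
    As a reading aid only (not code from the article), the sketch below shows one plausible way to pair two of the listed ingredients: MFCC features extracted with librosa and a small LSTM classifier in PyTorch. The file path, layer sizes, and the VoiceLSTM class are illustrative assumptions rather than details from the paper.

        # Hypothetical sketch: MFCC features feeding a small LSTM classifier
        # (illustrative only; not the article's implementation).
        import librosa
        import torch
        import torch.nn as nn

        def mfcc_sequence(path, n_mfcc=13):
            """Load a recording and return its MFCC frames as (frames, n_mfcc)."""
            y, sr = librosa.load(path, sr=None)                # keep native sampling rate
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            return torch.tensor(mfcc.T, dtype=torch.float32)

        class VoiceLSTM(nn.Module):
            """Binary healthy-vs-pathological classifier over MFCC frame sequences."""
            def __init__(self, n_mfcc=13, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=n_mfcc, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):                              # x: (batch, frames, n_mfcc)
                _, (h, _) = self.lstm(x)                       # h: (1, batch, hidden)
                return torch.sigmoid(self.head(h[-1]))         # probability of pathology

        # Usage (illustrative):
        # prob = VoiceLSTM()(mfcc_sequence("sample.wav").unsqueeze(0))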

    Classification of three pathological voices based on specific features groups using support vector machine

    Determining and classifying pathological human sounds is still an interesting area of research in the field of speech processing. This paper explores different methods of voice feature extraction, namely Mel frequency cepstral coefficients (MFCCs), zero-crossing rate (ZCR), and discrete wavelet transform (DWT). A comparison is made between these methods in order to identify their ability to classify any input sound as a normal or pathological voice using a support vector machine (SVM). Firstly, the voice signal is processed and filtered; then vocal features are extracted using the proposed methods; and finally six groups of features are used to classify the voice data as healthy, hyperkinetic dysphonia, hypokinetic dysphonia, or reflux laryngitis using separate classification processes. The classification results reach 100% accuracy using the MFCC and kurtosis feature group, while the other classification accuracies range between about 60% and 97%. The wavelet features provide very good classification results in comparison with other common voice features such as the MFCC and ZCR features. This paper aims to improve the diagnosis of voice disorders without the need for surgical interventions and endoscopic procedures, which consume time and burden patients. In addition, the comparison between the proposed feature extraction methods offers a good reference for further research in the voice classification area.
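
    Purely as an illustration of the best-performing feature group (MFCC statistics plus kurtosis) combined with an SVM, the sketch below builds such a feature vector with librosa and scipy and trains a scikit-learn SVC. The file paths, label names, and kernel choice are assumptions; the paper's own feature grouping and preprocessing may differ.

        # Hypothetical sketch of an MFCC + kurtosis feature group with an SVM
        # (illustrative only; not the paper's implementation or data).
        import numpy as np
        import librosa
        from scipy.stats import kurtosis
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def mfcc_kurtosis_features(path, n_mfcc=13):
            """Per-coefficient MFCC means plus the kurtosis of each coefficient track."""
            y, sr = librosa.load(path, sr=None)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
            return np.concatenate([mfcc.mean(axis=1), kurtosis(mfcc, axis=1)])

        # paths and labels are placeholders; labels would be healthy, hyperkinetic
        # dysphonia, hypokinetic dysphonia, or reflux laryngitis, as in the paper.
        # X = np.stack([mfcc_kurtosis_features(p) for p in paths])
        # clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)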

    Adaptation of Speaker and Speech Recognition Methods for the Automatic Screening of Speech Disorders using Machine Learning

    This PhD thesis presents methods for exploiting the non-verbal communication of individuals suffering from specific diseases or health conditions, with the aim of screening for them automatically. More specifically, we employed one of the pillars of non-verbal communication, paralanguage, to explore techniques that could be utilized to model the speech of subjects. Paralanguage is a non-lexical component of communication that relies on intonation, pitch, speaking rate, and other cues, which can be processed and analyzed automatically. This is called Computational Paralinguistics, which can be defined as the study of modeling non-verbal latent patterns within the speech of a speaker by means of computational algorithms; these patterns go beyond the linguistic approach. By means of machine learning, we present models from distinct paralinguistic and pathological-speech scenarios that are capable of automatically estimating the health status associated with diseases and conditions such as Alzheimer's, Parkinson's, and clinical depression, among others.

    Detection of laryngeal pathologies through the analysis of voice signals using Deep Neural Networks

    Speech is the main natural mechanism of communication between human beings. The natural voice production and transmission system, the main element of speech, is compromised by the onset of laryngeal pathologies. This research addresses the application of classifiers based on Deep Neural Networks (DNNs) to discriminate between healthy voice signals and voices affected by the organofunctional laryngeal pathologies Reinke's edema, carcinoma, leukoplakia, and polyps, and by vocal fold paralysis, which is of neurological origin. The proposed methodology is based on the analysis of the dynamic behavior of the evaluated voice signal, dispensing with the measures and techniques commonly used for feature extraction. The use of DNNs with 4, 5, and 6 hidden layers of 200 neurons activated by the Rectified Linear Unit (ReLU) function was investigated, with one sigmoid-activated neuron in the output layer and an input layer that receives the 400 samples composing each segment extracted from the evaluated voice signal. In total, 7 learning algorithms, using binary cross-entropy as the cost function, were evaluated individually for training each DNN. The voice signals used in this research were taken from the Saarbruecken Voice Database (SVD), developed in Germany. From this database, 640 signals of the sustained vowel /a/ were selected: 320 healthy voices and 320 voices affected by laryngeal pathologies. The discrimination was carried out by classes: the healthy class; the pathologies class, composed of all pathological signals selected from the SVD; the class of voices affected only by organofunctional laryngeal pathologies; and, finally, the class of voice signals affected only by vocal fold paralysis, constituting the category of laryngeal pathology of neurological origin. Four classification cases were considered among the selected voice signals: healthy vs. pathologies, healthy vs. organofunctional pathologies, healthy vs. vocal fold paralysis, and organofunctional pathologies vs. vocal fold paralysis. For each discrimination case, 28 classifiers were implemented and evaluated by means of the F1 score, the Matthews correlation coefficient (MCC) (applied only to the discrimination between the pathological classes), and the metrics of accuracy, sensitivity, and specificity. In addition, the effects of including overlap rates (0%, 25%, 50%, and 75%) applied during segment extraction were investigated. The k-fold cross-validation technique, with k = 10, was used to select the training and test data sets. The results indicate that the proposed method performs best in discriminating between healthy voices and voices affected by vocal fold paralysis, based on the detection of voice signal segments without overlap, using the classifier with 4 hidden layers trained with the Adadelta learning algorithm, which after cross-validation achieved an accuracy of 88.68 ± 3.04%, a sensitivity of 92.04 ± 5.82%, a specificity of 85.33 ± 6.53%, and an F1 score of 0.89. It is concluded that it is possible to discriminate between healthy voices and voices affected by laryngeal pathologies based on the analysis of the dynamic behavior of voice signal segments using DNNs.
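
    The abstract specifies the network fairly concretely: 400-sample input segments, 4 to 6 hidden layers of 200 ReLU units, one sigmoid output neuron, binary cross-entropy loss, and Adadelta among the learning algorithms. As a reading aid only (not the authors' code), the sketch below reproduces that description in PyTorch; the data pipeline, segmentation, and k-fold protocol are omitted.

        # Sketch of the DNN described above (not the authors' implementation):
        # 400-sample input, 4 hidden ReLU layers of 200 units, sigmoid output.
        import torch
        import torch.nn as nn

        def build_segment_classifier(n_hidden_layers=4, units=200, segment_len=400):
            layers, in_dim = [], segment_len
            for _ in range(n_hidden_layers):
                layers += [nn.Linear(in_dim, units), nn.ReLU()]
                in_dim = units
            layers += [nn.Linear(in_dim, 1), nn.Sigmoid()]
            return nn.Sequential(*layers)

        model = build_segment_classifier()
        loss_fn = nn.BCELoss()                          # binary cross-entropy, as in the abstract
        optimizer = torch.optim.Adadelta(model.parameters())
        # The training loop over voice-signal segments, the overlap rates, and the
        # 10-fold cross-validation described in the abstract are omitted here.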

    Continuous Emotion Prediction from Speech: Modelling Ambiguity in Emotion

    There is growing interest in emotion research in modelling perceived emotion labelled as intensities along affect dimensions such as arousal and valence. These labels are typically obtained from multiple annotators, each with their own individual perception of emotional speech. Consequently, emotion prediction models that incorporate variation in individual perceptions as ambiguity in the emotional state would be more realistic. This thesis develops the modelling framework necessary to achieve continuous prediction of ambiguous emotional states from speech. Besides emotion labels, the feature space distribution and encoding are an integral part of the prediction system. The first part of this thesis examines the limitations of current low-level feature distributions and their minimalistic statistical descriptions. Specifically, front-end paralinguistic acoustic features are reflective of speech production mechanisms, whereas discriminatively learnt features, although they have frequently outperformed acoustic features in emotion prediction tasks, provide no insight into their physical significance. One of the contributions of this thesis is the development of a framework that can modify the acoustic feature representation based on emotion label information. Another investigation in this thesis indicates that emotion perception is language-dependent and, in turn, helped develop a framework for cross-language emotion prediction. Furthermore, this investigation supported the hypothesis that emotion perception is highly individualistic and is better modelled as a distribution rather than a point estimate, so as to encode information about the ambiguity in the perceived emotion. Following this observation, the thesis proposes measures to quantify the appropriateness of distribution types for modelling ambiguity in dimensional emotion labels, which are then employed to compare well-known bounded parametric distributions. These analyses led to the conclusion that the beta distribution is the most appropriate parametric model of ambiguity in emotion labels. Finally, the thesis focuses on developing a deep learning framework for continuous emotion prediction as a temporal series of beta distributions, examining various parameterizations of the beta distribution as well as loss functions. Furthermore, the distribution over the parameter space is examined, and priors obtained from kernel density estimation are employed to shape the posteriors over the parameter space, which significantly improves valence ambiguity predictions. The proposed frameworks and methods have been extensively evaluated on multiple state-of-the-art databases, and the results demonstrate both the viability of predicting ambiguous emotion states and the validity of the proposed systems.
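
    The final contribution predicts emotion as a temporal series of beta distributions. As an illustration only (not the thesis code), the sketch below shows one common way to realize such an output in PyTorch: the network emits two positive concentration parameters per frame and is trained with the beta negative log-likelihood. The feature dimension, layer sizes, and names are assumptions.

        # Hypothetical sketch: predicting a beta distribution per frame and training
        # with its negative log-likelihood (not the thesis implementation).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BetaEmotionHead(nn.Module):
            """Maps frame features to the two concentration parameters of a beta distribution."""
            def __init__(self, feat_dim=88, hidden=64):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, 2))

            def forward(self, x):                       # x: (batch, frames, feat_dim)
                params = F.softplus(self.net(x)) + 1e-3 # keep parameters strictly positive
                alpha, beta = params.unbind(dim=-1)
                return torch.distributions.Beta(alpha, beta)

        def beta_nll(dist, labels, eps=1e-4):
            """Negative log-likelihood of labels rescaled to the open interval (0, 1)."""
            return -dist.log_prob(labels.clamp(eps, 1.0 - eps)).mean()

        # dist = BetaEmotionHead()(features)            # features: (batch, frames, 88)
        # loss = beta_nll(dist, valence_labels)         # labels in [0, 1], (batch, frames)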

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.

    IberSPEECH 2020: XI Jornadas en Tecnología del Habla and VII Iberian SLTech

    IberSPEECH2020 is a two-day event bringing together the best researchers and practitioners in speech and language technologies in Iberian languages to promote interaction and discussion. The organizing committee has planned a wide variety of scientific and social activities, including technical paper presentations, keynote lectures, presentations of projects, laboratories, and recent PhD theses, discussion panels, a round table, and awards for the best thesis and papers. The program of IberSPEECH2020 includes a total of 32 contributions distributed among 5 oral sessions, a PhD session, and a projects session. To ensure the quality of all the contributions, each submitted paper was reviewed by three members of the scientific review committee. All the papers of the conference will be accessible through the International Speech Communication Association (ISCA) Online Archive. Paper selection was based on the scores and comments provided by the scientific review committee, which includes 73 researchers from different institutions (mainly from Spain and Portugal, but also from France, Germany, Brazil, Iran, Greece, Hungary, the Czech Republic, Ukraine, and Slovenia). Furthermore, an extension of selected papers will be published as a special issue of the journal Applied Sciences, "IberSPEECH 2020: Speech and Language Technologies for Iberian Languages", published by MDPI with full open access. In addition to the regular paper sessions, the IberSPEECH2020 scientific program features the ALBAYZIN evaluation challenge session.

    Principled methods for mixtures processing

    This document is my thesis for obtaining the habilitation à diriger des recherches, the French diploma that is required to fully supervise Ph.D. students. It summarizes the research I did over the last 15 years and also lays out the short-term research directions and applications I want to investigate. Regarding my past research, I first describe my work on probabilistic audio modeling, including the separation of Gaussian and α-stable stochastic processes. Then, I mention my work on deep learning applied to audio, which rapidly turned into a large effort of community service. Finally, I present my contributions to machine learning, with work on hardware compressed sensing and probabilistic generative models. My research programme involves a theoretical part that revolves around probabilistic machine learning, and an applied part that concerns the processing of time series arising in both audio and the life sciences.
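
    For the Gaussian case mentioned above, separation classically reduces to Wiener filtering: with independent zero-mean Gaussian sources, the minimum mean-square-error estimate of each source is the mixture scaled by that source's share of the total variance. The NumPy sketch below illustrates only this textbook case, with synthetic variance profiles; it is not code from the thesis.

        # Textbook Wiener-filter separation of independent zero-mean Gaussian sources
        # (illustrative of the Gaussian case only; not code from the thesis).
        import numpy as np

        def wiener_separate(mixture, variances):
            """MMSE source estimates given the mixture and per-source variances.

            mixture:   shape (..., n_bins), the observed sum of the sources
            variances: shape (n_sources, ..., n_bins), per-source power estimates
            """
            total = variances.sum(axis=0, keepdims=True) + 1e-12   # avoid division by zero
            gains = variances / total                              # Wiener gains in [0, 1]
            return gains * mixture[None, ...]                      # one estimate per source

        # Synthetic example with two sources of known, assumed variance profiles.
        rng = np.random.default_rng(0)
        v = np.stack([np.full(1024, 4.0), np.full(1024, 1.0)])
        sources = rng.normal(scale=np.sqrt(v))
        estimates = wiener_separate(sources.sum(axis=0), v)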

    Tracking the Temporal-Evolution of Supernova Bubbles in Numerical Simulations

    The study of low-dimensional, noisy manifolds embedded in a higher-dimensional space has been extremely useful in many applications, from the chemical analysis of multi-phase flows to simulations of galactic mergers. Building a probabilistic model of the manifolds has helped in describing their essential properties and how they vary in space. However, when the manifold is evolving through time, a joint spatio-temporal modelling is needed in order to fully comprehend its nature. We propose a first-order Markovian process that propagates the spatial probabilistic model of a manifold at a fixed time to its adjacent temporal stages. The proposed methodology is demonstrated using a particle simulation of an interacting dwarf galaxy to describe the evolution of a cavity generated by a Supernova.
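
    The abstract does not say which spatial probabilistic model is used, so purely to illustrate the idea of propagating the model at one time step to the next, the sketch below fits a Gaussian mixture to the particle positions of a snapshot and warm-starts the fit of the following snapshot from the previous parameters. The choice of a Gaussian mixture, the number of components, and all names are assumptions, not the paper's method.

        # Illustrative sketch only: first-order (Markovian) propagation of a spatial
        # mixture model between snapshots; a Gaussian mixture stands in for the
        # paper's unspecified probabilistic manifold model.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_snapshot(points, previous=None, n_components=8, seed=0):
            """Fit a GMM to one snapshot, initialized from the previous snapshot's fit."""
            if previous is None:
                gmm = GaussianMixture(n_components=n_components, random_state=seed)
            else:
                gmm = GaussianMixture(n_components=n_components,
                                      weights_init=previous.weights_,
                                      means_init=previous.means_,
                                      precisions_init=previous.precisions_,
                                      random_state=seed)
            return gmm.fit(points)

        # snapshots would be (n_particles, 3) position arrays, one per time step.
        # model = None
        # for pts in snapshots:
        #     model = fit_snapshot(pts, previous=model)   # propagate model through time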