24 research outputs found

    Automatic Detection and Assessment of Dysarthric Speech Using Prosodic Information

    Get PDF
    Master's thesis, Department of Linguistics, College of Humanities, Seoul National University, August 2020. Advisor: Minhwa Chung.

    Speech impairment is one of the earliest cues of neurological or degenerative disorders. Individuals with Parkinson's disease, cerebral palsy, amyotrophic lateral sclerosis, and multiple sclerosis, among others, are often diagnosed with dysarthria, a group of speech disorders that mainly affects the articulatory muscles and eventually leads to severe misarticulation. However, impairments in the suprasegmental domain are also present, and previous studies have shown that the prosodic patterns of speakers with dysarthria differ from the prosody of healthy speakers. In a clinical setting, prosody-based analysis of dysarthric speech can help diagnose the presence of dysarthria. There is therefore a need to determine not only how prosody is affected by dysarthria, but also which aspects of prosody are most affected and how prosodic impairments change with the severity of the disorder.

    In the current study, several prosodic features related to pitch, voice quality, rhythm, and speech rate are used as features for detecting dysarthria in a given speech signal. A variety of feature selection methods are applied to determine which set of features is optimal for accurate detection. The selected prosodic features are then used as input to machine-learning-based classifiers, whose performance is assessed with accuracy, precision, recall, and F1-score. Furthermore, we examine the usefulness of prosodic measures for assessing different levels of severity (mild, moderate, severe). Finally, as collecting impaired speech data can be difficult, we also implement cross-language classifiers, where both Korean and English data are used for training but only one target language is used for testing.

    Results suggest three things. First, compared to using Mel-frequency cepstral coefficients (MFCCs) alone, including prosodic measurements improves classifier accuracy for both the Korean and English datasets. Second, prosodic information is especially useful for severity assessment: English saw relative accuracy improvements of 1.82% for detection and 20.6% for assessment, while the Korean dataset saw no improvement for detection but a relative improvement of 13.6% for assessment. Third, the cross-language experiments showed a relative improvement of up to 4.12% over training on a single language. Certain prosodic impairments, such as those of pitch and duration, appear to be language-independent, so limited training sets for individual languages may be supplemented with data from other languages.

    Contents:
    1. Introduction
       1.1. Dysarthria
       1.2. Impaired Speech Detection
       1.3. Research Goals & Outline
    2. Background Research
       2.1. Prosodic Impairments
            2.1.1. English
            2.1.2. Korean
       2.2. Machine Learning Approaches
    3. Database
       3.1. English-TORGO
       3.2. Korean-QoLT
    4. Methods
       4.1. Prosodic Features
            4.1.1. Pitch
            4.1.2. Voice Quality
            4.1.3. Speech Rate
            4.1.4. Rhythm
       4.2. Feature Selection
       4.3. Classification Models
            4.3.1. Random Forest
            4.3.2. Support Vector Machine
            4.3.3. Feed-Forward Neural Network
       4.4. Mel-Frequency Cepstral Coefficients
    5. Experiment
       5.1. Model Parameters
       5.2. Training Procedure
            5.2.1. Dysarthria Detection
            5.2.2. Severity Assessment
            5.2.3. Cross-Language
    6. Results
       6.1. TORGO
            6.1.1. Dysarthria Detection
            6.1.2. Severity Assessment
       6.2. QoLT
            6.2.1. Dysarthria Detection
            6.2.2. Severity Assessment
       6.3. Cross-Language
    7. Discussion
       7.1. Linguistic Implications
       7.2. Clinical Applications
    8. Conclusion
    References
    Appendix
    Abstract in Korean
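    The abstract only outlines the pipeline, so the following is a minimal sketch of how such a system could be wired together in Python with parselmouth (a Praat interface) and scikit-learn. The feature set, hyperparameters, and synthetic stand-in data are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np
import parselmouth                            # Python interface to Praat
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

def prosodic_features(wav_path: str) -> np.ndarray:
    """A handful of pitch/intensity/duration measures; the thesis uses many more."""
    snd = parselmouth.Sound(wav_path)
    f0 = snd.to_pitch().selected_array["frequency"]
    voiced = f0[f0 > 0]                       # Praat marks unvoiced frames as 0 Hz
    intensity = snd.to_intensity().values.flatten()
    return np.array([
        voiced.mean(), voiced.std(),          # pitch level and variability
        len(voiced) / max(len(f0), 1),        # voiced-frame ratio (rhythm proxy)
        intensity.mean(), intensity.std(),    # loudness level and variability
        snd.get_total_duration(),             # utterance length (rate proxy)
    ])

# Synthetic stand-ins so the sketch runs without audio; with real data you
# would build X by calling prosodic_features() on each recording.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
y = rng.integers(0, 2, size=40)               # 1 = dysarthric, 0 = healthy

clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=4)),  # keep the 4 most informative features
    ("svm", SVC(kernel="rbf", C=1.0)),
])
pred = cross_val_predict(clf, X, y, cv=5)
print(classification_report(y, pred, digits=3))  # precision, recall, F1, accuracy
```

    The SelectKBest filter stands in for the several feature selection algorithms the thesis compares; wrapper or embedded methods would slot into the same Pipeline position, and the SVM could be swapped for the random forest or feed-forward network listed in the contents.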

    Deep Transfer Learning for Automatic Speech Recognition: Towards Better Generalization

    Full text link
    Automatic speech recognition (ASR) remains a challenging task for deep learning (DL): it requires large-scale training datasets and high computational and storage resources. Moreover, DL techniques, and machine learning (ML) approaches in general, assume that training and testing data come from the same domain, with the same input feature space and data distribution characteristics. This assumption, however, does not hold in some real-world artificial intelligence (AI) applications. There are also situations where gathering real data is challenging, expensive, or rare, so the data requirements of DL models cannot be met. Deep transfer learning (DTL) has been introduced to overcome these issues; it helps develop high-performing models using real datasets that are small or slightly different from, but related to, the training data. This paper presents a comprehensive survey of DTL-based ASR frameworks to shed light on the latest developments and to help academics and professionals understand current challenges. Specifically, after presenting the DTL background, a well-designed taxonomy is adopted to organize the state of the art. A critical analysis is then conducted to identify the limitations and advantages of each framework. Finally, a comparative study highlights the current challenges before deriving opportunities for future research.
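    As a concrete illustration of the core DTL pattern the survey covers (reusing source-domain knowledge when target data are scarce), here is a minimal, hedged PyTorch sketch: a pretrained encoder is frozen and only a small task head is trained. The encoder, dimensions, and toy data are stand-ins, not any specific framework from the survey.

```python
import torch
import torch.nn as nn

class TransferModel(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                  # freeze source-domain knowledge
        self.head = nn.Linear(feat_dim, n_classes)   # new target-task layer

    def forward(self, x):
        with torch.no_grad():                        # encoder stays fixed
            feats = self.encoder(x)
        return self.head(feats)

# Stand-in "pretrained" encoder; in ASR this would be a large pretrained
# acoustic model (e.g. a wav2vec-style network) rather than a toy MLP.
encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 128))
model = TransferModel(encoder, feat_dim=128, n_classes=30)
opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)  # only head params update
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 80)                              # e.g. 80-dim filterbank frames
y = torch.randint(0, 30, (16,))                      # e.g. phoneme/grapheme targets
for _ in range(5):                                   # a few illustrative steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3f}")
```

    Unfreezing the top encoder layers after a few epochs, or fine-tuning the whole network at a lower learning rate, are the usual variations on this pattern that such surveys taxonomize.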

    A review on deep-learning-based cyberbullying detection

    Get PDF
    Bullying is an undesirable behavior by others that harms an individual physically, mentally, or socially. Cyberbullying is a virtual form (e.g., textual or image-based) of bullying or harassment, also known as online bullying. Cyberbullying detection is a pressing need in today's world, as the prevalence of cyberbullying is continually growing, resulting in mental health issues. Conventional machine learning models were previously used to identify cyberbullying; however, current research demonstrates that deep learning (DL) surpasses traditional machine learning algorithms at this task for several reasons, including its ability to handle extensive data, classify text and images efficiently, and extract features automatically through hidden layers. This paper reviews the existing surveys and identifies the gaps in those studies. We also present a deep-learning-based defense ecosystem for cyberbullying detection, including data representation techniques and different DL-based models and frameworks. We critically analyze the existing DL-based cyberbullying detection techniques and identify their significant contributions and the future research directions they present. We also summarize the datasets in use, including the DL architectures applied and the tasks accomplished for each dataset. Finally, we present several challenges faced by existing researchers and the open issues to be addressed in the future.

    Modeling the speech disorder severity index with deep learning methods: from few-shot modeling to self-supervised learning via an entropic measure

    Get PDF
    People with head and neck cancers have speech difficulties after surgery or radiation therapy. It is important for practitioners to have a measure that reflects the severity of speech. To produce this measure, a perceptual study is commonly performed by a group of five to six clinical experts, a process that limits the use of this assessment in practice. An automatic measure similar to the severity index would therefore allow better follow-up of patients by making the measure easier to obtain. To build such a measure, we relied on a reading task, as classically performed in clinical assessment. We used the recordings of the C2SI-RUGBI corpus, which includes more than 100 people and represents about one hour of recording for modeling the severity index.
    In this PhD work, a review of state-of-the-art methods on speech, emotion, and speaker recognition using little data was undertaken. We then attempted to model severity using transfer learning and deep learning. Since those results were not usable, we turned to so-called few-shot techniques (learning from only a few examples). After promising first attempts at phoneme recognition, we obtained promising results for categorizing the severity of patients; nevertheless, exploiting these results in a medical application would require improvements. We therefore computed projections of the data from our corpus. As some score ranges were separable using acoustic parameters, we proposed a new entropic measurement method. It is based on speech representations self-supervised on the Librispeech corpus (the PASE+ model) and is inspired by the Inception Score (generally used in image processing to evaluate the quality of images generated by models). Our method produces a score similar to the severity index, with a Spearman correlation of 0.87 on the reading task of the cancer corpus. The advantage of our approach is that it does not require data from the C2SI-RUGBI corpus for training, so the whole corpus can be used to evaluate our system. The quality of these results has allowed us to consider clinical use through a tablet application; tests are underway at Larrey Hospital in Toulouse.
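    The entropic measure is described only at a high level, so here is a minimal sketch under stated assumptions: like the Inception Score, it exponentiates the mean KL divergence between per-frame class posteriors and their marginal. The posteriors would come from a classifier over self-supervised representations (PASE+ in the thesis); random stand-ins are used here so the script runs, and the resulting scores are correlated with perceptual severity via Spearman's rho.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.special import softmax

def entropic_score(posteriors: np.ndarray, eps: float = 1e-12) -> float:
    """exp(mean KL(p(y|x) || p(y))), the Inception Score recipe.

    posteriors: (n_frames, n_classes) array of per-frame class probabilities.
    """
    marginal = posteriors.mean(axis=0)                 # p(y) over all frames
    kl = np.sum(posteriors * (np.log(posteriors + eps)
                              - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

rng = np.random.default_rng(0)
# One posterior matrix (frames x classes) per speaker, plus perceptual ratings;
# both are synthetic placeholders standing in for the C2SI-RUGBI data.
scores = [entropic_score(softmax(rng.normal(size=(200, 40)), axis=1))
          for _ in range(30)]
perceptual = rng.normal(size=30)                       # stand-in severity indices
rho, p = spearmanr(scores, perceptual)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")       # thesis reports rho = 0.87
```

    Because the posterior model is trained on Librispeech rather than on the clinical corpus, every C2SI-RUGBI recording remains available for evaluation, which is the design choice the abstract highlights.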