57 research outputs found

    Multi-View Multi-Task Representation Learning for Mispronunciation Detection

    Full text link
    The disparity in phonology between a learner's native (L1) and target (L2) language poses a significant challenge for mispronunciation detection and diagnosis (MDD) systems. This challenge is further intensified by the lack of annotated L2 data. This paper proposes a novel MDD architecture that exploits multiple `views' of the same input data, assisted by auxiliary tasks, to learn more distinctive phonetic representations in a low-resource setting. Using mono- and multilingual encoders, the model learns multiple views of the input and captures the sound properties across diverse languages and accents. These encoded representations are further enriched by learning articulatory features in a multi-task setup. Our results on the L2-ARCTIC data outperform the SOTA models, with phoneme error rate reductions of 11.13% and 8.60% and absolute F1 score increases of 5.89% and 2.49% compared to the single-view mono- and multilingual systems, with a limited L2 dataset. Comment: 5 pages
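    As a rough sketch of the two ideas in this abstract (multiple encoder views of one utterance, plus an auxiliary articulatory task), the PyTorch fragment below fuses two frame-level encodings and trains a CTC phoneme head jointly with an articulatory-attribute head. All layer sizes, the attribute inventory, and the loss weight are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiViewMDD(nn.Module):
    def __init__(self, mono_dim=768, multi_dim=768, n_phonemes=40, n_artic=24):
        super().__init__()
        # Fuse the two "views" of the same utterance into one representation.
        self.fuse = nn.Linear(mono_dim + multi_dim, 512)
        self.phoneme_head = nn.Linear(512, n_phonemes + 1)  # +1 = CTC blank
        self.artic_head = nn.Linear(512, n_artic)           # auxiliary task

    def forward(self, mono_view, multi_view):
        # mono_view, multi_view: (batch, time, dim) frame-level encodings
        # produced by the mono- and multilingual encoders.
        h = torch.relu(self.fuse(torch.cat([mono_view, multi_view], dim=-1)))
        return self.phoneme_head(h), self.artic_head(h)

def joint_loss(phone_logits, artic_logits, phone_targets, artic_targets,
               in_lens, tgt_lens, lam=0.3):
    """CTC phoneme loss plus a weighted articulatory auxiliary loss."""
    ctc = nn.CTCLoss(blank=phone_logits.size(-1) - 1)
    log_probs = phone_logits.log_softmax(-1).transpose(0, 1)  # (T, B, C)
    loss_mdd = ctc(log_probs, phone_targets, in_lens, tgt_lens)
    loss_aux = nn.functional.binary_cross_entropy_with_logits(
        artic_logits, artic_targets)  # multi-label articulatory attributes
    return loss_mdd + lam * loss_aux
```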

    MISPRONUNCIATION DETECTION AND DIAGNOSIS IN MANDARIN ACCENTED ENGLISH SPEECH

    Get PDF
    This work presents the development, implementation, and evaluation of a Mispronunciation Detection and Diagnosis (MDD) system, with application to pronunciation evaluation of Mandarin-accented English speech. A comprehensive detection and diagnosis of errors in the Electromagnetic Articulography corpus of Mandarin-Accented English (EMA-MAE) was performed using the expert phonetic transcripts and an Automatic Speech Recognition (ASR) system. Articulatory features derived from the parallel kinematic data available in the EMA-MAE corpus were used to identify the most significant articulatory error patterns seen in L2 speakers during common mispronunciations. Using both acoustic and articulatory information, an ASR-based MDD system was built and evaluated across different feature combinations and Deep Neural Network (DNN) architectures. The MDD system captured mispronunciation errors with a detection accuracy of 82.4%, a diagnostic accuracy of 75.8%, and a false rejection rate of 17.2%. The results demonstrate the advantage of using articulatory features in revealing the significant contributors to mispronunciation as well as in improving the performance of MDD systems.
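    For readers unfamiliar with how figures like these are counted, a minimal sketch of the conventional MDD scoring scheme follows; the thesis's exact counting may differ, so treat these as the standard definitions rather than this work's.

```python
def mdd_scores(records):
    """records: list of (annotated_correct, system_accepted, diagnosis_ok)
    tuples, one per phone instance."""
    ta = fr = fa = tr = diag_ok = 0
    for annotated_correct, accepted, diagnosis_ok in records:
        if annotated_correct and accepted:
            ta += 1                       # true acceptance
        elif annotated_correct and not accepted:
            fr += 1                       # false rejection
        elif not annotated_correct and accepted:
            fa += 1                       # false acceptance
        else:
            tr += 1                       # true rejection (error detected)
            diag_ok += int(diagnosis_ok)  # error type also identified
    detection_acc = (ta + tr) / len(records)
    false_rejection_rate = fr / (ta + fr)  # share of correct phones flagged
    diagnostic_acc = diag_ok / tr if tr else 0.0
    return detection_acc, false_rejection_rate, diagnostic_acc
```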

    SpeechBlender: Speech Augmentation Framework for Mispronunciation Data Generation

    Full text link
    One of the biggest challenges in designing mispronunciation detection models is the unavailability of labeled L2 speech data. To overcome such data scarcity, we introduce SpeechBlender -- a fine-grained data augmentation pipeline for generating mispronunciation errors. SpeechBlender utilizes a variety of masks to target different regions of a phonetic unit, and uses mixing factors to linearly interpolate raw speech signals while generating erroneous pronunciation instances. The masks facilitate smooth blending of the signals, thus generating more effective samples than the `Cut/Paste' method. We show the effectiveness of our augmentation technique in a phoneme-level pronunciation quality assessment task, leveraging only a good-pronunciation dataset. With SpeechBlender augmentation, we observed 3% and 2% increases in Pearson correlation coefficient (PCC) compared to the no-augmentation and goodness-of-pronunciation augmentation scenarios, respectively, on the Speechocean762 test set. Moreover, a 2% rise in PCC is observed when comparing our single-task phoneme-level mispronunciation detection model with a multi-task learning model using multiple-granularity information. Comment: 5 pages, submitted to ICASSP 202
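    The core blending operation can be sketched in a few lines of NumPy; the mask shape, region selection, and mixing schedule below are simplified assumptions, as the actual SpeechBlender defines several mask families.

```python
import numpy as np

def blend(good, donor, center, width, alpha=0.5):
    """Interpolate `donor` speech into `good` speech inside a smooth mask.

    good, donor: time-aligned raw signals for the same phonetic unit;
    alpha: mixing factor controlling how strongly the donor leaks in.
    """
    assert good.shape == donor.shape
    mask = np.zeros(len(good))
    lo = max(0, center - width // 2)
    hi = min(len(good), center + width // 2)
    # A Hann-shaped mask targets one region of the phonetic unit and tapers
    # to zero at its edges, avoiding the clicks of a hard Cut/Paste splice.
    mask[lo:hi] = np.hanning(hi - lo)
    mix = alpha * mask
    return (1.0 - mix) * good + mix * donor
```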

    A comparison-based approach to mispronunciation detection

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 89-92). This thesis focuses on the problem of detecting word-level mispronunciations in nonnative speech. Conventional automatic speech recognition-based mispronunciation detection systems have the disadvantage of requiring a large amount of language-specific, annotated training data. Some systems even require a speech recognizer in the target language and another one in the students' native language. To reduce human labeling effort and to generalize across languages, we propose a comparison-based framework which only requires word-level timing information from the native training data. With the assumption that the student is trying to enunciate the given script, dynamic time warping (DTW) is carried out between a student's utterance (nonnative speech) and a teacher's utterance (native speech), and we focus on detecting mis-alignment in the warping path and the distance matrix. The first stage of the system locates word boundaries in the nonnative utterance. To handle the problem that nonnative speech often contains intra-word pauses, we run DTW with a silence model which can align the two utterances while detecting and removing silences at the same time. In order to segment each word into smaller, acoustically similar units for a finer-grained analysis, we develop a phoneme-like unit segmentor which works by segmenting the self-similarity matrix into low-distance regions along the diagonal. Both phone-level and word-level features that describe the degree of mis-alignment between the two utterances are extracted, and the problem is formulated as a classification task. SVM classifiers are trained, and three voting schemes are considered for the cases where there is more than one matching reference utterance. The system is evaluated on the Chinese University Chinese Learners of English (CUCHLOE) corpus, with the TIMIT corpus used as the native corpus. Experimental results have shown 1) the effectiveness of the silence model in guiding DTW to capture the word boundaries in nonnative speech more accurately, 2) the complementary performance of the word-level and phone-level features, and 3) the stable performance of the system with or without phonetic unit labeling. by Ann Lee. S.M.
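    The DTW alignment at the heart of this framework can be sketched as follows: a textbook DTW over MFCC frames under assumed defaults (13 coefficients, Euclidean frame distance), without the thesis's silence model or word-boundary stage.

```python
import numpy as np
import librosa

def dtw_align(native_wav, learner_wav, sr=16000):
    """Return the frame-pair distance matrix and accumulated DTW costs."""
    a = librosa.feature.mfcc(y=native_wav, sr=sr, n_mfcc=13).T   # (Ta, 13)
    b = librosa.feature.mfcc(y=learner_wav, sr=sr, n_mfcc=13).T  # (Tb, 13)
    dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    acc = np.full((len(a) + 1, len(b) + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            # Standard step pattern: match, insertion, or deletion.
            acc[i, j] = dist[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Mis-alignment features are then read off the optimal path through
    # `acc` and off-diagonal structure in `dist`.
    return dist, acc[1:, 1:]
```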

    Pronunciation Variation Analysis and CycleGAN-Based Feedback Generation for CAPT

    Get PDF
    Thesis (Ph.D.)--Seoul National University, Interdisciplinary Program in Cognitive Science, College of Humanities, February 2020. Advisor: Minhwa Chung. Despite the growing popularity of learning Korean as a foreign language and the rapid development of language learning applications, existing computer-assisted pronunciation training (CAPT) systems for Korean do not utilize the linguistic characteristics of non-native Korean speech. Pronunciation variations in non-native speech are far more diverse than those observed in native speech, which may pose a difficulty in combining such knowledge in an automatic system. Moreover, most of the existing methods rely on feature extraction results from signal processing, prosodic analysis, and natural language processing techniques. Such methods entail limitations since they necessarily depend on finding the right features for the task and on the extraction accuracies. This thesis presents a new approach for corrective feedback generation in a CAPT system, in which pronunciation variation patterns and linguistic correlates with accentedness are analyzed and combined with a deep neural network approach, so that feature engineering efforts are minimized while the linguistically important factors for the corrective feedback generation task are maintained. Investigations of non-native Korean speech characteristics in contrast with those of native speakers, and of their correlation with accentedness judgements, show that both segmental and prosodic variations are important factors in a Korean CAPT system; in particular, coda deletion, confusion of the three-way consonant contrast, and suprasegmental errors were found to deserve priority in feedback generation. The thesis argues that the feedback generation task can be interpreted as a style transfer problem, and proposes to evaluate the idea using a generative adversarial network. A corrective feedback generation model is trained on 65,100 read utterances by 217 non-native speakers from 27 mother-tongue backgrounds. The features are learnt automatically, in an unsupervised way, in an auxiliary classifier CycleGAN setting, in which the generator learns to map foreign-accented speech to the native speech distribution, while the cycle-consistency loss preserves the overall structure of the utterance and prevents over-correction. In order to inject linguistic knowledge into the network, an auxiliary classifier is trained so that the feedback also identifies the linguistic error types that were defined in the first half of the thesis. The proposed approach generates a corrected version of the speech in the learner's own voice, outperforming the conventional Pitch-Synchronous Overlap-and-Add method with a relative improvement of 16.67% in perceptual evaluation. Contents: Chapter 1. Introduction; Chapter 2. Pronunciation Analysis of Korean Produced by Chinese; Chapter 3. Correlation Analysis of Pronunciation Variations and Human Evaluation; Chapter 4. Corrective Feedback Generation for CAPT; Chapter 5. Integration of Linguistic Knowledge in an Auxiliary Classifier CycleGAN for Feedback Generation; Chapter 6. Conclusion.
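    As a sketch of the auxiliary-classifier CycleGAN objective described above, the fragment below combines the adversarial, cycle-consistency, and error-type classification terms for one mapping direction. Generator and discriminator internals, the error-type inventory, and the loss weights are placeholder assumptions.

```python
import torch
import torch.nn as nn

def ac_cyclegan_losses(G_l2n, G_n2l, D_native, learner_spec, error_type,
                       lam_cyc=10.0, lam_cls=1.0):
    """G_l2n maps learner speech toward the native distribution; D_native
    returns (real/fake score, error-type logits) for the auxiliary task."""
    fake_native = G_l2n(learner_spec)
    adv_score, cls_logits = D_native(fake_native)
    # Adversarial term: pull the generated feedback toward native speech.
    loss_adv = nn.functional.binary_cross_entropy_with_logits(
        adv_score, torch.ones_like(adv_score))
    # Cycle-consistency: keep the utterance's overall structure (and the
    # learner's voice) while preventing over-correction.
    loss_cyc = nn.functional.l1_loss(G_n2l(fake_native), learner_spec)
    # Auxiliary classifier: the feedback must still expose which linguistic
    # error type it corrects, injecting the thesis's error taxonomy.
    loss_cls = nn.functional.cross_entropy(cls_logits, error_type)
    return loss_adv + lam_cyc * loss_cyc + lam_cls * loss_cls
```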

    Microsoft Reading Progress as CAPT Tool

    Get PDF
    The paper explores the accuracy of feedback provided to non-native learners of English by a pronunciation module included in Microsoft Reading Progress. We compared the pronunciation assessment offered by Reading Progress against that of two university pronunciation teachers. Recordings from students of English who aim for native-like pronunciation were assessed independently by Reading Progress and the human raters. The output was standardized as negative binary feedback assigned to orthographic words, which matches the Microsoft format. Our results indicate that Reading Progress is not yet ready to be used as a CAPT tool. Inter-rater reliability analysis showed a moderate level of agreement across all raters, and a good level of agreement upon eliminating feedback from Reading Progress. Meanwhile, the qualitative analysis revealed certain problems, notably false positives, i.e., words pronounced within the boundaries of academic pronunciation standards but still marked as incorrect by the digital rater. We recommend that EFL teachers and researchers approach the current version of Reading Progress with caution, especially as regards automated feedback. However, its design may still be useful for manual feedback. Given Microsoft's declarations that Reading Progress will be developed to include more accents, it has the potential to evolve into a fully functional CAPT tool for EFL pedagogy and research.
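    With feedback standardized as per-word binary flags, pairwise reliability can be checked with Cohen's kappa, for example; the vectors below are toy data, and the paper may well have used a different agreement statistic for its three raters.

```python
from sklearn.metrics import cohen_kappa_score

# 1 = word flagged as mispronounced, one entry per orthographic word.
reading_progress = [0, 1, 0, 0, 1, 1, 0, 0]
teacher_a        = [0, 1, 0, 0, 0, 1, 0, 0]

kappa = cohen_kappa_score(reading_progress, teacher_a)
print(f"kappa = {kappa:.2f}")  # 0.71 here; 0.41-0.60 reads as 'moderate'
```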

    Automatic Screening of Childhood Speech Sound Disorders and Detection of Associated Pronunciation Errors

    Full text link
    Speech disorders in children can affect their fluency and intelligibility. Delay in their diagnosis and treatment increases the risk of social impairment and learning disabilities. With the significant shortage of Speech and Language Pathologists (SLPs), there is an increasing interest in Computer-Aided Speech Therapy tools with automatic detection and diagnosis capability. However, the scarcity and unreliable annotation of disordered child speech corpora, along with the high acoustic variation in child speech data, have impeded the development of reliable automatic detection and diagnosis of childhood speech sound disorders. Therefore, this thesis investigates two types of detection systems that can be achieved with minimal dependency on annotated mispronounced speech data. First, a novel approach that adopts paralinguistic features representing the prosodic, spectral, and voice quality characteristics of the speech was proposed to perform segment- and subject-level classification of Typically Developing (TD) and Speech Sound Disordered (SSD) child speech using a binary Support Vector Machine (SVM) classifier. As paralinguistic features are both language- and content-independent, they can be extracted from an unannotated speech signal. Second, a novel Mispronunciation Detection and Diagnosis (MDD) approach was introduced to detect the pronunciation errors made due to SSDs and provide low-level diagnostic information that can be used in constructing formative feedback and a detailed diagnostic report. Unlike existing MDD methods where detection and diagnosis are performed at the phoneme level, the proposed method achieves MDD at the speech attribute level, namely the manner and place of articulation. The speech attribute features describe the involved articulators and their interactions when making a speech sound, allowing a low-level description of the pronunciation error to be provided. Two novel methods to model speech attributes are further proposed in this thesis: a frame-based (phoneme-alignment) method leveraging the Multi-Task Learning (MTL) criterion and training a separate model for each attribute, and an alignment-free jointly-learnt method based on the Connectionist Temporal Classification (CTC) sequence-to-sequence criterion. The proposed techniques have been evaluated using standard and publicly accessible adult and child speech corpora, while the MDD method has been validated using L2 speech corpora.
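    The alignment-free variant can be sketched as a small CTC-trained attribute recognizer; the BiLSTM backbone, feature and attribute dimensions, and the toy batch below are assumptions for illustration, not the thesis's models.

```python
import torch
import torch.nn as nn

class AttributeCTC(nn.Module):
    """Maps acoustic frames to speech-attribute sequences without alignment."""
    def __init__(self, n_feats=80, n_attrs=14):  # e.g. manner/place classes
        super().__init__()
        self.rnn = nn.LSTM(n_feats, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(512, n_attrs + 1)    # +1 for the CTC blank

    def forward(self, x):                          # x: (batch, time, n_feats)
        h, _ = self.rnn(x)
        return self.out(h).log_softmax(-1)

model = AttributeCTC()
ctc = nn.CTCLoss(blank=14)                         # blank index = n_attrs
x = torch.randn(2, 100, 80)                        # toy filterbank batch
targets = torch.randint(0, 14, (2, 20))            # attribute label sequences
log_probs = model(x).transpose(0, 1)               # CTC wants (time, batch, C)
loss = ctc(log_probs, targets,
           torch.full((2,), 100, dtype=torch.long),
           torch.full((2,), 20, dtype=torch.long))
```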

    Apraxia World: Deploying a Mobile Game and Automatic Speech Recognition for Independent Child Speech Therapy

    Get PDF
    Children with speech sound disorders typically improve pronunciation quality by undergoing speech therapy, which must be delivered frequently and with high intensity to be effective. As such, clinic sessions are supplemented with home practice, often under caregiver supervision. However, traditional home practice can grow boring for children due to monotony. Furthermore, practice frequency is limited by caregiver availability, making it difficult for some children to reach therapy dosage. To address these issues, this dissertation presents a novel speech therapy game to increase engagement, and explores automatic pronunciation evaluation techniques to afford children independent practice. The therapy game, called Apraxia World, delivers customizable, repetition-based speech therapy while children play through platformer-style levels using typical on-screen tablet controls; children complete in-game speech exercises to collect assets required to progress through the levels. Additionally, Apraxia World provides pronunciation feedback according to an automated pronunciation evaluation system running locally on the tablet. Apraxia World offers two advantages over current commercial and research speech therapy games: first, the game provides extended gameplay to support long therapy treatments; second, it affords some practice independence via automatic pronunciation evaluation, allowing caregivers to lightly supervise instead of directly administering the practice. Pilot testing indicated that children enjoyed the game-based therapy much more than traditional practice and that the exercises did not interfere with gameplay. During a longitudinal study, children made clinically significant pronunciation improvements while playing Apraxia World at home. Furthermore, children remained engaged in the game-based therapy over the two-month testing period, and some even wanted to continue playing post-study. The second part of the dissertation explores word- and phoneme-level pronunciation verification for child speech therapy applications. Word-level pronunciation verification is accomplished using a child-specific template-matching framework, where an utterance is compared against correctly and incorrectly pronounced examples of the word. This framework identified mispronounced words better than both a standard automated baseline and co-located caregivers. Phoneme-level mispronunciation detection is investigated using a technique from the second-language learning literature: training phoneme-specific classifiers with phonetic posterior features. This method also outperformed the standard baseline and, more significantly, identified mispronunciations better than student clinicians.
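    The template-matching decision rule can be sketched as below, assuming per-utterance feature matrices and a handful of stored templates per word; the feature choice and normalization are illustrative, not the dissertation's exact setup.

```python
import librosa

def dtw_cost(a, b):
    """Length-normalized DTW cost between two (time, dim) feature matrices."""
    D, wp = librosa.sequence.dtw(X=a.T, Y=b.T, metric='euclidean')
    return D[-1, -1] / len(wp)

def verify_word(attempt, correct_templates, incorrect_templates):
    """Accept the attempt if it lies closer to the child's correctly
    pronounced examples of the word than to the mispronounced ones."""
    d_ok = min(dtw_cost(attempt, t) for t in correct_templates)
    d_bad = min(dtw_cost(attempt, t) for t in incorrect_templates)
    return d_ok <= d_bad
```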

    Towards Automatic Speech-Language Assessment for Aphasia Rehabilitation

    Full text link
    Speech-based technology has the potential to reinforce traditional aphasia therapy through the development of automatic speech-language assessment systems. Such systems can provide clinicians with supplementary information to assist with progress monitoring and treatment planning, and can provide support for on-demand auxiliary treatment. However, current technology cannot support this type of application due to the difficulties associated with aphasic speech processing. The focus of this dissertation is on the development of computational methods that can accurately assess aphasic speech across a range of clinically-relevant dimensions. The first part of the dissertation focuses on novel techniques for assessing aphasic speech intelligibility in constrained contexts. The second part investigates acoustic modeling methods that lead to significant improvement in aphasic speech recognition and allow the system to work with unconstrained speech samples. The final part demonstrates the efficacy of speech recognition-based analysis in automatic paraphasia detection, extraction of clinically-motivated quantitative measures, and estimation of aphasia severity. The methods and results presented in this work will enable robust technologies for accurately recognizing and assessing aphasic speech, and will provide insights into the link between computational methods and clinical understanding of aphasia. Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/140840/1/ducle_1.pd