163 research outputs found

    MISPRONUNCIATION DETECTION AND DIAGNOSIS IN MANDARIN ACCENTED ENGLISH SPEECH

    Get PDF
    This work presents the development, implementation, and evaluation of a Mispronunciation Detection and Diagnosis (MDD) system, applied to pronunciation evaluation of Mandarin-accented English speech. A comprehensive detection and diagnosis of errors in the Electromagnetic Articulography corpus of Mandarin-Accented English (EMA-MAE) was performed using expert phonetic transcripts and an Automatic Speech Recognition (ASR) system. Articulatory features derived from the parallel kinematic data in the EMA-MAE corpus were used to identify the most significant articulatory error patterns of L2 speakers during common mispronunciations. Using both acoustic and articulatory information, an ASR-based MDD system was built and evaluated across different feature combinations and Deep Neural Network (DNN) architectures. The system captured mispronunciation errors with a detection accuracy of 82.4%, a diagnostic accuracy of 75.8%, and a false rejection rate of 17.2%. The results demonstrate the advantage of articulatory features both in revealing the significant contributors to mispronunciation and in improving the performance of MDD systems.
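The detection accuracy, diagnostic accuracy, and false rejection rate quoted above are commonly computed in the MDD literature from a hierarchy of confusion counts (true acceptances, false rejections, false acceptances, and true rejections split by whether the diagnosis label was correct). A minimal sketch of those common definitions, with made-up counts; this is not necessarily the exact evaluation code or data of this work:

```python
# Hypothetical illustration of how MDD metrics are commonly computed from
# confusion counts (made-up numbers, not this work's results).
def mdd_metrics(ta, fr, fa, tr_correct_diag, tr_wrong_diag):
    """ta: correct phones accepted; fr: correct phones rejected;
    fa: mispronounced phones accepted; tr_*: mispronounced phones
    rejected with a correct/incorrect diagnosis label."""
    tr = tr_correct_diag + tr_wrong_diag
    detection_accuracy = (ta + tr) / (ta + fr + fa + tr)
    diagnostic_accuracy = tr_correct_diag / tr if tr else 0.0
    false_rejection_rate = fr / (ta + fr) if (ta + fr) else 0.0
    return detection_accuracy, diagnostic_accuracy, false_rejection_rate

det, diag, frr = mdd_metrics(ta=700, fr=100, fa=60,
                             tr_correct_diag=105, tr_wrong_diag=35)
print(round(det, 3), round(diag, 3), round(frr, 3))  # 0.84 0.75 0.125
```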

    Pronunciation Variation Analysis and CycleGAN-Based Feedback Generation for CAPT

    Get PDF
    Thesis (Ph. D.)--Seoul National University Graduate School: Interdisciplinary Program in Cognitive Science, College of Humanities, 2020. 2. Minhwa Chung. Despite the growing popularity of learning Korean as a foreign language and the rapid development in language learning applications, the existing computer-assisted pronunciation training (CAPT) systems in Korean do not utilize linguistic characteristics of non-native Korean speech. Pronunciation variations in non-native speech are far more diverse than those observed in native speech, which may pose a difficulty in combining such knowledge in an automatic system. Moreover, most of the existing methods rely on feature extraction results from signal processing, prosodic analysis, and natural language processing techniques. Such methods entail limitations since they necessarily depend on finding the right features for the task and on the extraction accuracies. This thesis presents a new approach for corrective feedback generation in a CAPT system, in which pronunciation variation patterns and linguistic correlates with accentedness are analyzed and combined with a deep neural network approach, so that feature engineering efforts are minimized while maintaining the linguistically important factors for the corrective feedback generation task. Investigations on non-native Korean speech characteristics in contrast with those of native speakers, and their correlation with accentedness judgement, show that both segmental and prosodic variations are important factors in a Korean CAPT system. The present thesis argues that the feedback generation task can be interpreted as a style transfer problem, and proposes to evaluate the idea using a generative adversarial network. A corrective feedback generation model is trained on 65,100 read utterances by 217 non-native speakers of 27 mother tongue backgrounds.
The features are learnt automatically, in an unsupervised way, in an auxiliary classifier CycleGAN setting in which the generator learns to map foreign-accented speech to native speech distributions. In order to inject linguistic knowledge into the network, an auxiliary classifier is trained so that the feedback also identifies the linguistic error types defined in the first half of the thesis; the analysis found that coda deletion, confusion of the three-way consonant contrast, and suprasegmental errors should be given priority in feedback generation. The proposed approach generates a corrected version of the speech in the learner's own voice, outperforming the conventional Pitch-Synchronous Overlap-and-Add method with a relative improvement of 16.67% in perceptual evaluation.
Table of contents:
Chapter 1. Introduction
  1.1. Motivation
    1.1.1. An Overview of CAPT Systems
    1.1.2. Survey of Existing Korean CAPT Systems
  1.2. Problem Statement
  1.3. Thesis Structure
Chapter 2. Pronunciation Analysis of Korean Produced by Chinese
  2.1. Comparison between Korean and Chinese
    2.1.1. Phonetic and Syllable Structure Comparisons
    2.1.2. Phonological Comparisons
  2.2. Related Works
  2.3. Proposed Analysis Method
    2.3.1. Corpus
    2.3.2. Transcribers and Agreement Rates
  2.4. Salient Pronunciation Variations
    2.4.1. Segmental Variation Patterns
      2.4.1.1. Discussions
    2.4.2. Phonological Variation Patterns
      2.4.2.1. Discussions
  2.5. Summary
Chapter 3. Correlation Analysis of Pronunciation Variations and Human Evaluation
  3.1. Related Works
    3.1.1. Criteria Used in L2 Speech
    3.1.2. Criteria Used in L2 Korean Speech
  3.2. Proposed Human Evaluation Method
    3.2.1. Reading Prompt Design
    3.2.2. Evaluation Criteria Design
    3.2.3. Raters and Agreement Rates
  3.3. Linguistic Factors Affecting L2 Korean Accentedness
    3.3.1. Pearson's Correlation Analysis
    3.3.2. Discussions
    3.3.3. Implications for Automatic Feedback Generation
  3.4. Summary
Chapter 4. Corrective Feedback Generation for CAPT
  4.1. Related Works
    4.1.1. Prosody Transplantation
    4.1.2. Recent Speech Conversion Methods
    4.1.3. Evaluation of Corrective Feedback
  4.2. Proposed Method: Corrective Feedback as a Style Transfer
    4.2.1. Speech Analysis at Spectral Domain
    4.2.2. Self-imitative Learning
    4.2.3. An Analogy: CAPT System and GAN Architecture
  4.3. Generative Adversarial Networks
    4.3.1. Conditional GAN
    4.3.2. CycleGAN
  4.4. Experiment
    4.4.1. Corpus
    4.4.2. Baseline Implementation
    4.4.3. Adversarial Training Implementation
    4.4.4. Spectrogram-to-Spectrogram Training
  4.5. Results and Evaluation
    4.5.1. Spectrogram Generation Results
    4.5.2. Perceptual Evaluation
    4.5.3. Discussions
  4.6. Summary
Chapter 5. Integration of Linguistic Knowledge in an Auxiliary Classifier CycleGAN for Feedback Generation
  5.1. Linguistic Class Selection
  5.2. Auxiliary Classifier CycleGAN Design
  5.3. Experiment and Results
    5.3.1. Corpus
    5.3.2. Feature Annotations
    5.3.3. Experiment Setup
    5.3.4. Results
  5.4. Summary
Chapter 6. Conclusion
  6.1. Thesis Results
  6.2. Thesis Contributions
  6.3. Recommendations for Future Work
Bibliography
Appendix
Abstract in Korean
Acknowledgments
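The training objective described above combines three terms: an adversarial loss pushing generated speech toward the native distribution, a cycle-consistency loss preserving utterance structure and preventing over-correction, and an auxiliary classification loss so the feedback also carries the error type. A toy numpy sketch of these three terms; the generators, discriminator, classifier, and loss weighting below are hypothetical linear stand-ins, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W_g = np.eye(4) * 0.9          # generator G: accented -> native (stand-in)
W_f = np.eye(4) / 0.9          # generator F: native -> accented (G's inverse here)

def G(x): return x @ W_g
def F(y): return y @ W_f

x = rng.normal(size=(8, 4))    # batch of accented "spectral frames"

# Cycle-consistency loss keeps the overall utterance structure: F(G(x)) ~ x.
cycle_loss = np.mean(np.abs(F(G(x)) - x))

# Adversarial loss (least-squares form): discriminator D should score G(x) as real (1).
def D(y): return 1.0 / (1.0 + np.exp(-y.sum(axis=1)))
adv_loss = np.mean((D(G(x)) - 1.0) ** 2)

# Auxiliary classifier loss: the generated feedback must also identify the
# linguistic error type (e.g. coda deletion vs. three-way contrast confusion).
def classify(y):               # fake class posteriors over 3 error types
    logits = y @ rng.normal(size=(4, 3))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

labels = rng.integers(0, 3, size=8)
cls_loss = -np.mean(np.log(classify(G(x))[np.arange(8), labels]))

total = adv_loss + 10.0 * cycle_loss + cls_loss   # weighting is illustrative
print(cycle_loss < 1e-9, adv_loss >= 0, cls_loss > 0)
```

Because the stand-in generators are exact inverses, the cycle loss here collapses to floating-point noise; in real training it is what keeps the corrected utterance recognizably the learner's own.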

    Automatic Screening of Childhood Speech Sound Disorders and Detection of Associated Pronunciation Errors

    Full text link
    Speech disorders in children can affect their fluency and intelligibility. Delay in their diagnosis and treatment increases the risk of social impairment and learning disabilities. With the significant shortage of Speech and Language Pathologists (SLPs), there is an increasing interest in Computer-Aided Speech Therapy tools with automatic detection and diagnosis capability. However, the scarcity and unreliable annotation of disordered child speech corpora, along with the high acoustic variations in child speech data, have impeded the development of reliable automatic detection and diagnosis of childhood speech sound disorders. Therefore, this thesis investigates two types of detection systems that can be achieved with minimum dependency on annotated mispronounced speech data. First, a novel approach that adopts paralinguistic features representing the prosodic, spectral, and voice quality characteristics of the speech was proposed to perform segment- and subject-level classification of Typically Developing (TD) and Speech Sound Disordered (SSD) child speech using a binary Support Vector Machine (SVM) classifier. As paralinguistic features are both language- and content-independent, they can be extracted from an unannotated speech signal. Second, a novel Mispronunciation Detection and Diagnosis (MDD) approach was introduced to detect the pronunciation errors made due to SSDs and provide low-level diagnostic information that can be used in constructing formative feedback and a detailed diagnostic report. Unlike existing MDD methods, where detection and diagnosis are performed at the phoneme level, the proposed method achieved MDD at the speech attribute level, namely the manner and place of articulation. The speech attribute features describe the involved articulators and their interactions when making a speech sound, allowing a low-level description of the pronunciation error to be provided.
Two novel methods to model speech attributes are further proposed in this thesis: a frame-based (phoneme-alignment) method leveraging the Multi-Task Learning (MTL) criterion and training a separate model for each attribute, and an alignment-free, jointly-learnt method based on the Connectionist Temporal Classification (CTC) sequence-to-sequence criterion. The proposed techniques have been evaluated using standard and publicly accessible adult and child speech corpora, while the MDD method has been validated using L2 speech corpora.
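Working at the speech-attribute level means each phoneme is decomposed into its manner and place of articulation, so a substitution error can be described by which articulatory dimension changed. A minimal sketch of that decomposition; the lookup table is a tiny hypothetical excerpt, not a full attribute inventory:

```python
# Each phoneme maps to a (manner, place) pair, so a phoneme-level error can be
# described at a lower level: /t/ -> /k/ keeps the manner but moves the place.
# Tiny illustrative table only; a real system covers the full inventory.
ATTRS = {
    "p": ("stop", "bilabial"),      "b": ("stop", "bilabial"),
    "t": ("stop", "alveolar"),      "d": ("stop", "alveolar"),
    "k": ("stop", "velar"),         "g": ("stop", "velar"),
    "s": ("fricative", "alveolar"), "f": ("fricative", "labiodental"),
    "m": ("nasal", "bilabial"),     "n": ("nasal", "alveolar"),
}

def attribute_diagnosis(expected, produced):
    """Describe a substitution in terms of manner/place changes."""
    em, ep = ATTRS[expected]
    pm, pp = ATTRS[produced]
    return {"manner": (em, pm, em != pm),
            "place": (ep, pp, ep != pp)}

d = attribute_diagnosis("t", "k")   # /t/ substituted by /k/
print(d["manner"][2], d["place"][2])  # False True: only the place changed
```

This is what enables the low-level diagnostic feedback the abstract describes: the report can say "correct manner, place moved from alveolar to velar" instead of just flagging the phoneme.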

    Apraxia World: Deploying a Mobile Game and Automatic Speech Recognition for Independent Child Speech Therapy

    Get PDF
    Children with speech sound disorders typically improve pronunciation quality by undergoing speech therapy, which must be delivered frequently and with high intensity to be effective. As such, clinic sessions are supplemented with home practice, often under caregiver supervision. However, traditional home practice can grow boring for children due to monotony. Furthermore, practice frequency is limited by caregiver availability, making it difficult for some children to reach therapy dosage. To address these issues, this dissertation presents a novel speech therapy game to increase engagement, and explores automatic pronunciation evaluation techniques to afford children independent practice. The therapy game, called Apraxia World, delivers customizable, repetition-based speech therapy while children play through platformer-style levels using typical on-screen tablet controls; children complete in-game speech exercises to collect assets required to progress through the levels. Additionally, Apraxia World provides pronunciation feedback according to an automated pronunciation evaluation system running locally on the tablet.
Apraxia World offers two advantages over current commercial and research speech therapy games: first, the game provides extended gameplay to support long therapy treatments; second, it affords some therapy practice independence via automatic pronunciation evaluation, allowing caregivers to lightly supervise instead of directly administer the practice. Pilot testing indicated that children enjoyed the game-based therapy much more than traditional practice and that the exercises did not interfere with gameplay. During a longitudinal study, children made clinically significant pronunciation improvements while playing Apraxia World at home. Furthermore, children remained engaged in the game-based therapy over the two-month testing period, and some even wanted to continue playing post-study. The second part of the dissertation explores word- and phoneme-level pronunciation verification for child speech therapy applications. Word-level pronunciation verification is accomplished using a child-specific template-matching framework, where an utterance is compared against correctly and incorrectly pronounced examples of the word. This framework identified mispronounced words better than both a standard automated baseline and co-located caregivers. Phoneme-level mispronunciation detection is investigated using a technique from the second-language learning literature: training phoneme-specific classifiers with phonetic posterior features. This method also outperformed the standard baseline and, more significantly, identified mispronunciations better than student clinicians.
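The word-level template-matching idea above can be sketched as a nearest-template decision: align the utterance against correctly and incorrectly pronounced examples of the word (dynamic time warping is the usual alignment tool for such frameworks) and label it by whichever set it is closer to. A toy sketch with synthetic 1-D feature sequences standing in for per-frame acoustic features; the dissertation's actual features and distance are not specified here:

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def verify(utterance, correct_templates, incorrect_templates):
    d_ok = min(dtw(utterance, t) for t in correct_templates)
    d_bad = min(dtw(utterance, t) for t in incorrect_templates)
    return "correct" if d_ok <= d_bad else "mispronounced"

correct = [[0, 1, 2, 1, 0], [0, 1, 2, 2, 1, 0]]     # toy "good" examples
incorrect = [[0, 3, 3, 0], [0, 0, 3, 3, 3, 0]]      # toy "bad" examples
ok = verify([0, 1, 1, 2, 1, 0], correct, incorrect)
bad = verify([0, 3, 3, 3, 0], correct, incorrect)
print(ok, bad)  # correct mispronounced
```

Using child-specific templates sidesteps the mismatch between adult-trained acoustic models and highly variable child speech, which is why the framework can beat a standard automated baseline.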

    Recent development of the HMM-based speech synthesis system (HTS)

    Get PDF
    A statistical parametric approach to speech synthesis based on hidden Markov models (HMMs) has grown in popularity over the last few years. In this approach, the spectrum, excitation, and duration of speech are simultaneously modeled by context-dependent HMMs, and speech waveforms are generated from the HMMs themselves. Since December 2002, we have publicly released an open-source software toolkit named "HMM-based speech synthesis system (HTS)" to provide a research and development toolkit for statistical parametric speech synthesis. This paper describes recent developments of HTS in detail, as well as future release plans.
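In the simplest (no-dynamics) reading of the pipeline above, each context-dependent HMM state contributes a mean parameter vector and an explicit duration, and the parameter trajectory is assembled state by state. A deliberately simplified numpy sketch with made-up numbers; real HTS additionally models delta features and uses maximum-likelihood parameter generation to produce smooth trajectories:

```python
import numpy as np

# Each state: a mean "spectral" vector plus a duration from the duration model.
state_means = np.array([[1.0, 0.5],
                        [2.0, 0.2],
                        [0.5, 0.9]])       # hypothetical per-state means
state_durations = [3, 2, 4]                # frames per state (made up)

# Naive trajectory: repeat each state mean for its duration, one row per frame.
trajectory = np.vstack([
    np.tile(mean, (dur, 1))
    for mean, dur in zip(state_means, state_durations)
])
print(trajectory.shape)  # (9, 2): 3+2+4 frames, 2 parameters each
```

The resulting piecewise-constant trajectory is exactly what ML parameter generation with dynamic features is designed to smooth out; the sketch only shows where the frame-level parameters come from.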

    Directions for the future of technology in pronunciation research and teaching

    Get PDF
    This paper reports on the role of technology in state-of-the-art pronunciation research and instruction, and makes concrete suggestions for future developments. The point of departure for this contribution is that the goal of second language (L2) pronunciation research and teaching should be enhanced comprehensibility and intelligibility, as opposed to native-likeness. Three main areas are covered here. We begin with a presentation of advanced uses of pronunciation technology in research, with a special focus on the expertise required to carry out even small-scale investigations. Next, we discuss the nature of data in pronunciation research, pointing to ways in which future work can build on advances in corpus research and crowdsourcing. Finally, we consider how these insights pave the way for researchers and developers working to create research-informed, computer-assisted pronunciation teaching resources. We conclude with predictions for future developments.

    Methods for pronunciation assessment in computer aided language learning

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 149-176). Learning a foreign language is a challenging endeavor that entails acquiring a wide range of new knowledge, including words, grammar, gestures, and sounds. Mastering these skills requires extensive practice by the learner, and opportunities may not always be available. Computer Aided Language Learning (CALL) systems provide non-threatening environments where foreign language skills can be practiced wherever and whenever a student desires. These systems often incorporate several technologies to identify the different types of errors made by a student. This thesis focuses on the problem of identifying mispronunciations made by a foreign language student using a CALL system. We make several assumptions about the nature of the learning activity: it takes place using a dialogue system, it is a task- or game-oriented activity, the student should not be interrupted by the pronunciation feedback system, and the goal of the feedback system is to identify severe mispronunciations with high reliability. Detecting mispronunciations requires a corpus of speech with human judgements of pronunciation quality. Typical approaches to collecting such a corpus use an expert phonetician to both phonetically transcribe and assign judgements of quality to each phone in a corpus. This is time-consuming and expensive, and it places an extra burden on the transcriber. We describe a novel method for obtaining phone-level judgements of pronunciation quality by utilizing non-expert, crowd-sourced, word-level judgements of pronunciation. Foreign language learners typically exhibit high variation and pronunciation patterns distinct from those of native speakers, which makes mispronunciation analysis difficult.
We detail a simple but effective method for transforming the vowel space of non-native speakers to make mispronunciation detection more robust and accurate. We show that this transformation not only enhances performance on a simple classification task, but also results in distributions that can be better exploited for mispronunciation detection. This transformation of the vowel space is exploited to train a mispronunciation detector using a variety of features derived from acoustic model scores and vowel class distributions. We confirm that the transformation technique results in a more robust and accurate identification of mispronunciations than traditional acoustic models. By Mitchell A. Peabody. Ph.D.
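One simple instance of such a vowel-space transformation is a per-vowel mean shift: move each non-native vowel cloud so its centroid coincides with the native centroid for that vowel, so that remaining deviations are easier to detect. This is an illustrative stand-in, not necessarily the thesis's exact method, with toy two-dimensional "formant" data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Native vowel centroids (hypothetical F1/F2 values in Hz).
native_means = {"iy": np.array([300.0, 2300.0]),
                "aa": np.array([700.0, 1200.0])}

# Simulated L2 tokens: systematically shifted versions of the native vowels.
nonnative = {v: mu + np.array([80.0, -150.0]) + rng.normal(scale=20.0, size=(50, 2))
             for v, mu in native_means.items()}

def normalize(tokens, native_mu):
    """Per-vowel mean shift: align the L2 centroid with the native centroid."""
    return tokens - tokens.mean(axis=0) + native_mu

shift_before = np.linalg.norm(nonnative["iy"].mean(axis=0) - native_means["iy"])
normalized = normalize(nonnative["iy"], native_means["iy"])
shift_after = np.linalg.norm(normalized.mean(axis=0) - native_means["iy"])
print(shift_before > 100, shift_after < 1e-6)  # True True
```

After the shift, a token far from its native centroid reflects an individual pronunciation error rather than the speaker's overall accent offset, which is the property a mispronunciation detector wants.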

    An automated lexical stress classification tool for assessing dysprosody in childhood apraxia of speech

    Get PDF
    Childhood apraxia of speech (CAS) commonly affects the production of lexical stress contrast in polysyllabic words. Automated classification tools have the potential to increase reliability and efficiency in measuring lexical stress. Here, factors affecting the accuracy of a custom-built deep neural network (DNN)-based classification tool are evaluated. Sixteen children with typical development (TD) and 26 with CAS produced 50 polysyllabic words. Words with strong–weak (SW, e.g., dinosaur) or weak–strong (WS, e.g., banana) stress were fed to the classification tool, and the accuracy was measured (a) against expert judgment, (b) by speaker group, and (c) with/without prior knowledge of phonemic errors in the sample. The influence of segmental features and participant factors on tool accuracy was analysed. Linear mixed modelling showed a significant interaction between group and stress type, surviving adjustment for age and CAS severity. For TD children, agreement was >80% for both SW and WS words; for CAS speech, agreement was higher for SW (>80%) than for WS (~60%) words. Prior knowledge of segmental errors conferred no clear advantage. Automatic lexical stress classification shows promise for identifying errors in children's speech at diagnosis or with treatment-related change, but accuracy for WS words in apraxic speech needs improvement. Further training of algorithms using larger sets of labelled data containing impaired speech and WS words may increase accuracy.
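The SW/WS distinction rests on acoustic cues such as the relative duration and intensity of the first two syllables; the study's DNN learns such cues from data. A rule-based baseline sketch of the same decision, with illustrative thresholds and made-up measurements (not the study's model or data):

```python
# Rule-based baseline for lexical stress classification: compare duration and
# intensity of the first two syllables; a stressed first syllable (SW, as in
# "dinosaur") should dominate on both. Thresholds and numbers are illustrative.
def classify_stress(syllables):
    """syllables: list of (duration_ms, intensity_db) per syllable."""
    (d1, i1), (d2, i2) = syllables[0], syllables[1]
    # Combine the duration ratio with the amplitude ratio (dB -> linear).
    score = (d1 / d2) * (10 ** ((i1 - i2) / 20.0))
    return "SW" if score >= 1.0 else "WS"

dinosaur = [(220, 72), (120, 64), (180, 66)]   # made-up measurements
banana = [(110, 62), (230, 73), (150, 64)]
print(classify_stress(dinosaur), classify_stress(banana))  # SW WS
```

A learned classifier replaces the hand-set threshold and ratio combination with cues estimated from labelled data, which is exactly where the scarcity of labelled WS and impaired-speech examples noted above becomes the bottleneck.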