1,636 research outputs found

    Polyglot voice design for unit selection speech synthesis

    Current text-to-speech (TTS) systems are increasingly faced with mixed-language textual input. Most TTS systems allow building synthetic voices for different languages, but each voice can "speak" only one language at a time. To synthesize mixed-language input, polyglot voices are needed that can switch between languages when the textual input requires it. A polyglot voice typically has one basic language and, additionally, the ability to synthesize foreign words when these are encountered in the textual input. The design of polyglot voices for unit selection speech synthesis is still an open research question. An inherent problem of unit selection speech synthesis is that synthesis quality is closely tied to the contents of the unit database: concatenating units not in the database usually results in poor synthesis quality. At the same time, building a database with good unit coverage results in a prohibitively large database if the intended domain of synthesized text is unlimited. Polyglot databases have an additional problem: not only must single-language units be stored, but the concatenation points of words from foreign languages must also be accounted for. This increases the database size even further, so it is worth exploring whether the database can be kept smaller by including only single-language units and handling multilingual units at synthesis time. The present work is concerned with database design for a polyglot unit selection voice. Its main aim is to examine whether alternative methods for handling multilingual cross-word diphones yield the same or better synthesis quality as including these diphones in the database. Three alternative approaches are suggested, and model polyglot voices are built to test them. The languages included in the synthesizer are Bosnian, English and German. The output quality of the synthesized multilingual word boundary is tested on Bosnian-English and Bosnian-German word pairs in a perceptual experiment.
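The trade-off the abstract describes stems from the standard unit-selection objective: each candidate unit pays a target cost (how well it fits the specification) and a concatenation cost (how smoothly it joins its neighbour), minimised jointly over the utterance. A minimal sketch of that search, with the cost functions supplied by the caller as hypothetical placeholders:

```python
def select_units(targets, candidates, target_cost, concat_cost):
    """Pick one candidate unit per target diphone, minimising the usual
    unit-selection objective (sum of target costs plus join costs) by
    dynamic programming over the candidate lattice."""
    # best[i][j] = (cumulative cost, backpointer) for candidates[i][j]
    best = [[(target_cost(targets[0], c), None) for c in candidates[0]]]
    for i in range(1, len(targets)):
        row = []
        for c in candidates[i]:
            tc = target_cost(targets[i], c)
            prev_cost, prev_k = min(
                (best[i - 1][k][0] + concat_cost(p, c), k)
                for k, p in enumerate(candidates[i - 1])
            )
            row.append((prev_cost + tc, prev_k))
        best.append(row)
    # backtrace from the cheapest final candidate
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    total = best[-1][j][0]
    path = [j]
    for i in range(len(targets) - 1, 0, -1):
        j = best[i][j][1]
        path.append(j)
    path.reverse()
    return [candidates[i][k] for i, k in enumerate(path)], total
```

In a polyglot voice, handling a multilingual diphone at synthesis time amounts to letting `concat_cost` score a cross-language join instead of requiring that diphone to exist in the database.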

    Phonetics of segmental F0 and machine recognition of Korean speech


    Post-editing machine translated text in a commercial setting: Observation and statistical analysis

    Machine translation systems, when used in a commercial context for publishing purposes, are usually combined with human post-editing. Understanding human post-editing behaviour is therefore crucial to maximising the benefit of machine translation systems. Though a number of studies on human post-editing have been carried out to date, there is a lack of large-scale studies of post-editing in industrial contexts which focus on the activity in real-life settings. This study observes professional Japanese post-editors' work and examines the effect of the amount of editing made during post-editing, source text characteristics, and post-editing behaviour on the amount of post-editing effort. A mixed-methods approach was employed to analyse the data both quantitatively and qualitatively and to gain detailed insights into the post-editing activity from various viewpoints. The results indicate that a number of factors, such as sentence structure, document component types, use of product-specific terms, and post-editing patterns and behaviour, have an effect on the amount of post-editing effort in an intertwined manner. The findings will contribute to better utilisation of machine translation systems in industry as well as to the development of post-editors' skills and strategies.
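The "amount of editing" such studies quantify is commonly proxied by a token-level edit distance between the MT output and its post-edited version (the idea behind HTER); the thesis may use a different operationalisation, so the following is only a generic sketch:

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences (here, token lists)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (a[i - 1] != b[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return d[m][n]

def hter(mt, pe):
    """HTER-style effort score: token edits normalised by the length of
    the post-edited reference."""
    mt_tok, pe_tok = mt.split(), pe.split()
    return edit_distance(mt_tok, pe_tok) / max(len(pe_tok), 1)
```

A sentence left untouched scores 0; heavier rewriting pushes the score towards (and past) 1.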

    Robust Speech Recognition for Adverse Environments


    Pronunciation variation analysis and CycleGAN-based feedback generation for CAPT

    Doctoral dissertation, Seoul National University Graduate School, Interdisciplinary Program in Cognitive Science, College of Humanities, February 2020. Advisor: Minhwa Chung. Despite the growing popularity of learning Korean as a foreign language and the rapid development of language-learning applications, existing computer-assisted pronunciation training (CAPT) systems for Korean do not utilize the linguistic characteristics of non-native Korean speech. Pronunciation variations in non-native speech are far more diverse than those observed in native speech, which may pose a difficulty in incorporating such knowledge into an automatic system. Moreover, most existing methods rely on feature extraction results from signal processing, prosodic analysis, and natural language processing techniques. Such methods entail limitations, since they necessarily depend on finding the right features for the task and on the extraction accuracy. This thesis presents a new approach to corrective feedback generation in a CAPT system, in which pronunciation variation patterns and linguistic correlates of accentedness are analyzed and combined with a deep neural network approach, so that feature engineering effort is minimized while the linguistically important factors for the corrective feedback generation task are maintained. Investigations of non-native Korean speech characteristics, in contrast with those of native speakers, and their correlation with accentedness judgements show that both segmental and prosodic variations are important factors in a Korean CAPT system. The thesis argues that the feedback generation task can be interpreted as a style transfer problem, and proposes to evaluate the idea using a generative adversarial network. A corrective feedback generation model is trained on 65,100 read utterances by 217 non-native speakers from 27 mother-tongue backgrounds. The features are learnt automatically, in an unsupervised way, in an auxiliary classifier CycleGAN setting in which the generator learns to map foreign-accented speech to the native speech distribution.
    In order to inject linguistic knowledge into the network, an auxiliary classifier is trained so that the feedback also identifies the linguistic error types defined in the first half of the thesis, where correlation analysis identified coda deletion, confusion of the three-way consonant contrast, and suprasegmental errors as the variations to prioritize in feedback generation. The cycle-consistency loss preserves the overall structure of the utterance while preventing over-correction. The proposed approach generates a corrected version of the speech in the learner's own voice, outperforming the conventional Pitch-Synchronous Overlap-and-Add method with a relative improvement of 16.67% in perceptual evaluation.
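The cycle-consistency term central to this approach is what keeps the generated native-mapped spectrogram structurally close to the learner's input and prevents over-correction. A minimal sketch of that loss, with `G` (accented-to-native) and `F` (native-to-accented) standing in for the thesis's trained generators:

```python
import numpy as np

def cycle_consistency_loss(x, G, F, lam=10.0):
    """L1 cycle loss lam * |F(G(x)) - x|: mapping a spectrogram x to the
    native domain and back should reconstruct it, which discourages the
    generator from discarding the speaker's own characteristics."""
    return lam * np.mean(np.abs(F(G(x)) - x))
```

With perfectly inverse generators the loss is zero; any information `G` destroys that `F` cannot recover is penalised in proportion to `lam`.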

    The presence, nature and role of formulaic sequences in English advanced learners of French : a longitudinal study

    PhD thesis. The present study is a longitudinal investigation of the presence, nature, and role of formulaic sequences (FS) in advanced English learners of French. The learners investigated are in the second year of an undergraduate degree in French at the onset of the study, and are tested before and after a seven-month stay in France. FS are defined psycholinguistically as multiword units that present a processing advantage for a given speaker, either because they are stored whole in his or her mental lexicon (Wray 2002) or because they are highly automatised. The construct of FS is particularly relevant to key linguistic issues such as the dynamism of linguistic representations, their idiosyncratic nature, and the relationship between the lexicon and grammar. FS have been shown to be frequent in the oral production of native speakers. They also play an important role in first language acquisition as well as in the initial stages of instructed second language (L2) acquisition. However, very little is known about their presence and role in advanced L2 learners, as most studies to date have not adopted a psycholinguistic approach and have focused on L2 learners' knowledge and use of idioms and idiomatic expressions. Conversely, this study seeks to evaluate and characterise the presence of psycholinguistically defined FS in advanced learners, and to examine their longitudinal development in relation to the development of the learners' fluency and lexical diversity. It seeks to determine whether FS use plays a role in the development of fluency and lexical diversity and, if it does, to describe the underlying mechanisms that account for this role. Data from five learners performing five oral tasks (an interview, a story retell and three discussion tasks), repeated before and after their stay in France, was elicited and transcribed. FS were identified through the hierarchical application of a range of criteria aiming to capture the holistic nature of the sequences. The necessary criterion for identification was fluent pronunciation of the sequence; additional criteria were then applied, such as irregularity, holistic mapping of form to meaning, or holistic status of the sequence in the input. Fluency was operationalised through four measures (phonation-time ratio, speaking rate, mean length of runs and articulation rate), and lexical diversity was measured using D. The results show that psycholinguistically defined FS represent about 27% of the language of advanced learners of French. The typology of the identified sequences shows that they are mostly grammatically regular but that, despite the advanced level of the participants, some present non-nativelike characteristics. Individual differences in the learners' repertoires of FS, as well as task effects, are also found. Between time 1 and time 2, across the group of five subjects, there is a general and statistically significant increase in FS use, fluency and lexical diversity. Significant correlations are found between FS use, fluency and lexical diversity. The qualitative analysis suggests that FS use plays a role in increasing fluency by allowing longer speech runs, contributing to the reduction of pausing time and to the speeding up of the articulation rate. At the internal level of processing mechanisms, the results suggest that FS play a facilitating role not only in the formulation stage of speech production but also in the conceptualisation and articulation stages. Significant correlations are also found between FS use and lexical diversity, which suggests that FS, by lightening the processing burden and freeing some attentional resources, might facilitate the acquisition of new vocabulary. The analysis of the learners' development across all variables shows a single developmental path with similar processes of automatisation but different rates of acquisition, as the learners vary in how efficiently they proceduralise their language. Because of this, it is suggested that the year abroad is more likely to be beneficial for a given learner if their language has already reached a certain level of automatisation before the time abroad. Arts and Humanities Research Council.
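The four fluency measures named above can be computed directly from a pause-annotated transcript. A small sketch, assuming each speech sample is represented as alternating speech and pause runs with syllable counts (this representation is illustrative, not the thesis's own coding scheme):

```python
def fluency_measures(runs):
    """Compute standard fluency measures from one speech sample.

    runs: list of (kind, duration_s, n_syllables) tuples, where kind is
    'speech' or 'pause' (pauses carry 0 syllables). Assumes the sample
    contains at least one speech run."""
    speech = [(d, s) for kind, d, s in runs if kind == 'speech']
    total_time = sum(d for _, d, _ in runs)
    speech_time = sum(d for d, _ in speech)
    syllables = sum(s for _, s in speech)
    return {
        'phonation_time_ratio': speech_time / total_time,
        'speaking_rate': syllables / total_time,       # syll/s incl. pauses
        'articulation_rate': syllables / speech_time,  # syll/s of speech only
        'mean_length_of_runs': syllables / len(speech),
    }
```

Longer runs between pauses raise mean length of runs and phonation-time ratio, which is exactly the mechanism by which FS use is argued to increase fluency.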

    Mechanisms of vowel devoicing in Japanese

    The processes of vowel devoicing in Standard Japanese were examined with respect to the phonetic and phonological environments and the syllable structure of Japanese, in comparison with vowel reduction processes in other languages, in most of which vowel reduction occurs optionally in fast or casual speech. This thesis examined whether Japanese vowel devoicing is a phonetic phenomenon caused by glottal assimilation between a high vowel and its adjacent voiceless consonants, or whether it is a more phonologically controlled, compulsory process. Experimental results showed that Japanese high vowel devoicing must be analysed separately under two devoicing conditions, namely single and consecutive devoicing environments. Devoicing was almost compulsory regardless of the presence of proposed blocking factors such as the type of preceding consonant, accentuation, or position in an utterance, as long as there was no devoiceable vowel in adjacent morae (the single devoicing condition). Under consecutive devoicing conditions, however, blocking factors became effective and prevented some devoiceable vowels from becoming voiceless. The effect of speaking rate was also generally minimal in the single devoicing condition, but in the consecutive devoicing condition vowels were devoiced more at faster tempi than at slower tempi, which created many examples of consecutively devoiced vowels over two morae. Durational observations found that vowel devoicing involves not only phonatory change but also slight durational reduction. However, the shorter duration of devoiced syllables was adjusted at the word level, so that the overall duration of a word with devoiced vowels remained similar to that of the same word without devoiced vowels, regardless of the number of devoiced vowels in the word. It must be noted that there was no clear-cut distinction between voiced and devoiced vowels, and the phonetic realisation of a devoiceable vowel could vary from fully voiced to completely voiceless. A high vowel may be voiced in a typical devoicing environment, but its intensity is significantly weaker than that of vowels in a non-devoicing environment, at all speaking tempi. The mean differences in vowel intensity between these environments were generally greater at faster tempi. The results imply that even when the vowel is voiced, its production process moves in favour of devoicing. In consecutive devoicing conditions, however, this process did not always apply: when some of the devoiceable vowels were devoiced in the consecutive devoicing environment, the intensities of the remaining devoiceable vowels were not significantly lower than those of other vowels. The intensity measurements of voiced vowels in the devoicing and non-devoicing environments suggest that Japanese vowel devoicing is part of an overall process of complex vowel weakening, and that a completely devoiced vowel is the final state of the weakening process. Japanese vowel devoicing is primarily a process of glottal assimilation, but the results in the consecutive devoicing condition showed that this process is constrained by Japanese syllable structure.
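Intensity comparisons of the kind reported above typically start from the RMS energy of each vowel segment expressed in decibels. A minimal sketch (dB relative to full scale; the thesis's exact measurement procedure may differ):

```python
import numpy as np

def rms_db(samples):
    """RMS intensity of a vowel segment in dB relative to full scale
    (0 dB = a constant signal at amplitude 1)."""
    samples = np.asarray(samples, dtype=float)
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(rms) if rms > 0 else -np.inf
```

Comparing mean `rms_db` values for devoiceable vowels in devoicing versus non-devoicing environments gives the intensity differences the abstract reports.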

    Speech verification for computer assisted pronunciation training

    Computer-assisted pronunciation training (CAPT) is an approach that uses computer technology and computer-based resources in teaching and learning pronunciation. It is part of computer-assisted language learning (CALL) technology, which has been widely applied to online learning platforms in recent years. This thesis deals with one of the central tasks in CAPT, namely speech verification. The goal is to provide a framework that identifies pronunciation errors in speech data of second language (L2) learners and generates feedback with information and instructions for error correction. Furthermore, the framework should support adaptation to new L1-L2 language pairs with minimal adjustment and modification. The central result is a novel approach to L2 speech verification which combines modern language technologies with linguistic expertise. For pronunciation verification, we select a set of L2 speech data, create alias phonemes from the errors annotated by linguists, then train an acoustic model with mixed L2 and gold-standard data and perform HTK (Hidden Markov Toolkit) phoneme recognition to identify the error phonemes. For prosody verification, FD-PSOLA and dynamic time warping are both applied to verify differences in duration, pitch and stress. Feedback is generated for both verifications. Our feedback is presented to learners not only visually, as in other existing CAPT systems, but also perceptually, by synthesizing the learner's own audio; for prosody verification, for example, the gold-standard prosody is transplanted onto the learner's own voice. The framework is self-adaptable under semi-supervision and requires only a certain amount of mixed gold-standard and annotated L2 speech data for bootstrapping. Verified speech data are validated by linguists, annotated in the case of wrong verification, and used in the next iteration of training. The Mary Annotation Tool (MAT) is developed as an open-source component of MARYTTS for both annotating and validating.
    To deal with uncertain pauses and interruptions in L2 speech, the silence model in HTK is also adapted and used in all components of the framework where forced alignment is required. Various evaluations are conducted that provide insights into the applicability and potential of our CAPT system. The pronunciation verification shows high accuracy in both precision and recall, which encourages us to acquire more error-annotated L2 speech data to enhance the trained acoustic model. To test the effect of feedback, a progressive evaluation is carried out; it shows that our perceptual feedback helps learners realize errors which they could not otherwise observe from visual feedback and textual instructions. In order to improve the user interface, a questionnaire is also designed to collect the learners' experiences and suggestions.
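The dynamic time warping step used for prosody verification aligns the learner's pitch contour with the gold standard before their differences are scored. A textbook sketch of the DTW distance (unconstrained, with an absolute-difference local cost; the thesis's configuration may differ):

```python
import math

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between two contours (e.g. F0 tracks),
    allowing one sequence to stretch or compress against the other."""
    m, n = len(a), len(b)
    D = [[math.inf] * (n + 1) for _ in range(m + 1)]
    D[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # extend the cheapest of the three admissible predecessor paths
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1]
            )
    return D[m][n]
```

A contour that merely holds a pitch target longer than the gold standard incurs no penalty, whereas a genuinely different pitch movement does, which is why DTW suits duration-tolerant prosody comparison.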

    A study on reusing resources of speech synthesis for closely-related languages

    This thesis describes research on building a text-to-speech (TTS) framework that can accommodate the lack of linguistic information for under-resourced languages by using existing resources from another language. It describes the adaptation process required when such limited resources are used. The main natural languages involved in this research are Malay and Iban. The thesis includes a study on grapheme-to-phoneme mapping and the substitution of phonemes. A set of substitution matrices is presented which shows the phoneme confusion, in terms of perception, among respondents. The experiments conducted study intelligibility as well as perception based on the context of utterances. A study of phonetic prosody is then presented and compared to the Klatt duration model, in order to determine whether cross-language similarities in duration modelling exist. A comparative study of an Iban native speaker with an Iban polyglot TTS built from Malay resources is then presented, to confirm that Malay prosody can be used to generate Iban synthesised speech. The central hypothesis of this thesis is that by using resources from a closely-related language, natural-sounding speech can be produced. The aim of this research was to show that, by adhering to the indigenous characteristics of the language, it is possible to build a polyglot synthesised speech system even with insufficient speech resources.
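The phoneme-substitution step the abstract describes can be pictured as a lookup against the target voice's inventory with a fallback substitution table. The inventory and mappings below are illustrative placeholders, not the thesis's actual Malay-Iban data:

```python
# Hypothetical target-voice phoneme inventory and substitution table,
# for illustration only; a real system would derive these from the
# perceptual substitution matrices the thesis presents.
TARGET_INVENTORY = {'a', 'i', 'u', 'e', 'o', 'k', 't', 'p', 's', 'm', 'n'}
SUBSTITUTION = {'f': 'p', 'z': 's', 'v': 'p'}  # nearest perceptual neighbours

def map_phonemes(source_phonemes):
    """Map a source-language G2P output onto the target voice's inventory,
    substituting any phoneme the target voice lacks."""
    out = []
    for ph in source_phonemes:
        if ph in TARGET_INVENTORY:
            out.append(ph)
        else:
            out.append(SUBSTITUTION.get(ph, ph))  # fall back to identity
    return out
```

The quality of the substitution table, measured perceptually across respondents, is what determines how intelligible the borrowed-resource voice sounds.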

    Intonation in a text-to-speech conversion system
