
    Mostly-Unsupervised Statistical Segmentation of Japanese Kanji Sequences

    Given the lack of word delimiters in written Japanese, word segmentation is generally considered a crucial first step in processing Japanese texts. Typical Japanese segmentation algorithms rely either on a lexicon and syntactic analysis or on pre-segmented data; but these are labor-intensive, and the lexico-syntactic techniques are vulnerable to the unknown-word problem. In contrast, we introduce a novel, more robust statistical method utilizing unsegmented training data. Despite its simplicity, the algorithm yields performance on long kanji sequences comparable to and sometimes surpassing that of state-of-the-art morphological analyzers over a variety of error metrics. The algorithm also outperforms another mostly-unsupervised statistical algorithm previously proposed for Chinese. Additionally, we present a two-level annotation scheme for Japanese to incorporate multiple segmentation granularities, and introduce two novel evaluation metrics, both based on the notion of a compatible bracket, that can account for multiple granularities simultaneously. (22 pages. To appear in Natural Language Engineering.)
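The statistical criterion can be illustrated with a toy sketch: count character n-grams in unsegmented text, then place a boundary at a gap whenever the n-grams lying entirely on either side of it are jointly more frequent than those straddling it. The voting rule below and the Latin-letter stand-in for kanji are simplifications for illustration, not the paper's exact algorithm.

```python
from collections import Counter

def ngram_counts(corpus, n):
    """Count all character n-grams in an unsegmented corpus."""
    counts = Counter()
    for i in range(len(corpus) - n + 1):
        counts[corpus[i:i + n]] += 1
    return counts

def boundary_votes(seq, counts, n):
    """For each gap in seq, vote for a boundary when the n-grams lying
    entirely on one side of the gap are more frequent than the n-grams
    straddling it (a simplified straddling test)."""
    votes = []
    for k in range(1, len(seq)):  # gap between seq[k-1] and seq[k]
        left = seq[max(0, k - n):k]
        right = seq[k:k + n]
        straddle = seq[max(0, k - n + 1):k + n - 1]
        side = counts.get(left, 0) + counts.get(right, 0)
        cross = sum(counts.get(straddle[i:i + n], 0)
                    for i in range(len(straddle) - n + 1))
        votes.append(side > cross)
    return votes

def segment(seq, counts, n=2):
    """Insert a boundary at every gap that wins the vote."""
    out, start = [], 0
    for k, vote in enumerate(boundary_votes(seq, counts, n), start=1):
        if vote:
            out.append(seq[start:k])
            start = k
    out.append(seq[start:])
    return out
```

With a toy corpus built from the "words" ab, cd, and ef, the bigram statistics alone are enough to split `"abcd"` into `["ab", "cd"]`.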

    Pronunciation Variation Analysis and CycleGAN-based Feedback Generation for CAPT

    Doctoral dissertation (Ph.D.), Seoul National University, Interdisciplinary Program in Cognitive Science, February 2020. Advisor: Minhwa Chung.
    Despite the growing popularity of learning Korean as a foreign language and the rapid development of language-learning applications, existing computer-assisted pronunciation training (CAPT) systems for Korean do not utilize the linguistic characteristics of non-native Korean speech. Pronunciation variations in non-native speech are far more diverse than those observed in native speech, which may pose a difficulty in incorporating such knowledge into an automatic system. Moreover, most existing methods rely on feature-extraction results from signal processing, prosodic analysis, and natural language processing techniques. Such methods entail limitations since they necessarily depend on finding the right features for the task and on the extraction accuracies. This thesis presents a new approach to corrective feedback generation in a CAPT system, in which pronunciation variation patterns and linguistic correlates with accentedness are analyzed and combined with a deep neural network approach, so that feature-engineering effort is minimized while the linguistically important factors for the corrective feedback generation task are maintained. Investigations of non-native Korean speech characteristics in contrast with those of native speakers, and of their correlation with accentedness judgements, show that both segmental and prosodic variations are important factors in a Korean CAPT system; in particular, coda deletion, confusion of the three-way consonant contrast, and suprasegmental errors should be prioritized in feedback generation. The thesis argues that the feedback generation task can be interpreted as a style-transfer problem, and proposes to evaluate the idea using a generative adversarial network. A corrective feedback generation model is trained on 65,100 read utterances by 217 non-native speakers from 27 mother-tongue backgrounds. The features are learnt automatically, in an unsupervised way, in an auxiliary-classifier CycleGAN setting in which the generator learns to map foreign-accented speech to native speech distributions. In order to inject linguistic knowledge into the network, an auxiliary classifier is trained so that the feedback also identifies the linguistic error types defined in the first half of the thesis. The proposed approach generates a corrected version of the speech in the learner's own voice, outperforming the conventional Pitch-Synchronous Overlap-and-Add (PSOLA) method with a 16.67% relative improvement in perceptual evaluation.
    Contents: Chapter 1. Introduction; Chapter 2. Pronunciation Analysis of Korean Produced by Chinese; Chapter 3. Correlation Analysis of Pronunciation Variations and Human Evaluation; Chapter 4. Corrective Feedback Generation for CAPT; Chapter 5. Integration of Linguistic Knowledge in an Auxiliary Classifier CycleGAN for Feedback Generation; Chapter 6. Conclusion.
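The generator's training objective described above combines three ingredients: an adversarial term, a cycle-consistency term that discourages over-correction, and an auxiliary classification term that ties the feedback to an error type. A minimal NumPy sketch of such a combined objective follows; the LSGAN-style adversarial term and the loss weights are illustrative assumptions, not the thesis's exact configuration.

```python
import numpy as np

def l1_cycle_loss(x, x_reconstructed):
    """Cycle-consistency: the round trip F(G(x)) should return to x,
    which discourages over-correction of the learner's speech."""
    return np.mean(np.abs(x - x_reconstructed))

def aux_class_loss(logits, label):
    """Cross-entropy of the auxiliary classifier, which forces the
    generated feedback to carry an identifiable error-type label."""
    z = logits - logits.max()                 # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def generator_objective(x, x_rec, d_fake, logits, label,
                        lam_cyc=10.0, lam_cls=1.0):
    """Least-squares adversarial term plus weighted cycle and
    classification terms (weights here are illustrative)."""
    adv = np.mean((d_fake - 1.0) ** 2)        # LSGAN generator loss
    return adv + lam_cyc * l1_cycle_loss(x, x_rec) \
               + lam_cls * aux_class_loss(logits, label)
```

When reconstruction is perfect, the discriminator is fooled, and the classifier confidently predicts the correct error type, the objective is close to zero, which is the fixed point the training pushes toward.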

    Introducing nativization to Spanish TTS systems

    In the modern world, speech technologies must be flexible and adaptable to any framework. Mass-media globalization introduces multilingualism as a challenge for the most popular speech applications, such as text-to-speech synthesis and automatic speech recognition. Mixed-language texts vary in their nature, and when processed, some essential characteristics must be considered. In Spain and other Spanish-speaking countries, the use of Anglicisms and other words of foreign origin is constantly growing. A particularity of peninsular Spanish is the tendency to nativize the pronunciation of non-Spanish words so that they fit properly into Spanish phonetic patterns. In our previous work, we proposed hand-crafted nativization tables that correctly nativized 24% of words from the test data. In this work, our goal was to approach the nativization challenge with data-driven methods, because they are transferable to other languages and do not drop in performance in comparison with explicit rules manually written by experts. The training and test corpora for nativization consisted of 1000 and 100 words respectively and were crafted manually. Different specifications of nativization by analogy and learning from errors focused on finding the best nativized pronunciation of foreign words. The best objective nativization results showed an improvement from 24% to 64% in word accuracy in comparison to our previous work. Furthermore, a subjective evaluation of the synthesized speech allowed for the conclusion that nativization by analogy is clearly the preferred method among listeners of different backgrounds when compared to previously proposed methods. These results are quite encouraging and prove that even a small training corpus is sufficient for achieving significant improvements in naturalness for English inclusions of variable length in Spanish utterances. Peer reviewed. Postprint (published version).
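A hand-crafted nativization table of the kind used in the authors' earlier work can be sketched as a greedy longest-match rewrite from English grapheme chunks to Spanish-friendly ones. The table entries below are invented for illustration and are not the system's actual rules; the data-driven analogy method would instead learn such correspondences from the 1000-word training corpus.

```python
# Toy nativization table: English grapheme chunks -> Spanish-friendly
# spellings. Entries are invented for illustration only.
TABLE = {
    "sh": "ch", "w": "gu", "h": "j", "ck": "k", "ee": "i", "oo": "u",
}

def nativize(word, table=TABLE):
    """Greedy longest-match rewrite of an English word into a
    Spanish-friendly form, a stand-in for table-based nativization."""
    out, i = [], 0
    keys = sorted(table, key=len, reverse=True)  # prefer longer chunks
    while i < len(word):
        for k in keys:
            if word.startswith(k, i):
                out.append(table[k])
                i += len(k)
                break
        else:                      # no chunk matched: copy the letter
            out.append(word[i])
            i += 1
    return "".join(out)
```

Longest-match ordering matters: "sh" must win over "h" so that a word like "sheet" is rewritten as a unit rather than letter by letter.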

    VALICO-UD: annotating an Italian learner corpus

    Previous work on learner language has highlighted the importance of annotated resources for describing the development of interlanguage. Despite this, few learner resources, mainly for English L2, feature error and syntactic annotation. This thesis describes the development of a novel parallel learner Italian treebank, VALICO-UD. Its name reflects two main points: where the data comes from, i.e. the corpus VALICO, a collection of non-native Italian texts elicited by comic strips, and what formalism is used for linguistic annotation, i.e. the Universal Dependencies (UD) formalism. It is a parallel treebank because for each learner sentence (LS) the resource provides a target hypothesis (TH), i.e. a parallel corrected version written by an Italian native speaker, which is in turn annotated in UD. We developed this treebank to be exploitable for interlanguage research and comparable with the resources employed in Natural Language Processing tasks such as Native Language Identification or Grammatical Error Identification and Correction. VALICO-UD is composed of 237 texts written by English, French, German, and Spanish native speakers, corresponding to 2,234 LSs, each associated with a single TH. While all LSs and THs were automatically annotated using UDPipe, only a portion of the treebank made of 398 LSs plus their corresponding THs has been manually corrected and released in May 2021 in the UD repository. This core section also features an explicit XML-based annotation of the errors occurring in each sentence. Thus, the treebank is currently organized in two sections: the core gold standard, comprising 398 LSs and their corresponding THs, and the silver standard, consisting of 1,836 LSs and their corresponding THs.
In order to contribute to the computational investigation of the peculiar type of texts included in VALICO-UD, this thesis describes the annotation schema of the resource, provides some preliminary tests of the performance of UDPipe models on this treebank, reports inter-annotator agreement results for both error and linguistic annotation, and suggests some possible applications.
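Since each LS/TH pair is released in CoNLL-U, a minimal reader suffices to line up a learner sentence with its target hypothesis and surface the differing tokens. The sketch below uses an invented LS/TH pair, not actual VALICO-UD data, and assumes a strict 1:1 token alignment, which real learner data often violates.

```python
def parse_conllu(block):
    """Parse one CoNLL-U sentence block into (id, form, head, deprel)
    tuples, skipping comment lines. Minimal reader for illustration."""
    rows = []
    for line in block.strip().splitlines():
        if line.startswith("#") or not line.strip():
            continue
        cols = line.split("\t")
        rows.append((cols[0], cols[1], cols[6], cols[7]))
    return rows

# A hypothetical learner sentence (LS) and target hypothesis (TH);
# tokens and relations are invented, not taken from VALICO-UD.
LS = """# text = He go home
1\tHe\the\tPRON\t_\t_\t2\tnsubj\t_\t_
2\tgo\tgo\tVERB\t_\t_\t0\troot\t_\t_
3\thome\thome\tADV\t_\t_\t2\tadvmod\t_\t_"""
TH = """# text = He goes home
1\tHe\the\tPRON\t_\t_\t2\tnsubj\t_\t_
2\tgoes\tgo\tVERB\t_\t_\t0\troot\t_\t_
3\thome\thome\tADV\t_\t_\t2\tadvmod\t_\t_"""

def token_diffs(ls, th):
    """Aligned 1:1 token comparison, flagging positions where the
    learner form differs from the target hypothesis."""
    return [(a[1], b[1])
            for a, b in zip(parse_conllu(ls), parse_conllu(th))
            if a[1] != b[1]]
```

On this pair, the only flagged difference is the agreement error go/goes, which is exactly the kind of divergence the treebank's error annotation layer makes explicit.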

    Speaker Independent Acoustic-to-Articulatory Inversion

    Acoustic-to-articulatory inversion, the determination of articulatory parameters from acoustic signals, is a difficult but important problem for many speech processing applications, such as automatic speech recognition (ASR) and computer aided pronunciation training (CAPT). In recent years, several approaches have been successfully implemented for speaker dependent models with parallel acoustic and kinematic training data. However, in many practical applications inversion is needed for new speakers for whom no articulatory data is available. In order to address this problem, this dissertation introduces a novel speaker adaptation approach called Parallel Reference Speaker Weighting (PRSW), based on parallel acoustic and articulatory Hidden Markov Models (HMM). This approach uses a robust normalized articulatory space and palate-referenced articulatory features combined with speaker-weighted adaptation to form an inversion mapping for new speakers that can accurately estimate articulatory trajectories. The proposed PRSW method is evaluated on the newly collected Marquette electromagnetic articulography - Mandarin Accented English (EMA-MAE) corpus using 20 native English speakers. Cross-speaker inversion results show that given a good selection of reference speakers with consistent acoustic and articulatory patterns, the PRSW approach gives good speaker independent inversion performance even without kinematic training data.
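PRSW's core idea, forming a new speaker's inversion model as a weighted combination of reference speakers, can be sketched as follows. The inverse-distance weighting is a simplified stand-in for the likelihood-based weighting used with the parallel HMMs; `prsw_weights` and the toy statistics are invented for illustration.

```python
import numpy as np

def prsw_weights(target_stats, ref_stats):
    """Weight each reference speaker by inverse distance between the
    new speaker's acoustic statistics and the reference's (a simple
    stand-in for likelihood-based reference-speaker weighting)."""
    d = np.array([np.linalg.norm(target_stats - r) for r in ref_stats])
    w = 1.0 / (d + 1e-8)          # closer references get larger weight
    return w / w.sum()            # normalize to a convex combination

def adapted_articulatory_means(weights, ref_artic_means):
    """Form the new speaker's articulatory model as the weighted sum
    of the reference speakers' articulatory mean vectors."""
    return np.tensordot(weights, ref_artic_means, axes=1)
```

A new speaker acoustically identical to one reference should recover that reference's articulatory means almost exactly, which is a useful sanity check on the weighting.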

    Essential Speech and Language Technology for Dutch: Results by the STEVIN-programme

    Computational Linguistics; Germanic Languages; Artificial Intelligence (incl. Robotics); Computing Methodologies.

    Effective Spell Checking Methods Using Clustering Algorithms

    This paper presents a novel approach to spell checking using dictionary clustering. The main goal is to reduce the number of distance calculations required when finding target words for misspellings. The method is unsupervised and combines anomalous pattern initialization with partitioning around medoids (PAM). To evaluate the method, we used an English misspelling list compiled from real examples extracted from the Birkbeck spelling error corpus. Final published version.
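The distance-saving idea can be sketched directly: route a misspelling to the nearest cluster medoid and compute edit distances only within that cluster, instead of against the whole dictionary. The sketch assumes the clusters and medoids have already been produced (e.g. by PAM); the toy dictionary is invented.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct(word, clusters):
    """Compare the misspelling to each cluster medoid first, then only
    to the words inside the closest cluster, saving distance
    computations relative to a full dictionary scan."""
    medoid = min(clusters, key=lambda m: edit_distance(word, m))
    return min(clusters[medoid], key=lambda w: edit_distance(word, w))
```

With k clusters of average size n/k, lookup cost drops from n distance computations to roughly k + n/k, which is the efficiency argument behind clustering the dictionary.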

    Beyond topic-based representations for text mining

    A massive amount of online information is natural language text: newspapers, blog articles, forum posts and comments, tweets, scientific literature, government documents, and more. While all kinds of online information are useful in general, textual information is especially important: it is the most natural, most common, and most expressive form of information. Text representation plays a critical role in application tasks like classification or information retrieval, since the quality of the underlying feature space directly impacts each task's performance. Because of this importance, many different approaches have been developed for generating text representations. By far the most common way to generate features is to segment text into words and record their n-grams. While simple term features perform relatively well in topic-based tasks, not all downstream applications are of a topical nature or can be captured by words alone. For example, determining the native language of an English essay writer will depend on more than just word choice. Methods competing with topic-based representations (such as neural networks) are often not interpretable or rely on massive amounts of training data. This thesis proposes three novel contributions to generate and analyze a large space of non-topical features. First, structural parse tree features are based solely on the structural properties of a parse tree, ignoring all of the syntactic categories in the tree. An important advantage of these "skeletons" over regular syntactic features is that they can capture global tree structures without causing problems of data sparseness or overfitting. Second, SyntacticDiff explicitly captures differences in a text document with respect to a reference corpus, creating features that are easily explained as weighted word edit differences.
These edit features are especially useful since they are derived from information not present in the current document, capturing a type of comparative feature. Third, Cross-Context Lexical Analysis (CCLA) is a general framework for analyzing similarities and differences in both term meaning and representation with respect to different, potentially overlapping partitions of a text collection. The representations analyzed by CCLA are not limited to topic-based features.
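The parse-tree "skeleton" idea can be illustrated with a small sketch: drop every syntactic category label and keep only the branching shape, so that structurally identical trees compare equal regardless of their labels. The `(label, children)` tuple encoding and the sample parses are assumptions for illustration, not the thesis's data format.

```python
def skeleton(tree):
    """Strip syntactic category labels from a nested (label, children)
    parse tree, keeping only its branching structure."""
    if isinstance(tree, str):        # leaf token
        return "*"
    label, *children = tree          # discard the category label
    return tuple(skeleton(c) for c in children)

# Hypothetical parse of "dogs bark": (S (NP (NNS dogs)) (VP (VBZ bark)))
tree = ("S", ("NP", ("NNS", "dogs")), ("VP", ("VBZ", "bark")))
```

Because the labels are discarded, two parses with different categories but identical shape map to the same skeleton, which is what lets these features generalize without the sparseness of full syntactic n-grams.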