113 research outputs found

    Contextual Phonetic Pretraining for End-to-end Utterance-level Language and Speaker Recognition

    Pretrained contextual word representations in NLP have greatly improved performance on various downstream tasks. For speech, we propose contextual frame representations that capture phonetic information at the acoustic frame level and can be used for utterance-level language, speaker, and speech recognition. These representations come from the frame-wise intermediate representations of an end-to-end, self-attentive ASR model (SAN-CTC) on spoken utterances. We first train the model on the Fisher English corpus with context-independent phoneme labels, then use its representations at inference time as features for task-specific models on the NIST LRE07 closed-set language recognition task and a Fisher speaker recognition task, giving significant improvements over the state of the art on both (e.g., a language EER of 4.68% on 3-second utterances and a 23% relative reduction in speaker EER). Results remain competitive when using a novel dilated convolutional model for language recognition, or when ASR pretraining is done with character labels only.
    Comment: submitted to INTERSPEECH 201
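    The transfer recipe described above lends itself to a short sketch: pretrain a self-attentive encoder with a CTC objective on phoneme labels, then tap a frame-wise intermediate block as features for an utterance-level classifier. The toy encoder below, the tap layer, the mean pooling, and all sizes are illustrative assumptions, not the paper's actual SAN-CTC configuration.

```python
# Minimal PyTorch sketch of the pretrain-then-transfer recipe above.
# The tiny encoder stands in for SAN-CTC; tap layer, pooling, and sizes
# are assumptions for illustration only.
import torch
import torch.nn as nn

class ToySelfAttentiveASR(nn.Module):
    def __init__(self, n_mels=80, d_model=256, n_layers=6, n_phones=44):
        super().__init__()
        self.proj = nn.Linear(n_mels, d_model)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        self.ctc_head = nn.Linear(d_model, n_phones + 1)  # +1 for the CTC blank

    def forward(self, feats, tap_layer=4):
        x = self.proj(feats)
        tapped = None
        for i, block in enumerate(self.blocks, start=1):
            x = block(x)
            if i == tap_layer:
                tapped = x  # frame-level contextual phonetic representations
        return self.ctc_head(x), tapped  # CTC logits for pretraining, features for transfer

asr = ToySelfAttentiveASR()
# ... CTC pretraining on phoneme labels (nn.CTCLoss) would happen here ...
feats = torch.randn(2, 300, 80)      # 2 utterances, 300 frames of 80-dim filterbanks
with torch.no_grad():
    _, frames = asr(feats)           # (2, 300, 256) frame representations
utt_vec = frames.mean(dim=1)         # mean-pool frames to an utterance vector
lang_clf = nn.Linear(256, 14)        # e.g. the 14 LRE07 closed-set languages
logits = lang_clf(utt_vec)
```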

    Neural approaches to spoken content embedding

    Comparing spoken segments is a central operation in speech processing. Traditional approaches in this area have favored frame-level dynamic programming algorithms, such as dynamic time warping, because they require no supervision, but they are limited in performance and efficiency. As an alternative, acoustic word embeddings -- fixed-dimensional vector representations of variable-length spoken word segments -- have begun to be considered for such tasks as well. However, the current space of such discriminative embedding models, training approaches, and their applications to real-world downstream tasks remains limited. We start by considering "single-view" training losses, where the goal is to learn an acoustic word embedding model that separates same-word and different-word spoken segment pairs. Then, we consider "multi-view" contrastive losses, in which acoustic word embeddings are learned jointly with embeddings of character sequences to generate acoustically grounded embeddings of written words, i.e., acoustically grounded word embeddings. In this thesis, we contribute new discriminative acoustic word embedding (AWE) and acoustically grounded word embedding (AGWE) approaches based on recurrent neural networks (RNNs). We improve model training in terms of both efficiency and performance. We take these developments beyond English to several low-resource languages and show that multilingual training improves performance when labeled data is limited. We apply our embedding models, both monolingual and multilingual, to the downstream tasks of query-by-example speech search and automatic speech recognition. Finally, we show how our embedding approaches compare with and complement more recent self-supervised speech models.
    Comment: PhD thesis
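    Both training regimes above can be sketched compactly: a recurrent acoustic encoder maps variable-length segments to fixed vectors, and a character encoder grounds written words in the same space. The encoders, the margin, and the triplet pairing scheme below are illustrative assumptions rather than the thesis's exact losses or architectures.

```python
# Hedged sketch of single-view and multi-view contrastive AWE training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AcousticEncoder(nn.Module):
    """AWE side: variable-length acoustic frames -> fixed-dimensional vector."""
    def __init__(self, n_feats=40, d=128):
        super().__init__()
        self.rnn = nn.GRU(n_feats, d, num_layers=2, batch_first=True, bidirectional=True)

    def forward(self, x):                      # x: (B, T, n_feats)
        _, h = self.rnn(x)                     # h: (num_layers * 2, B, d)
        return F.normalize(torch.cat([h[-2], h[-1]], dim=-1), dim=-1)  # (B, 2d)

class CharEncoder(nn.Module):
    """AGWE side: character sequence -> vector in the same embedding space."""
    def __init__(self, n_chars=30, d=128):
        super().__init__()
        self.emb = nn.Embedding(n_chars, d)
        self.rnn = nn.GRU(d, d, batch_first=True, bidirectional=True)

    def forward(self, ids):                    # ids: (B, L) character indices
        _, h = self.rnn(self.emb(ids))
        return F.normalize(torch.cat([h[-2], h[-1]], dim=-1), dim=-1)

f, g = AcousticEncoder(), CharEncoder()
anc = f(torch.randn(8, 60, 40))                # spoken segments of some words
pos = f(torch.randn(8, 55, 40))                # other tokens of the same words
neg = f(torch.randn(8, 50, 40))                # tokens of different words
single_view = F.triplet_margin_loss(anc, pos, neg, margin=0.4)

same = g(torch.randint(0, 30, (8, 12)))        # spellings of the same words
diff = g(torch.randint(0, 30, (8, 12)))        # spellings of different words
multi_view = F.triplet_margin_loss(anc, same, diff, margin=0.4)
loss = single_view + multi_view                # joint objective; equal weighting assumed
```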

    μŒμ„±μ–Έμ–΄ μ΄ν•΄μ—μ„œμ˜ μ€‘μ˜μ„± ν•΄μ†Œ

    PhD dissertation -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2022. Advisor: Nam Soo Kim.
    Ambiguity in language is inevitable: although language is a means of communication, a given concept cannot be conveyed to everyone in a perfectly identical manner. Although inevitable, ambiguity in language understanding often leads to the breakdown or failure of communication. Ambiguity exists at various levels of language, but not all of it needs to be resolved. Each domain and task exhibits different aspects of ambiguity, and it is crucial to identify the kinds of ambiguity that can be well defined and resolved, and then to draw the boundary around them. In this dissertation, we investigate the types of ambiguity that appear in spoken language processing, especially in intention understanding, and conduct research to define and resolve them. Although this phenomenon occurs in various languages, its degree and aspect depend on the language investigated. We focus on cases where the ambiguity comes from the gap between the amount of information carried by spoken language and by written text. We study Korean, in which sentence form and intention often vary with prosody. In Korean, a text is often read with multiple intentions due to multi-functional sentence enders, frequent pro-drop, wh-intervention, and similar phenomena. Given that such utterances can be problematic for intention understanding, we first define this type of ambiguity and construct a corpus that helps detect ambiguous sentences. In constructing the corpus, we consider the directivity and rhetoricalness of a sentence; these make up the criterion for classifying the intention of spoken language into statements, questions, commands, rhetorical questions, and rhetorical commands. Using this corpus of transcribed spoken language, annotated with sufficiently high inter-annotator agreement (kappa = 0.85), we show that colloquial-corpus-based language models are effective in classifying ambiguous text given only textual data, and we qualitatively analyze the characteristics of the task. We do not handle ambiguity at the text level alone. To find out whether actual disambiguation is possible given speech input, we design an artificial spoken language corpus composed only of ambiguous sentences and resolve the ambiguity with various attention-based neural network architectures. In this process, we observe that ambiguity resolution is most effective when the textual and acoustic inputs co-attend to each other's features, especially when the audio processing module conveys attention information to the text module in a multi-hop manner. Finally, assuming that the ambiguity of intention understanding has been resolved by the proposed strategies, we present a brief roadmap for how the results can be utilized in industry and research.
    By integrating a text-based ambiguity detection module with a speech-based intention understanding module, we can build a system that handles ambiguity efficiently while reducing error propagation. Such a system can be combined with a dialogue manager to build a task-oriented dialogue system capable of chit-chat, or used to reduce errors in multilingual settings such as speech translation, beyond merely monolingual conditions. Throughout the dissertation, we aim to show that ambiguity resolution for intention understanding in a prosody-sensitive language is achievable and can be utilized in both industry and research. We hope that this study helps tackle chronic ambiguity issues in other languages and domains, linking linguistic science and engineering approaches, and to that end we share the resources, results, and code used in this research with the community.
    Table of Contents
    1 Introduction
      1.1 Motivation
      1.2 Research Goal
      1.3 Outline of the Dissertation
    2 Related Work
      2.1 Spoken Language Understanding
      2.2 Speech Act and Intention
        2.2.1 Performatives and statements
        2.2.2 Illocutionary act and speech act
        2.2.3 Formal semantic approaches
      2.3 Ambiguity of Intention Understanding in Korean
        2.3.1 Ambiguities in language
        2.3.2 Speech act and intention understanding in Korean
    3 Ambiguity in Intention Understanding of Spoken Language
      3.1 Intention Understanding and Ambiguity
      3.2 Annotation Protocol
        3.2.1 Fragments
        3.2.2 Clear-cut cases
        3.2.3 Intonation-dependent utterances
      3.3 Data Construction
        3.3.1 Source scripts
        3.3.2 Agreement
        3.3.3 Augmentation
        3.3.4 Train split
      3.4 Experiments and Results
        3.4.1 Models
        3.4.2 Implementation
        3.4.3 Results
      3.5 Findings and Summary
        3.5.1 Findings
        3.5.2 Summary
    4 Disambiguation of Speech Intention
      4.1 Ambiguity Resolution
        4.1.1 Prosody and syntax
        4.1.2 Disambiguation with prosody
        4.1.3 Approaches in SLU
      4.2 Dataset Construction
        4.2.1 Script generation
        4.2.2 Label tagging
        4.2.3 Recording
      4.3 Experiments and Results
        4.3.1 Models
        4.3.2 Results
      4.4 Summary
    5 System Integration and Application
      5.1 System Integration for Intention Identification
        5.1.1 Proof of concept
        5.1.2 Preliminary study
      5.2 Application to Spoken Dialogue System
        5.2.1 What is 'Free-running'
        5.2.2 Omakase chatbot
      5.3 Beyond Monolingual Approaches
        5.3.1 Spoken language translation
        5.3.2 Dataset
        5.3.3 Analysis
        5.3.4 Discussion
      5.4 Summary
    6 Conclusion and Future Work
    Bibliography
    Abstract (In Korean)
    Acknowledgment
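    The dissertation's central modeling finding -- that disambiguation works best when the audio module conveys attention information to the text module over multiple hops -- can be sketched as a small co-attention classifier. Dimensions, hop count, the residual fusion, and the intent head below are assumptions for illustration, not the dissertation's exact architecture.

```python
# Hedged sketch: multi-hop attention from acoustic states into token states.
import torch
import torch.nn as nn

class MultiHopCoAttention(nn.Module):
    def __init__(self, d=256, n_hops=3, n_intents=5):
        super().__init__()
        self.hops = nn.ModuleList(
            nn.MultiheadAttention(d, num_heads=4, batch_first=True) for _ in range(n_hops)
        )
        self.norms = nn.ModuleList(nn.LayerNorm(d) for _ in range(n_hops))
        # 5 intents: statement, question, command, rhetorical question, rhetorical command
        self.clf = nn.Linear(d, n_intents)

    def forward(self, text, audio):
        # text: (B, T_t, d) token states; audio: (B, T_a, d) acoustic frame states
        for attn, norm in zip(self.hops, self.norms):
            ctx, _ = attn(query=text, key=audio, value=audio)  # audio informs text
            text = norm(text + ctx)                            # residual update per hop
        return self.clf(text.mean(dim=1))                      # utterance-level intent logits

model = MultiHopCoAttention()
logits = model(torch.randn(2, 20, 256), torch.randn(2, 120, 256))  # -> (2, 5)
```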

    Transferring speech-generic and depression-specific knowledge for Alzheimer's disease detection

    The detection of Alzheimer's disease (AD) from spontaneous speech has attracted increasing attention, while the sparsity of training data remains an important issue. This paper addresses the issue through knowledge transfer, specifically from both speech-generic and depression-specific knowledge. The paper first studies sequential knowledge transfer from generic foundation models pretrained on large amounts of speech and text data. A block-wise analysis is performed for AD diagnosis based on the representations extracted from different intermediate blocks of different foundation models. Apart from the knowledge in speech-generic representations, the paper also proposes simultaneously transferring knowledge from a speech-based depression detection task, motivated by the high comorbidity rate of depression and AD. A parallel knowledge transfer framework is studied that jointly learns the information shared between these two tasks. Experimental results show that the proposed method improves both AD and depression detection, and produces a state-of-the-art F1 score of 0.928 for AD diagnosis on the commonly used ADReSSo dataset.
    Comment: 8 pages, 4 figures. Accepted by ASRU 202
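    The parallel knowledge transfer idea above reduces to multi-task learning: a shared encoder over pooled foundation-model features feeds separate AD and depression heads trained with a joint loss. The feature dimension, the shared layer, and the 0.5 loss weight below are illustrative assumptions, not the paper's exact framework.

```python
# Hedged sketch: jointly learned AD and depression detection heads.
import torch
import torch.nn as nn

class ParallelTransfer(nn.Module):
    def __init__(self, d_in=768, d=256):
        super().__init__()
        # d_in=768 assumes frame features taken from an intermediate block
        # of a speech foundation model (a wav2vec 2.0-sized encoder, say)
        self.shared = nn.Sequential(nn.Linear(d_in, d), nn.ReLU())
        self.ad_head = nn.Linear(d, 2)       # AD vs. non-AD
        self.dep_head = nn.Linear(d, 2)      # depressed vs. non-depressed

    def forward(self, feats):                # feats: (B, T, d_in)
        h = self.shared(feats.mean(dim=1))   # pool frames, then shared projection
        return self.ad_head(h), self.dep_head(h)

model = ParallelTransfer()
ce = nn.CrossEntropyLoss()
ad_logits, dep_logits = model(torch.randn(4, 500, 768))
ad_y, dep_y = torch.randint(0, 2, (4,)), torch.randint(0, 2, (4,))
loss = ce(ad_logits, ad_y) + 0.5 * ce(dep_logits, dep_y)  # joint loss; 0.5 weight assumed
```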