97 research outputs found

    Integrating Language Models into Direct Speech Translation: An Inference-Time Solution to Control Gender Inflection

    When translating words referring to the speaker, speech translation (ST) systems should not resort to default masculine generics nor rely on potentially misleading vocal traits. Rather, they should assign gender according to the speakers' preference. The existing solutions to do so, though effective, are hardly feasible in practice as they involve dedicated model re-training on gender-labeled ST data. To overcome these limitations, we propose the first inference-time solution to control speaker-related gender inflections in ST. Our approach partially replaces the (biased) internal language model (LM) implicitly learned by the ST decoder with gender-specific external LMs. Experiments on en->es/fr/it show that our solution outperforms the base models and the best training-time mitigation strategy by up to 31.0 and 1.6 points in gender accuracy, respectively, for feminine forms. The gains are even larger (up to 32.0 and 3.4) in the challenging condition where speakers' vocal traits conflict with their gender. Comment: Accepted at EMNLP 202
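To make the mechanism concrete, here is a minimal sketch of inference-time LM fusion as the abstract describes it: at each decoding step, an estimate of the decoder's internal LM is subtracted and a gender-specific external LM is added. The function names, weights, and the way the internal LM is estimated are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical per-token log-probabilities (illustrative, not the paper's API):
#   st_logprob:  log p_ST(y_t | y_<t, audio)  from the ST decoder
#   ilm_logprob: log p_ILM(y_t | y_<t)        estimate of the decoder's internal LM
#   ext_logprob: log p_EXT(y_t | y_<t)        gender-specific external LM
def fused_token_score(st_logprob: float, ilm_logprob: float,
                      ext_logprob: float, lam: float = 0.3,
                      mu: float = 0.3) -> float:
    """Score a candidate token during beam search, partially replacing the
    (biased) internal LM with a gender-specific external LM."""
    return st_logprob - lam * ilm_logprob + mu * ext_logprob
```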

    ์Œ์„ฑ์–ธ์–ด ์ดํ•ด์—์„œ์˜ ์ค‘์˜์„ฑ ํ•ด์†Œ

    ํ•™์œ„๋…ผ๋ฌธ(๋ฐ•์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ „๊ธฐยท์ •๋ณด๊ณตํ•™๋ถ€, 2022. 8. ๊น€๋‚จ์ˆ˜.์–ธ์–ด์˜ ์ค‘์˜์„ฑ์€ ํ•„์—ฐ์ ์ด๋‹ค. ๊ทธ๊ฒƒ์€ ์–ธ์–ด๊ฐ€ ์˜์‚ฌ ์†Œํ†ต์˜ ์ˆ˜๋‹จ์ด์ง€๋งŒ, ๋ชจ๋“  ์‚ฌ๋žŒ์ด ์ƒ๊ฐํ•˜๋Š” ์–ด๋–ค ๊ฐœ๋…์ด ์™„๋ฒฝํžˆ ๋™์ผํ•˜๊ฒŒ ์ „๋‹ฌ๋  ์ˆ˜ ์—†๋Š” ๊ฒƒ์— ๊ธฐ์ธํ•œ๋‹ค. ์ด๋Š” ํ•„์—ฐ์ ์ธ ์š”์†Œ์ด๊ธฐ๋„ ํ•˜์ง€๋งŒ, ์–ธ์–ด ์ดํ•ด์—์„œ ์ค‘์˜์„ฑ์€ ์ข…์ข… ์˜์‚ฌ ์†Œํ†ต์˜ ๋‹จ์ ˆ์ด๋‚˜ ์‹คํŒจ๋ฅผ ๊ฐ€์ ธ์˜ค๊ธฐ๋„ ํ•œ๋‹ค. ์–ธ์–ด์˜ ์ค‘์˜์„ฑ์—๋Š” ๋‹ค์–‘ํ•œ ์ธต์œ„๊ฐ€ ์กด์žฌํ•œ๋‹ค. ํ•˜์ง€๋งŒ, ๋ชจ๋“  ์ƒํ™ฉ์—์„œ ์ค‘์˜์„ฑ์ด ํ•ด์†Œ๋  ํ•„์š”๋Š” ์—†๋‹ค. ํƒœ์Šคํฌ๋งˆ๋‹ค, ๋„๋ฉ”์ธ๋งˆ๋‹ค ๋‹ค๋ฅธ ์–‘์ƒ์˜ ์ค‘์˜์„ฑ์ด ์กด์žฌํ•˜๋ฉฐ, ์ด๋ฅผ ์ž˜ ์ •์˜ํ•˜๊ณ  ํ•ด์†Œ๋  ์ˆ˜ ์žˆ๋Š” ์ค‘์˜์„ฑ์ž„์„ ํŒŒ์•…ํ•œ ํ›„ ์ค‘์˜์ ์ธ ๋ถ€๋ถ„ ๊ฐ„์˜ ๊ฒฝ๊ณ„๋ฅผ ์ž˜ ์ •ํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•˜๋‹ค. ๋ณธ๊ณ ์—์„œ๋Š” ์Œ์„ฑ ์–ธ์–ด ์ฒ˜๋ฆฌ, ํŠนํžˆ ์˜๋„ ์ดํ•ด์— ์žˆ์–ด ์–ด๋–ค ์–‘์ƒ์˜ ์ค‘์˜์„ฑ์ด ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์•Œ์•„๋ณด๊ณ , ์ด๋ฅผ ํ•ด์†Œํ•˜๊ธฐ ์œ„ํ•œ ์—ฐ๊ตฌ๋ฅผ ์ง„ํ–‰ํ•œ๋‹ค. ์ด๋Ÿฌํ•œ ํ˜„์ƒ์€ ๋‹ค์–‘ํ•œ ์–ธ์–ด์—์„œ ๋ฐœ์ƒํ•˜์ง€๋งŒ, ๊ทธ ์ •๋„ ๋ฐ ์–‘์ƒ์€ ์–ธ์–ด์— ๋”ฐ๋ผ์„œ ๋‹ค๋ฅด๊ฒŒ ๋‚˜ํƒ€๋‚˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ๋‹ค. ์šฐ๋ฆฌ์˜ ์—ฐ๊ตฌ์—์„œ ์ฃผ๋ชฉํ•˜๋Š” ๋ถ€๋ถ„์€, ์Œ์„ฑ ์–ธ์–ด์— ๋‹ด๊ธด ์ •๋ณด๋Ÿ‰๊ณผ ๋ฌธ์ž ์–ธ์–ด์˜ ์ •๋ณด๋Ÿ‰ ์ฐจ์ด๋กœ ์ธํ•ด ์ค‘์˜์„ฑ์ด ๋ฐœ์ƒํ•˜๋Š” ๊ฒฝ์šฐ๋“ค์ด๋‹ค. ๋ณธ ์—ฐ๊ตฌ๋Š” ์šด์œจ(prosody)์— ๋”ฐ๋ผ ๋ฌธ์žฅ ํ˜•์‹ ๋ฐ ์˜๋„๊ฐ€ ๋‹ค๋ฅด๊ฒŒ ํ‘œํ˜„๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์€ ํ•œ๊ตญ์–ด๋ฅผ ๋Œ€์ƒ์œผ๋กœ ์ง„ํ–‰๋œ๋‹ค. ํ•œ๊ตญ์–ด์—์„œ๋Š” ๋‹ค์–‘ํ•œ ๊ธฐ๋Šฅ์ด ์žˆ๋Š”(multi-functionalํ•œ) ์ข…๊ฒฐ์–ด๋ฏธ(sentence ender), ๋นˆ๋ฒˆํ•œ ํƒˆ๋ฝ ํ˜„์ƒ(pro-drop), ์˜๋ฌธ์‚ฌ ๊ฐ„์„ญ(wh-intervention) ๋“ฑ์œผ๋กœ ์ธํ•ด, ๊ฐ™์€ ํ…์ŠคํŠธ๊ฐ€ ์—ฌ๋Ÿฌ ์˜๋„๋กœ ์ฝํžˆ๋Š” ํ˜„์ƒ์ด ๋ฐœ์ƒํ•˜๊ณค ํ•œ๋‹ค. ์ด๊ฒƒ์ด ์˜๋„ ์ดํ•ด์— ํ˜ผ์„ ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ๋‹ค๋Š” ๋ฐ์— ์ฐฉ์•ˆํ•˜์—ฌ, ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ์ค‘์˜์„ฑ์„ ๋จผ์ € ์ •์˜ํ•˜๊ณ , ์ค‘์˜์ ์ธ ๋ฌธ์žฅ๋“ค์„ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ๋„๋ก ๋ง๋ญ‰์น˜๋ฅผ ๊ตฌ์ถ•ํ•œ๋‹ค. ์˜๋„ ์ดํ•ด๋ฅผ ์œ„ํ•œ ๋ง๋ญ‰์น˜๋ฅผ ๊ตฌ์ถ•ํ•˜๋Š” ๊ณผ์ •์—์„œ ๋ฌธ์žฅ์˜ ์ง€ํ–ฅ์„ฑ(directivity)๊ณผ ์ˆ˜์‚ฌ์„ฑ(rhetoricalness)์ด ๊ณ ๋ ค๋œ๋‹ค. ์ด๊ฒƒ์€ ์Œ์„ฑ ์–ธ์–ด์˜ ์˜๋„๋ฅผ ์„œ์ˆ , ์งˆ๋ฌธ, ๋ช…๋ น, ์ˆ˜์‚ฌ์˜๋ฌธ๋ฌธ, ๊ทธ๋ฆฌ๊ณ  ์ˆ˜์‚ฌ๋ช…๋ น๋ฌธ์œผ๋กœ ๊ตฌ๋ถ„ํ•˜๊ฒŒ ํ•˜๋Š” ๊ธฐ์ค€์ด ๋œ๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ๊ธฐ๋ก๋œ ์Œ์„ฑ ์–ธ์–ด(spoken language)๋ฅผ ์ถฉ๋ถ„ํžˆ ๋†’์€ ์ผ์น˜๋„(kappa = 0.85)๋กœ ์ฃผ์„ํ•œ ๋ง๋ญ‰์น˜๋ฅผ ์ด์šฉํ•ด, ์Œ์„ฑ์ด ์ฃผ์–ด์ง€์ง€ ์•Š์€ ์ƒํ™ฉ์—์„œ ์ค‘์˜์ ์ธ ํ…์ŠคํŠธ๋ฅผ ๊ฐ์ง€ํ•˜๋Š” ๋ฐ์— ์–ด๋–ค ์ „๋žต ํ˜น์€ ์–ธ์–ด ๋ชจ๋ธ์ด ํšจ๊ณผ์ ์ธ๊ฐ€๋ฅผ ๋ณด์ด๊ณ , ํ•ด๋‹น ํƒœ์Šคํฌ์˜ ํŠน์ง•์„ ์ •์„ฑ์ ์œผ๋กœ ๋ถ„์„ํ•œ๋‹ค. ๋˜ํ•œ, ์šฐ๋ฆฌ๋Š” ํ…์ŠคํŠธ ์ธต์œ„์—์„œ๋งŒ ์ค‘์˜์„ฑ์— ์ ‘๊ทผํ•˜์ง€ ์•Š๊ณ , ์‹ค์ œ๋กœ ์Œ์„ฑ์ด ์ฃผ์–ด์ง„ ์ƒํ™ฉ์—์„œ ์ค‘์˜์„ฑ ํ•ด์†Œ(disambiguation)๊ฐ€ ๊ฐ€๋Šฅํ•œ์ง€๋ฅผ ์•Œ์•„๋ณด๊ธฐ ์œ„ํ•ด, ํ…์ŠคํŠธ๊ฐ€ ์ค‘์˜์ ์ธ ๋ฐœํ™”๋“ค๋งŒ์œผ๋กœ ๊ตฌ์„ฑ๋œ ์ธ๊ณต์ ์ธ ์Œ์„ฑ ๋ง๋ญ‰์น˜๋ฅผ ์„ค๊ณ„ํ•˜๊ณ  ๋‹ค์–‘ํ•œ ์ง‘์ค‘(attention) ๊ธฐ๋ฐ˜ ์‹ ๊ฒฝ๋ง(neural network) ๋ชจ๋ธ๋“ค์„ ์ด์šฉํ•ด ์ค‘์˜์„ฑ์„ ํ•ด์†Œํ•œ๋‹ค. ์ด ๊ณผ์ •์—์„œ ๋ชจ๋ธ ๊ธฐ๋ฐ˜ ํ†ต์‚ฌ์ /์˜๋ฏธ์  ์ค‘์˜์„ฑ ํ•ด์†Œ๊ฐ€ ์–ด๋– ํ•œ ๊ฒฝ์šฐ์— ๊ฐ€์žฅ ํšจ๊ณผ์ ์ธ์ง€ ๊ด€์ฐฐํ•˜๊ณ , ์ธ๊ฐ„์˜ ์–ธ์–ด ์ฒ˜๋ฆฌ์™€ ์–ด๋–ค ์—ฐ๊ด€์ด ์žˆ๋Š”์ง€์— ๋Œ€ํ•œ ๊ด€์ ์„ ์ œ์‹œํ•œ๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ๋งˆ์ง€๋ง‰์œผ๋กœ, ์œ„์™€ ๊ฐ™์€ ์ ˆ์ฐจ๋กœ ์˜๋„ ์ดํ•ด ๊ณผ์ •์—์„œ์˜ ์ค‘์˜์„ฑ์ด ํ•ด์†Œ๋˜์—ˆ์„ ๊ฒฝ์šฐ, ์ด๋ฅผ ์–ด๋–ป๊ฒŒ ์‚ฐ์—…๊ณ„ ํ˜น์€ ์—ฐ๊ตฌ ๋‹จ์—์„œ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋Š”๊ฐ€์— ๋Œ€ํ•œ ๊ฐ„๋žตํ•œ ๋กœ๋“œ๋งต์„ ์ œ์‹œํ•œ๋‹ค. 
ํ…์ŠคํŠธ์— ๊ธฐ๋ฐ˜ํ•œ ์ค‘์˜์„ฑ ํŒŒ์•…๊ณผ ์Œ์„ฑ ๊ธฐ๋ฐ˜์˜ ์˜๋„ ์ดํ•ด ๋ชจ๋“ˆ์„ ํ†ตํ•ฉํ•œ๋‹ค๋ฉด, ์˜ค๋ฅ˜์˜ ์ „ํŒŒ๋ฅผ ์ค„์ด๋ฉด์„œ๋„ ํšจ์œจ์ ์œผ๋กœ ์ค‘์˜์„ฑ์„ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ๋Š” ์‹œ์Šคํ…œ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์„ ๊ฒƒ์ด๋‹ค. ์ด๋Ÿฌํ•œ ์‹œ์Šคํ…œ์€ ๋Œ€ํ™” ๋งค๋‹ˆ์ €(dialogue manager)์™€ ํ†ตํ•ฉ๋˜์–ด ๊ฐ„๋‹จํ•œ ๋Œ€ํ™”(chit-chat)๊ฐ€ ๊ฐ€๋Šฅํ•œ ๋ชฉ์  ์ง€ํ–ฅ ๋Œ€ํ™” ์‹œ์Šคํ…œ(task-oriented dialogue system)์„ ๊ตฌ์ถ•ํ•  ์ˆ˜๋„ ์žˆ๊ณ , ๋‹จ์ผ ์–ธ์–ด ์กฐ๊ฑด(monolingual condition)์„ ๋„˜์–ด ์Œ์„ฑ ๋ฒˆ์—ญ์—์„œ์˜ ์—๋Ÿฌ๋ฅผ ์ค„์ด๋Š” ๋ฐ์— ํ™œ์šฉ๋  ์ˆ˜๋„ ์žˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๋ณธ๊ณ ๋ฅผ ํ†ตํ•ด, ์šด์œจ์— ๋ฏผ๊ฐํ•œ(prosody-sensitive) ์–ธ์–ด์—์„œ ์˜๋„ ์ดํ•ด๋ฅผ ์œ„ํ•œ ์ค‘์˜์„ฑ ํ•ด์†Œ๊ฐ€ ๊ฐ€๋Šฅํ•˜๋ฉฐ, ์ด๋ฅผ ์‚ฐ์—… ๋ฐ ์—ฐ๊ตฌ ๋‹จ์—์„œ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์ด๊ณ ์ž ํ•œ๋‹ค. ๋ณธ ์—ฐ๊ตฌ๊ฐ€ ๋‹ค๋ฅธ ์–ธ์–ด ๋ฐ ๋„๋ฉ”์ธ์—์„œ๋„ ๊ณ ์งˆ์ ์ธ ์ค‘์˜์„ฑ ๋ฌธ์ œ๋ฅผ ํ•ด์†Œํ•˜๋Š” ๋ฐ์— ๋„์›€์ด ๋˜๊ธธ ๋ฐ”๋ผ๋ฉฐ, ์ด๋ฅผ ์œ„ํ•ด ์—ฐ๊ตฌ๋ฅผ ์ง„ํ–‰ํ•˜๋Š” ๋ฐ์— ํ™œ์šฉ๋œ ๋ฆฌ์†Œ์Šค, ๊ฒฐ๊ณผ๋ฌผ ๋ฐ ์ฝ”๋“œ๋“ค์„ ๊ณต์œ ํ•จ์œผ๋กœ์จ ํ•™๊ณ„์˜ ๋ฐœ์ „์— ์ด๋ฐ”์ง€ํ•˜๊ณ ์ž ํ•œ๋‹ค.Ambiguity in the language is inevitable. It is because, albeit language is a means of communication, a particular concept that everyone thinks of cannot be conveyed in a perfectly identical manner. As this is an inevitable factor, ambiguity in language understanding often leads to breakdown or failure of communication. There are various hierarchies of language ambiguity. However, not all ambiguity needs to be resolved. Different aspects of ambiguity exist for each domain and task, and it is crucial to define the boundary after recognizing the ambiguity that can be well-defined and resolved. In this dissertation, we investigate the types of ambiguity that appear in spoken language processing, especially in intention understanding, and conduct research to define and resolve it. Although this phenomenon occurs in various languages, its degree and aspect depend on the language investigated. The factor we focus on is cases where the ambiguity comes from the gap between the amount of information in the spoken language and the text. Here, we study the Korean language, which often shows different sentence structures and intentions depending on the prosody. In the Korean language, a text is often read with multiple intentions due to multi-functional sentence enders, frequent pro-drop, wh-intervention, etc. We first define this type of ambiguity and construct a corpus that helps detect ambiguous sentences, given that such utterances can be problematic for intention understanding. In constructing a corpus for intention understanding, we consider the directivity and rhetoricalness of a sentence. They make up a criterion for classifying the intention of spoken language into a statement, question, command, rhetorical question, and rhetorical command. Using the corpus annotated with sufficiently high agreement on a spoken language corpus, we show that colloquial corpus-based language models are effective in classifying ambiguous text given only textual data, and qualitatively analyze the characteristics of the task. We do not handle ambiguity only at the text level. To find out whether actual disambiguation is possible given a speech input, we design an artificial spoken language corpus composed only of ambiguous sentences, and resolve ambiguity with various attention-based neural network architectures. 
In this process, we observe that the ambiguity resolution is most effective when both textual and acoustic input co-attends each feature, especially when the audio processing module conveys attention information to the text module in a multi-hop manner. Finally, assuming the case that the ambiguity of intention understanding is resolved by proposed strategies, we present a brief roadmap of how the results can be utilized at the industry or research level. By integrating text-based ambiguity detection and speech-based intention understanding module, we can build a system that handles ambiguity efficiently while reducing error propagation. Such a system can be integrated with dialogue managers to make up a task-oriented dialogue system capable of chit-chat, or it can be used for error reduction in multilingual circumstances such as speech translation, beyond merely monolingual conditions. Throughout the dissertation, we want to show that ambiguity resolution for intention understanding in prosody-sensitive language can be achieved and can be utilized at the industry or research level. We hope that this study helps tackle chronic ambiguity issues in other languages โ€‹โ€‹or other domains, linking linguistic science and engineering approaches.1 Introduction 1 1.1 Motivation 2 1.2 Research Goal 4 1.3 Outline of the Dissertation 5 2 Related Work 6 2.1 Spoken Language Understanding 6 2.2 Speech Act and Intention 8 2.2.1 Performatives and statements 8 2.2.2 Illocutionary act and speech act 9 2.2.3 Formal semantic approaches 11 2.3 Ambiguity of Intention Understanding in Korean 14 2.3.1 Ambiguities in language 14 2.3.2 Speech act and intention understanding in Korean 16 3 Ambiguity in Intention Understanding of Spoken Language 20 3.1 Intention Understanding and Ambiguity 20 3.2 Annotation Protocol 23 3.2.1 Fragments 24 3.2.2 Clear-cut cases 26 3.2.3 Intonation-dependent utterances 28 3.3 Data Construction . 32 3.3.1 Source scripts 32 3.3.2 Agreement 32 3.3.3 Augmentation 33 3.3.4 Train split 33 3.4 Experiments and Results 34 3.4.1 Models 34 3.4.2 Implementation 36 3.4.3 Results 37 3.5 Findings and Summary 44 3.5.1 Findings 44 3.5.2 Summary 45 4 Disambiguation of Speech Intention 47 4.1 Ambiguity Resolution 47 4.1.1 Prosody and syntax 48 4.1.2 Disambiguation with prosody 50 4.1.3 Approaches in SLU 50 4.2 Dataset Construction 51 4.2.1 Script generation 52 4.2.2 Label tagging 54 4.2.3 Recording 56 4.3 Experiments and Results 57 4.3.1 Models 57 4.3.2 Results 60 4.4 Summary 63 5 System Integration and Application 65 5.1 System Integration for Intention Identification 65 5.1.1 Proof of concept 65 5.1.2 Preliminary study 69 5.2 Application to Spoken Dialogue System 75 5.2.1 What is 'Free-running' 76 5.2.2 Omakase chatbot 76 5.3 Beyond Monolingual Approaches 84 5.3.1 Spoken language translation 85 5.3.2 Dataset 87 5.3.3 Analysis 94 5.3.4 Discussion 95 5.4 Summary 100 6 Conclusion and Future Work 103 Bibliography 105 Abstract (In Korean) 124 Acknowledgment 126๋ฐ•
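As an illustration of the co-attention idea described above, here is a minimal, hypothetical PyTorch sketch of a multi-hop module in which the text representation repeatedly attends over audio features. The dimensions, hop count, and module structure are assumptions for illustration, not the dissertation's exact architecture.

```python
import torch
import torch.nn as nn

class MultiHopCoAttention(nn.Module):
    """Illustrative sketch (not the dissertation's exact model): text features
    attend over audio features, and the attended result conditions the next hop,
    passing prosodic information to the text side in a multi-hop manner."""
    def __init__(self, dim: int = 256, heads: int = 4, hops: int = 2):
        super().__init__()
        self.hops = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True)
            for _ in range(hops)
        )

    def forward(self, audio: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # audio: (batch, T_audio, dim), text: (batch, T_text, dim)
        for attn in self.hops:
            text, _ = attn(query=text, key=audio, value=audio)
        return text  # text representation infused with acoustic cues
```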

    Gender bias in natural language processing

    Gender bias is a dangerous form of social bias impacting an essential group of people. The effect of gender bias propagates into our data, causing the accuracy of model predictions to differ depending on gender. In the deep learning era, our models are highly shaped by their training data, which transfers the negative biases in the data to the models, and Natural Language Processing models further amplify the bias present in the data. This thesis studies the issue of gender bias in NLP applications from different points of view. To understand and manage the effect of bias amplification, evaluation and mitigation approaches have to be explored, and the scientific community has exerted significant effort in both directions to enable solutions to the problem. Our thesis is devoted to these two main directions: proposing evaluation schemes, whether as datasets or mechanisms, and suggesting mitigation techniques. For evaluation, we proposed techniques for evaluating bias in contextualized embeddings and in multilingual translation models, and we presented benchmarks for evaluating bias in speech translation and multilingual machine translation models. For mitigation, we proposed different approaches for machine translation models: adding contextual text, adding contextual embeddings, or relaxing the architecture's constraints. Our evaluation studies concluded that gender bias is strongly encoded in contextual embeddings representing professions and stereotypical nouns. We also unveiled that algorithms amplify the bias and that the system's architecture impacts this behavior. Among the benchmarks we created is one that evaluates gender bias in speech translation systems; this research suggests that the current quality of speech translation systems is too low to evaluate gender bias accurately. Additionally, we proposed a toolkit for building multilingual datasets, balanced by gender within each occupation, for training and evaluating NMT models. We found that high-resource languages usually tend to produce more accurate masculine translations. Our mitigation studies in NMT suggest that the nature of the datasets and languages needs to be considered when choosing the right approach. Mitigating bias can rely on adding contextual information; in other cases, however, we need to rethink the model and relax some conditions that influence the bias, in ways that do not affect general performance but reduce the effect of bias amplification.
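As a toy illustration of the kind of occupation-wise gender-accuracy scoring such benchmarks enable, the sketch below assumes a hypothetical evaluation set of occupation/reference-gender/translation triples; the field names and data are invented, not the thesis's benchmark format.

```python
# Hypothetical benchmark format: each example records the occupation, the
# reference gender, and the gendered form the system actually produced.
def gender_accuracy(examples, feminine_forms, masculine_forms):
    """Fraction of translations whose gendered form matches the reference."""
    correct = 0
    for ex in examples:
        forms = feminine_forms if ex["reference_gender"] == "F" else masculine_forms
        if ex["translation"] in forms.get(ex["occupation"], set()):
            correct += 1
    return correct / len(examples)

examples = [
    {"occupation": "doctor", "reference_gender": "F", "translation": "doctora"},
    {"occupation": "doctor", "reference_gender": "M", "translation": "doctor"},
]
feminine = {"doctor": {"doctora"}}
masculine = {"doctor": {"doctor"}}
print(gender_accuracy(examples, feminine, masculine))  # 1.0
```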

    A Review of Deep Learning Techniques for Speech Processing

    The field of speech processing has undergone a transformative shift with the advent of deep learning. The use of multiple processing layers has enabled the creation of models capable of extracting intricate features from speech data. This development has paved the way for unparalleled advances in automatic speech recognition, text-to-speech synthesis, and emotion recognition, propelling the performance of these tasks to unprecedented heights. The power of deep learning techniques has opened up new avenues for research and innovation in the field of speech processing, with far-reaching implications for a range of industries and applications. This review paper provides a comprehensive overview of the key deep learning models and their applications in speech-processing tasks. We begin by tracing the evolution of speech processing research, from early approaches, such as MFCCs and HMMs, to more recent advances in deep learning architectures, such as CNNs, RNNs, transformers, conformers, and diffusion models. We categorize the approaches and compare their strengths and weaknesses for solving speech-processing tasks. Furthermore, we extensively cover the various speech-processing tasks, datasets, and benchmarks used in the literature and describe how different deep-learning networks have been utilized to tackle these tasks. Additionally, we discuss the challenges and future directions of deep learning in speech processing, including the need for more parameter-efficient, interpretable models and the potential of deep learning for multimodal speech processing. By examining the field's evolution, comparing and contrasting different approaches, and highlighting future directions and challenges, we hope to inspire further research in this exciting and rapidly advancing field.
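For readers unfamiliar with the classical front end the review starts from, here is a minimal sketch of MFCC extraction using librosa; the file path and parameter choices are placeholders, not values from the paper.

```python
import librosa

# Load 16 kHz audio and compute 13 MFCCs per frame ("speech.wav" is a placeholder).
y, sr = librosa.load("speech.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)  # (13, n_frames)
```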

    Proceedings of the Eighth Italian Conference on Computational Linguistics CliC-it 2021

    The eighth edition of the Italian Conference on Computational Linguistics (CLiC-it 2021) was held at Università degli Studi di Milano-Bicocca from 26th to 28th January 2022. After the 2020 edition, which was held in fully virtual mode due to the health emergency related to Covid-19, CLiC-it 2021 was the first opportunity for the Italian computational linguistics research community to meet in person after more than one year of full or partial lockdown.

    Robust learning of acoustic representations from diverse speech data

    Automatic speech recognition is increasingly applied to new domains. A key challenge is to robustly learn, update and maintain representations to cope with transient acoustic conditions. A typical example is broadcast media, for which speakers and environments may change rapidly, and available supervision may be poor. The concern of this thesis is to build and investigate methods for acoustic modelling that are robust to the characteristics and transient conditions as embodied by such media.

The first contribution of the thesis is a technique to make use of inaccurate transcriptions as supervision for acoustic model training. There is an abundance of audio with approximate labels, but training methods can be sensitive to label errors, and their use is therefore not trivial. State-of-the-art semi-supervised training makes effective use of a lattice of supervision, inherently encoding uncertainty in the labels to avoid overfitting to poor supervision, but does not make use of the transcriptions. Existing approaches that do aim to make use of the transcriptions typically employ an algorithm to filter or combine the transcriptions with the recognition output from a seed model, but the final result does not encode uncertainty. We propose a method to combine the lattice output from a biased recognition pass with the transcripts, crucially preserving uncertainty in the lattice where appropriate. This substantially reduces the word error rate on a broadcast task.

The second contribution is a method to factorise representations for speakers and environments so that they may be combined in novel combinations. In realistic scenarios, the speaker or environment transform at test time might be unknown, or there may be insufficient data to learn a joint transform. We show that in such cases, factorised, or independent, representations are required to avoid deteriorating performance. Using i-vectors, we factorise speaker or environment information using multi-condition training with neural networks. Specifically, we extract bottleneck features from networks trained to classify either speakers or environments. The resulting factorised representations prove beneficial when one factor is missing at test time, or when all factors are seen, but not in the desired combination.

The third contribution is an investigation of model adaptation in a longitudinal setting. In this scenario, we repeatedly adapt a model to new data, with the constraint that previous data becomes unavailable. We first demonstrate the effect of such a constraint, and show that using a cyclical learning rate may help. We then observe that these successive models lend themselves well to ensembling. Finally, we show that the impact of this constraint in an active learning setting may be detrimental to performance, and suggest combining active learning with semi-supervised training to avoid biasing the model.

The fourth contribution is a method to adapt low-level features in a parameter-efficient and interpretable manner. We propose to adapt the filters in a neural feature extractor known as SincNet. In contrast to traditional techniques that warp the filterbank frequencies in standard feature extraction, adapting SincNet parameters is more flexible and more readily optimised, whilst maintaining interpretability. On a task adapting from adult to child speech, we show that this layer is well suited for adaptation and is very effective with respect to the small number of adapted parameters.
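To give a sense of why SincNet adaptation is parameter-efficient, here is a rough PyTorch sketch of a SincNet-style layer in which each band-pass filter is defined by just two learnable values, a low cutoff and a bandwidth. This is a simplified reading of the published SincNet design, not the thesis's implementation, and it omits details such as gain normalisation and minimum-band constraints.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SincFilter(nn.Module):
    """Simplified SincNet-style band-pass layer: each filter is parameterised
    by a learnable low cutoff and bandwidth (2 parameters per filter)."""
    def __init__(self, n_filters: int = 80, kernel_size: int = 251, sr: int = 16000):
        super().__init__()
        self.sr, self.kernel_size = sr, kernel_size
        self.low_hz = nn.Parameter(torch.linspace(30, sr / 2 - 200, n_filters))
        self.band_hz = nn.Parameter(torch.full((n_filters,), 100.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples); t is the filter time axis in seconds
        t = torch.arange(-(self.kernel_size // 2), self.kernel_size // 2 + 1,
                         device=x.device) / self.sr
        low = torch.abs(self.low_hz).unsqueeze(1)
        high = low + torch.abs(self.band_hz).unsqueeze(1)
        # band-pass = difference of two ideal low-pass (sinc) filters, windowed
        sinc = (2 * high * torch.sinc(2 * high * t)
                - 2 * low * torch.sinc(2 * low * t))
        window = torch.hamming_window(self.kernel_size, device=x.device)
        filters = (sinc * window).unsqueeze(1)  # (n_filters, 1, kernel_size)
        return F.conv1d(x, filters, padding=self.kernel_size // 2)
```

Under this design, adapting the front end to a new domain such as child speech amounts to fine-tuning only low_hz and band_hz, a few hundred parameters in total, while the filter shapes remain interpretable as frequency bands.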

    Revista da Associaรงรฃo Portuguesa de Linguรญstica

    N.ยบ 10 (2023) da Revista da Associaรงรฃo Portuguesa de Linguรญsticainfo:eu-repo/semantics/publishedVersio

    ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜ ์ƒ์„ฑ ๋ชจ๋ธ์„ ์ด์šฉํ•œ ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๊ธฐ๋ฒ•

    ํ•™์œ„๋…ผ๋ฌธ(๋ฐ•์‚ฌ)--์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› :๊ณต๊ณผ๋Œ€ํ•™ ์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€,2020. 2. ์ด์ƒ๊ตฌ.Recent advances in generation capability of deep learning models have spurred interest in utilizing deep generative models for unsupervised generative data augmentation (GDA). Generative data augmentation aims to improve the performance of a downstream machine learning model by augmenting the original dataset with samples generated from a deep latent variable model. This data augmentation approach is attractive to the natural language processing community, because (1) there is a shortage of text augmentation techniques that require little supervision and (2) resource scarcity being prevalent. In this dissertation, we explore the feasibility of exploiting deep latent variable models for data augmentation on three NLP tasks: sentence classification, spoken language understanding (SLU) and dialogue state tracking (DST), represent NLP tasks of various complexities and properties -- SLU requires multi-task learning of text classification and sequence tagging, while DST requires the understanding of hierarchical and recurrent data structures. For each of the three tasks, we propose a task-specific latent variable model based on conditional, hierarchical and sequential variational autoencoders (VAE) for multi-modal joint modeling of linguistic features and the relevant annotations. We conduct extensive experiments to statistically justify our hypothesis that deep generative data augmentation is beneficial for all subject tasks. Our experiments show that deep generative data augmentation is effective for the select tasks, supporting the idea that the technique can potentially be utilized for other range of NLP tasks. Ablation and qualitative studies reveal deeper insight into the underlying mechanisms of generative data augmentation. As a secondary contribution, we also shed light onto the recurring posterior collapse phenomenon in autoregressive VAEs and, subsequently, propose novel techniques to reduce the model risk, which is crucial for proper training of complex VAE models, enabling them to synthesize better samples for data augmentation. In summary, this work intends to demonstrate and analyze the effectiveness of unsupervised generative data augmentation in NLP. Ultimately, our approach enables standardized adoption of generative data augmentation, which can be applied orthogonally to existing regularization techniques.์ตœ๊ทผ ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜ ์ƒ์„ฑ ๋ชจ๋ธ์˜ ๊ธ‰๊ฒฉํ•œ ๋ฐœ์ „์œผ๋กœ ์ด๋ฅผ ์ด์šฉํ•œ ์ƒ์„ฑ ๊ธฐ๋ฐ˜ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๊ธฐ๋ฒ•(generative data augmentation, GDA)์˜ ์‹คํ˜„ ๊ฐ€๋Šฅ์„ฑ์— ๋Œ€ํ•œ ๊ธฐ๋Œ€๊ฐ€ ์ปค์ง€๊ณ  ์žˆ๋‹ค. ์ƒ์„ฑ ๊ธฐ๋ฐ˜ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๊ธฐ๋ฒ•์€ ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜ ์ž ์žฌ๋ณ€์ˆ˜ ๋ชจ๋ธ์—์„œ ์ƒ์„ฑ ๋œ ์ƒ˜ํ”Œ์„ ์›๋ณธ ๋ฐ์ดํ„ฐ์…‹์— ์ถ”๊ฐ€ํ•˜์—ฌ ์—ฐ๊ด€๋œ ํƒœ์Šคํฌ์˜ ์„ฑ๋Šฅ์„ ํ–ฅ์ƒ์‹œํ‚ค๋Š” ๊ธฐ์ˆ ์„ ์˜๋ฏธํ•œ๋‹ค. ๋”ฐ๋ผ์„œ ์ƒ์„ฑ ๊ธฐ๋ฐ˜ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๊ธฐ๋ฒ•์€ ๋ฐ์ดํ„ฐ ๊ณต๊ฐ„์—์„œ ์ด๋ค„์ง€๋Š” ์ •๊ทœํ™” ๊ธฐ์ˆ ์˜ ํ•œ ํ˜•ํƒœ๋กœ ๊ฐ„์ฃผ๋  ์ˆ˜ ์žˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜ ์ƒ์„ฑ ๋ชจ๋ธ์˜ ์ƒˆ๋กœ์šด ํ™œ์šฉ ๊ฐ€๋Šฅ์„ฑ์€ ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ๋ถ„์•ผ์—์„œ ๋”์šฑ ์ค‘์š”ํ•˜๊ฒŒ ๋ถ€๊ฐ๋˜๋Š” ์ด์œ ๋Š” (1) ๋ฒ”์šฉ ๊ฐ€๋Šฅํ•œ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๊ธฐ์ˆ ์˜ ๋ถ€์žฌ์™€ (2) ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์˜ ํฌ์†Œ์„ฑ์„ ๊ทน๋ณตํ•  ์ˆ˜ ์žˆ๋Š” ๋Œ€์•ˆ์ด ํ•„์š”ํ•˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. 
๋ฌธ์ œ์˜ ๋ณต์žก๋„์™€ ํŠน์ง•์„ ๊ณจ๊ณ ๋ฃจ ์ฑ„์ง‘ํ•˜๊ธฐ ์œ„ํ•ด ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ํ…์ŠคํŠธ ๋ถ„๋ฅ˜(text classification), ์ˆœ์ฐจ์  ๋ ˆ์ด๋ธ”๋ง๊ณผ ๋ฉ€ํ‹ฐํƒœ์Šคํ‚น ๊ธฐ์ˆ ์ด ํ•„์š”ํ•œ ๋ฐœํ™” ์ดํ•ด(spoken language understanding, SLU), ๊ณ„์ธต์ ์ด๋ฉฐ ์žฌ๊ท€์ ์ธ ๋ฐ์ดํ„ฐ ๊ตฌ์กฐ์— ๋Œ€ํ•œ ๊ณ ๋ ค๊ฐ€ ํ•„์š”ํ•œ ๋Œ€ํ™” ์ƒํƒœ ์ถ”์ (dialogue state tracking, DST) ๋“ฑ ์„ธ ๊ฐ€์ง€ ๋ฌธ์ œ์—์„œ ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜ ์ƒ์„ฑ ๋ชจ๋ธ์„ ํ™œ์šฉํ•œ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๊ธฐ๋ฒ•์˜ ํƒ€๋‹น์„ฑ์— ๋Œ€ํ•ด ๋‹ค๋ฃฌ๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ์กฐ๊ฑด๋ถ€, ๊ณ„์ธต์  ๋ฐ ์ˆœ์ฐจ์  variational autoencoder (VAE)์— ๊ธฐ๋ฐ˜ํ•˜์—ฌ ๊ฐ ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ๋ฌธ์ œ์— ํŠนํ™”๋œ ํ…์ŠคํŠธ ๋ฐ ์—ฐ๊ด€ ๋ถ€์ฐฉ ์ •๋ณด๋ฅผ ๋™์‹œ์— ์ƒ์„ฑํ•˜๋Š” ํŠน์ˆ˜ ๋”ฅ๋Ÿฌ๋‹ ์ƒ์„ฑ ๋ชจ๋ธ๋“ค์„ ์ œ์‹œํ•˜๊ณ , ๋‹ค์–‘ํ•œ ํ•˜๋ฅ˜ ๋ชจ๋ธ๊ณผ ๋ฐ์ดํ„ฐ์…‹์„ ๋‹ค๋ฃจ๋Š” ๋“ฑ ํญ ๋„“์€ ์‹คํ—˜์„ ํ†ตํ•ด ๋”ฅ ์ƒ์„ฑ ๋ชจ๋ธ ๊ธฐ๋ฐ˜ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๊ธฐ๋ฒ•์˜ ํšจ๊ณผ๋ฅผ ํ†ต๊ณ„์ ์œผ๋กœ ์ž…์ฆํ•˜์˜€๋‹ค. ๋ถ€์ˆ˜์  ์—ฐ๊ตฌ์—์„œ๋Š” ์ž๊ธฐํšŒ๊ท€์ (autoregressive) VAE์—์„œ ๋นˆ๋ฒˆํžˆ ๋ฐœ์ƒํ•˜๋Š” posterior collapse ๋ฌธ์ œ์— ๋Œ€ํ•ด ํƒ๊ตฌํ•˜๊ณ , ํ•ด๋‹น ๋ฌธ์ œ๋ฅผ ์™„ํ™”ํ•  ์ˆ˜ ์žˆ๋Š” ์‹ ๊ทœ ๋ฐฉ์•ˆ๋„ ์ œ์•ˆํ•œ๋‹ค. ํ•ด๋‹น ๋ฐฉ๋ฒ•์„ ์ƒ์„ฑ์  ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์— ํ•„์š”ํ•œ ๋ณต์žกํ•œ VAE ๋ชจ๋ธ์— ์ ์šฉํ•˜์˜€์„ ๋•Œ, ์ƒ์„ฑ ๋ชจ๋ธ์˜ ์ƒ์„ฑ ์งˆ์ด ํ–ฅ์ƒ๋˜์–ด ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ํšจ๊ณผ์—๋„ ๊ธ์ •์ ์ธ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ์Œ์„ ๊ฒ€์ฆํ•˜์˜€๋‹ค. ๋ณธ ๋…ผ๋ฌธ์„ ํ†ตํ•ด ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ๋ถ„์•ผ์—์„œ ๊ธฐ์กด ์ •๊ทœํ™” ๊ธฐ๋ฒ•๊ณผ ๋ณ‘ํ–‰ ์ ์šฉ ๊ฐ€๋Šฅํ•œ ๋น„์ง€๋„ ํ˜•ํƒœ์˜ ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๊ธฐ๋ฒ•์˜ ํ‘œ์ค€ํ™”๋ฅผ ๊ธฐ๋Œ€ํ•ด ๋ณผ ์ˆ˜ ์žˆ๋‹ค.1 Introduction 1 1.1 Motivation 1 1.2 Dissertation Overview 6 2 Background and Related Work 8 2.1 Deep Latent Variable Models 8 2.1.1 Variational Autoencoder (VAE) 10 2.1.2 Deep Generative Models and Text Generation 12 2.2 Data Augmentation 12 2.2.1 General Description 13 2.2.2 Categorization of Data Augmentation 14 2.2.3 Theoretical Explanations 21 2.3 Summary 24 3 Basic Task: Text Classi cation 25 3.1 Introduction 25 3.2 Our Approach 28 3.2.1 Proposed Models 28 3.2.2 Training with I-VAE 29 3.3 Experiments 31 3.3.1 Datasets 32 3.3.2 Experimental Settings 33 3.3.3 Implementation Details 34 3.3.4 Data Augmentation Results 36 3.3.5 Ablation Studies 39 3.3.6 Qualitative Analysis 40 3.4 Summary 45 4 Multi-task Learning: Spoken Language Understanding 46 4.1 Introduction 46 4.2 Related Work 48 4.3 Model Description 48 4.3.1 Framework Formulation 48 4.3.2 Joint Generative Model 49 4.4 Experiments 56 4.4.1 Datasets 56 4.4.2 Experimental Settings 57 4.4.3 Generative Data Augmentation Results 61 4.4.4 Comparison to Other State-of-the-art Results 63 4.4.5 Ablation Studies 63 4.5 Summary 67 5 Complex Data: Dialogue State Tracking 68 5.1 Introduction 68 5.2 Background and Related Work 70 5.2.1 Task-oriented Dialogue 70 5.2.2 Dialogue State Tracking 72 5.2.3 Conversation Modeling 72 5.3 Variational Hierarchical Dialogue Autoencoder (VHDA) 73 5.3.1 Notations 73 5.3.2 Variational Hierarchical Conversational RNN 74 5.3.3 Proposed Model 75 5.3.4 Posterior Collapse 82 5.4 Experimental Results 84 5.4.1 Experimental Settings 84 5.4.2 Data Augmentation Results 90 5.4.3 Intrinsic Evaluation - Language Evaluation 94 5.4.4 Qualitative Results 95 5.5 Summary 101 6 Conclusion 103 6.1 Summary 103 6.2 Limitations 104 6.3 Future Work 105Docto

    Viseme-based Lip-Reading using Deep Learning

    Research in automated lip reading is an incredibly rich discipline with many facets that have been the subject of investigation, including audio-visual data, feature extraction, classification networks and classification schemas. The most advanced and up-to-date lip-reading systems can predict entire sentences with thousands of different words, and the majority of them use ASCII characters as the classification schema. The classification performance of such systems, however, has been insufficient, and covering an ever-expanding vocabulary with as few classes as possible remains a challenge. The work in this thesis contributes to the area of classification schemas by proposing an automated lip-reading model that predicts sentences using visemes as the classification schema, an alternative to the ASCII characters conventionally used to predict sentences. This thesis reviews current trends in deep learning-based automated lip reading and addresses a gap in the research by contributing to work on classification schemas. A new line of research is opened whereby an alternative way to do lip reading is explored; in doing so, lip-reading performance results for predicting sentences from a benchmark dataset are attained which improve upon the previous state-of-the-art.

In this thesis, a neural network-based lip-reading system is proposed. The system is lexicon-free and uses purely visual cues. With only a limited number of visemes as classes to recognise, the system is designed to lip read sentences covering a wide range of vocabulary and to recognise words that may not be included in system training. The system predicts sentences in a two-stage procedure, with visemes being recognised in the first stage and words being classified in the second. The second stage must therefore both overcome the one-to-many mapping problem posed in lip reading, where one set of visemes can map to several words, and cope with visemes being confused or misclassified in the first place.

To develop the proposed lip-reading system, a number of tasks have been performed in this thesis. These include the classification of continuous sequences of visemes, and the proposal of viseme-to-word conversion models that are both effective in their conversion performance and robust to the possibility of viseme confusion or misclassification. The initial system was evaluated on the challenging BBC Lip Reading Sentences 2 (LRS2) benchmark dataset, attaining a word accuracy rate of 64.6%. Compared with the state-of-the-art work in lip reading sentences reported at the time, the system achieved significantly improved performance. The system is further improved by using a language model demonstrated to be effective at discriminating between homopheme words and robust to incorrectly classified visemes. This yields improved performance in predicting spoken sentences from the LRS2 dataset, with a word accuracy rate of 79.6%, which surpasses another lip-reading system trained and evaluated on the same dataset (77.4%) and is, to the best of our knowledge, the next best result reported on LRS2.
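As a toy illustration of the one-to-many viseme-to-word problem described above, the sketch below disambiguates homopheme candidates with a language-model score. The lexicon lookup and unigram LM are invented stand-ins for the thesis's trained conversion models, which are lexicon-free.

```python
# Toy illustration (hypothetical data): /p/, /b/ and /m/ share the bilabial
# viseme, so "pat", "bat" and "mat" are homophemes sharing one viseme sequence,
# and a language model must pick among them.
lexicon = {("bilabial", "ah", "alveolar"): ["pat", "bat", "mat"]}
unigram_lm = {"pat": 0.2, "bat": 0.5, "mat": 0.3}

def visemes_to_word(visemes):
    """Return the candidate word with the highest language-model score."""
    candidates = lexicon.get(tuple(visemes), [])
    return max(candidates, key=lambda w: unigram_lm.get(w, 0.0), default=None)

print(visemes_to_word(["bilabial", "ah", "alveolar"]))  # "bat" under this toy LM
```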
    • โ€ฆ
    corecore