113 research outputs found

    Controlling Personality-Based Stylistic Variation with Neural Natural Language Generators

    Natural language generators for task-oriented dialogue must effectively realize system dialogue actions and their associated semantics. In many applications, it is also desirable for generators to control the style of an utterance. To date, work on task-oriented neural generation has primarily focused on semantic fidelity rather than achieving stylistic goals, while work on style has been done in contexts where it is difficult to measure content preservation. Here we present three different sequence-to-sequence models and carefully test how well they disentangle content and style. We use a statistical generator, Personage, to synthesize a new corpus of over 88,000 restaurant-domain utterances whose style varies according to models of personality, giving us total control over both the semantic content and the stylistic variation in the training data. We then vary the amount of explicit stylistic supervision given to the three models. We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the hidden state of the encoder at each time step, showing the benefits of explicit stylistic supervision even when the amount of training data is large.
    Comment: To appear at SIGDIAL 2018
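    The most explicit model conditions the encoder on a fixed 36-dimensional vector of stylistic parameters at every time step. A minimal NumPy sketch of that conditioning scheme (dimensions and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def encode_with_style(token_embs, style_vec, W_h, W_x, b):
    """Simple RNN encoder in which a fixed style context vector is
    concatenated to the input at every time step, so the hidden state
    is updated with both the token and the stylistic parameters."""
    h = np.zeros(W_h.shape[0])
    for emb in token_embs:                    # one update per token
        x = np.concatenate([emb, style_vec])  # token embedding + style params
        h = np.tanh(W_h @ h + W_x @ x + b)    # standard RNN transition
    return h

# Toy dimensions: 8-dim embeddings, 36 stylistic parameters, 16-dim hidden state.
rng = np.random.default_rng(0)
embs = [rng.normal(size=8) for _ in range(5)]  # a 5-token utterance
style = rng.normal(size=36)                    # e.g. Personage personality parameters
W_h = rng.normal(size=(16, 16)) * 0.1
W_x = rng.normal(size=(16, 8 + 36)) * 0.1
b = np.zeros(16)
h_final = encode_with_style(embs, style, W_h, W_x, b)
print(h_final.shape)  # (16,)
```

    Because the style vector is repeated at each step rather than given only once, the encoder cannot "forget" the stylistic goal over long utterances.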

    Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009)


    Designing for Autonomy, Competence and Relatedness in Robot-Assisted Language Learning

    The number of immigrants has risen quickly in recent years due to globalization. People move to another country for economic, educational, emotional, and other reasons, and as a result need to learn the host language to integrate into their new living environment. For adult immigrants, however, learning the host language poses many challenges; among them, maintaining intrinsic motivation is critical both for long-term language study and for their well-being. Self-Determination Theory (SDT) is a widely used theoretical framework that explains human motivation, especially intrinsic motivation, through a psychological account of its nature. According to SDT, humans are intrinsically motivated through the satisfaction of three basic needs: Autonomy, Competence, and Relatedness. Researchers have applied the theory to many topics and directions, including language learning. Social robots, meanwhile, have been used extensively in language learning contexts because of their physical embodiment and the application of artificial intelligence in robotics, and research has shown that they can create a relaxed, engaging learning environment that motivates language learners. This thesis designs and implements a robot-assisted language learning (RALL) application called SAMQ using QTrobot, a humanoid social robot capable of producing body gestures, displaying different facial expressions, and communicating in multiple languages. The study investigates SAMQ's ability to evoke intrinsic motivation in adult immigrants learning Finnish. While previous research has mostly targeted children learning English as a second language (L2), here the L2 is Finnish and the learners are adult immigrants.
The thesis conducts semi-structured interviews during the pre-study phase (N=6) to gather insights from adult immigrants living in Finland, identifying demotivating factors in their language learning experience and the unsatisfied aspects of the three basic needs. The qualitative findings from the pre-study inform the design and implementation of two versions of SAMQ, each aiming to evoke intrinsic motivation by satisfying unmet needs. The first version is a quiz-only program that tests several assumptions about human-robot interaction (HRI). The final version of SAMQ is a more comprehensive language learning application that supports two modes of study, Learning and Quizzes, and incorporates multiple modifications that address all of the adult immigrants' basic needs while additionally promoting intrinsic motivation through media. The final evaluation of SAMQ (N=6) combines a questionnaire with a semi-structured interview. The quantitative questionnaire results validate the ability of social robots to evoke adult learners' intrinsic motivation in the RALL context. The qualitative findings highlight the importance of the social robot's physical embodiment in eliciting intrinsic motivation for adult learners by satisfying Relatedness. In addition, the voice modality creates a genuine HRI for adult learners, fulfilling both Autonomy and Competence and resulting in an engaging, smooth learning experience. The use of the adult learners' L1 plays a crucial role in facilitating a relaxed and familiar learning environment, supporting both Competence and Relatedness, and multimedia learning materials make the learning experience more vivid and attractive. Ultimately, the results show that accessibility and flexibility are essential for adult learners to maintain motivation for long-term language study through the satisfaction of Autonomy.
Finally, the thesis proposes a design guideline for the RALL context, consisting of five design implications for evoking intrinsic motivation in adult learners by satisfying the three basic psychological needs of Autonomy, Competence, and Relatedness. The guideline serves as a proposal for the future design and implementation of RALL programs for adults and contributes to the development of the human-robot interaction field.

    Interactive Hesitation Synthesis: Modelling and Evaluation

    Betz S, Carlmeyer B, Wagner P, Wrede B. Interactive Hesitation Synthesis: Modelling and Evaluation. Multimodal Technologies and Interaction. 2018;2(1):9.

    Data Augmentation Techniques for Natural Language Processing Using Deep-Learning-Based Generative Models

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, February 2020. Advisor: Sang-goo Lee.
    Recent advances in the generation capability of deep learning models have spurred interest in utilizing deep generative models for unsupervised generative data augmentation (GDA). Generative data augmentation aims to improve the performance of a downstream machine learning model by augmenting the original dataset with samples generated from a deep latent variable model. This approach is attractive to the natural language processing community because (1) there is a shortage of text augmentation techniques that require little supervision and (2) resource scarcity is prevalent. In this dissertation, we explore the feasibility of exploiting deep latent variable models for data augmentation on three NLP tasks: sentence classification, spoken language understanding (SLU), and dialogue state tracking (DST). These represent NLP tasks of varying complexity and properties: SLU requires multi-task learning of text classification and sequence tagging, while DST requires the understanding of hierarchical and recurrent data structures. For each of the three tasks, we propose a task-specific latent variable model based on conditional, hierarchical, and sequential variational autoencoders (VAEs) for multi-modal joint modeling of linguistic features and the relevant annotations. We conduct extensive experiments to statistically justify our hypothesis that deep generative data augmentation is beneficial for all subject tasks. Our experiments show that it is effective for the selected tasks, supporting the idea that the technique can be extended to a wider range of NLP tasks. Ablation and qualitative studies reveal deeper insight into the underlying mechanisms of generative data augmentation.
As a secondary contribution, we also shed light on the recurring posterior collapse phenomenon in autoregressive VAEs and propose novel techniques to reduce this model risk, which is crucial for the proper training of complex VAE models, enabling them to synthesize better samples for data augmentation. In summary, this work demonstrates and analyzes the effectiveness of unsupervised generative data augmentation in NLP. Ultimately, our approach enables the standardized adoption of generative data augmentation, which can be applied orthogonally to existing regularization techniques.
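The augmentation recipe itself is simple to state: fit a generative model on the labeled data, sample synthetic labeled examples from it, and train the downstream model on the union. A schematic sketch of that loop, with a stand-in sampler where the dissertation's VAEs would go (`sample_synthetic` is a hypothetical placeholder, not the proposed models):

```python
import random

def sample_synthetic(train_set, n, rng):
    """Placeholder for a trained deep latent variable model: here we just
    resample existing pairs with light token dropout to stand in for
    decoding new (text, label) samples from a VAE."""
    synthetic = []
    for _ in range(n):
        text, label = rng.choice(train_set)
        tokens = [t for t in text.split() if rng.random() > 0.1]  # crude perturbation
        synthetic.append((" ".join(tokens), label))
    return synthetic

def augment(train_set, ratio, rng):
    """Generative data augmentation: original data plus model samples."""
    n_new = int(len(train_set) * ratio)
    return train_set + sample_synthetic(train_set, n_new, rng)

rng = random.Random(0)
original = [("book a table for two", "restaurant"), ("play some jazz", "music")] * 10
augmented = augment(original, ratio=0.5, rng=rng)
print(len(original), len(augmented))  # 20 30
```

The downstream model is then trained on `augmented` exactly as it would be on `original`, which is why the technique composes orthogonally with other regularizers.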
    1 Introduction
        1.1 Motivation
        1.2 Dissertation Overview
    2 Background and Related Work
        2.1 Deep Latent Variable Models
            2.1.1 Variational Autoencoder (VAE)
            2.1.2 Deep Generative Models and Text Generation
        2.2 Data Augmentation
            2.2.1 General Description
            2.2.2 Categorization of Data Augmentation
            2.2.3 Theoretical Explanations
        2.3 Summary
    3 Basic Task: Text Classification
        3.1 Introduction
        3.2 Our Approach
            3.2.1 Proposed Models
            3.2.2 Training with I-VAE
        3.3 Experiments
            3.3.1 Datasets
            3.3.2 Experimental Settings
            3.3.3 Implementation Details
            3.3.4 Data Augmentation Results
            3.3.5 Ablation Studies
            3.3.6 Qualitative Analysis
        3.4 Summary
    4 Multi-task Learning: Spoken Language Understanding
        4.1 Introduction
        4.2 Related Work
        4.3 Model Description
            4.3.1 Framework Formulation
            4.3.2 Joint Generative Model
        4.4 Experiments
            4.4.1 Datasets
            4.4.2 Experimental Settings
            4.4.3 Generative Data Augmentation Results
            4.4.4 Comparison to Other State-of-the-art Results
            4.4.5 Ablation Studies
        4.5 Summary
    5 Complex Data: Dialogue State Tracking
        5.1 Introduction
        5.2 Background and Related Work
            5.2.1 Task-oriented Dialogue
            5.2.2 Dialogue State Tracking
            5.2.3 Conversation Modeling
        5.3 Variational Hierarchical Dialogue Autoencoder (VHDA)
            5.3.1 Notations
            5.3.2 Variational Hierarchical Conversational RNN
            5.3.3 Proposed Model
            5.3.4 Posterior Collapse
        5.4 Experimental Results
            5.4.1 Experimental Settings
            5.4.2 Data Augmentation Results
            5.4.3 Intrinsic Evaluation - Language Evaluation
            5.4.4 Qualitative Results
        5.5 Summary
    6 Conclusion
        6.1 Summary
        6.2 Limitations
        6.3 Future Work
    Doctor
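    Posterior collapse in autoregressive VAEs is commonly countered by annealing the weight on the KL term of the ELBO so the decoder cannot ignore the latent code early in training. The sketch below shows the standard linear-annealing schedule as background; it is not the dissertation's own novel mitigation technique.

```python
def kl_weight(step, warmup_steps=10_000):
    """Linear KL annealing: the weight (beta) on the KL term ramps
    from 0 to 1 over the warm-up period, then stays at 1."""
    return min(1.0, step / warmup_steps)

def annealed_elbo(recon_log_likelihood, kl_divergence, step):
    """Annealed training objective: reconstruction term minus the
    down-weighted KL divergence between posterior and prior."""
    return recon_log_likelihood - kl_weight(step) * kl_divergence

print(kl_weight(0), kl_weight(5_000), kl_weight(20_000))  # 0.0 0.5 1.0
```

    With beta near zero at the start, the model is free to pack information into the latent variable before the KL penalty is fully applied, which empirically reduces the tendency of a strong autoregressive decoder to collapse the posterior onto the prior.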

    Students' language in computer-assisted tutoring of mathematical proofs

    Truth and proof are central to mathematics. Proving (or disproving) seemingly simple statements often turns out to be one of the hardest mathematical tasks. Yet doing proofs is rarely taught in the classroom. Studies on cognitive difficulties in learning to do proofs have shown that pupils and students not only often fail to understand or apply basic formal reasoning techniques and do not know how to use formal mathematical language but, at a far more fundamental level, also do not understand what it means to prove a statement, or do not see the purpose of proof at all. Since insight into the importance of proof and the practice of doing proofs cannot be learnt other than by doing, learning support through individualised tutoring is in demand. This volume presents part of an interdisciplinary project, set at the intersection of pedagogical science, artificial intelligence, and (computational) linguistics, which investigated the issues involved in provisioning computer-based tutoring of mathematical proofs through dialogue in natural language. The ultimate goal in this context, addressing the above-mentioned need for learning support, is to build intelligent automated tutoring systems for mathematical proofs. The research presented here focuses on the language that students use while interacting with such a system: its linguistic properties and computational modelling. Contributions are made at three levels: first, an analysis of language phenomena found in students' input to a (simulated) proof tutoring system is conducted and the variety of students' verbalisations is quantitatively assessed; second, a general computational processing strategy for informal mathematical language and methods of modelling prominent language phenomena are proposed; and third, the prospects for natural language as an input modality for proof tutoring systems are evaluated based on the collected corpora.