1,024 research outputs found

    A Study on Effective Training Methods for Autoregressive-Model-Based Text Generation

    Thesis (Ph.D.) -- Graduate School of Seoul National University : College of Engineering, Department of Electrical and Computer Engineering, August 2021. Advisor: 김효석.
    The rise of deep neural networks has promoted tremendous advances in natural language processing research. Natural language generation is a subfield of natural language processing that is indispensable for building human-like artificial intelligence, since it is responsible for delivering the decision-making of machines in natural language. For neural network-based text generation techniques, which have achieved most state-of-the-art results, autoregressive methods are generally adopted because they correspond to the word-by-word nature of human language production. In this dissertation, we investigate two different ways to train autoregressive text generation models based on deep neural networks. We first focus on token-level training for question generation, which aims to generate a question related to a given input passage. The proposed Answer-Separated Seq2Seq effectively mitigates a problem of previous question generation models in which a significant proportion of the generated questions include words from the target answer. While autoregressive methods are primarily trained with maximum likelihood estimation, they suffer from several problems, such as exposure bias. As a remedy, we propose a sequence-level GAN-based approach for text generation that promotes collaborative training in both continuous and discrete representations of text. To consolidate the achievements of the research above, we finally propose a novel way of training a sequence-level question generation model, adopting a pre-trained language model, one of the most significant breakthroughs in natural language processing, along with Proximal Policy Optimization.
    Table of contents:
    1 INTRODUCTION
      1.1 Contributions
    2 BACKGROUND
      2.1 Sequence-to-Sequence model
        2.1.1 Sequence-to-Sequence model with Attention Mechanism
      2.2 Autoregressive text generation
        2.2.1 Maximum Likelihood Training
        2.2.2 Pros and cons of autoregressive methods
      2.3 Non-autoregressive text generation
      2.4 Transformers
      2.5 Reinforcement Learning
        2.5.1 Policy Gradient
    3 TOKEN-LEVEL TRAINING OF CONDITIONAL TEXT GENERATION MODEL
      3.1 Related Work
      3.2 Task Definition
      3.3 Base Model: Encoder-Decoder with Attention
      3.4 Answer-Separated Seq2Seq
        3.4.1 Encoder
        3.4.2 Answer-Separated Decoder
      3.5 Experimental Settings
        3.5.1 Dataset
        3.5.2 Implementation Details
        3.5.3 Evaluation Methods
      3.6 Results
        3.6.1 Performance Comparison
        3.6.2 Impact of Answer Separation
        3.6.3 Question Generation for Machine Comprehension
      3.7 Conclusion
    4 SEQUENCE-LEVEL TRAINING OF UNCONDITIONAL TEXT GENERATION
      4.1 Background
        4.1.1 Generative Adversarial Networks
        4.1.2 Continuous-space Methods
        4.1.3 Discrete-space Methods
      4.2 ConcreteGAN
        4.2.1 Autoencoder Reconstruction
        4.2.2 Adversarial Training in the Latent Code Space
        4.2.3 Adversarial Training with Textual Outputs
      4.3 Experiments
        4.3.1 Dataset
        4.3.2 Experimental Settings
        4.3.3 Evaluation Metrics
        4.3.4 Experimental Results for Quality & Diversity
        4.3.5 Experimental Results for FD score
        4.3.6 Human Evaluation
        4.3.7 Analyses of Code Space
      4.4 Conclusion
    5 SEQUENCE-LEVEL TRAINING OF CONDITIONAL TEXT GENERATION
      5.1 Introduction
      5.2 Background
        5.2.1 Pre-trained Language Model
        5.2.2 Proximal Policy Optimization
      5.3 Methods
        5.3.1 Step One: Token-level Fine-tuning
        5.3.2 Step Two: Sequence-level Fine-tuning with Question-specific Reward
      5.4 Experiments
        5.4.1 Implementation Details
        5.4.2 Quantitative Analysis
        5.4.3 Qualitative Analysis
      5.5 Conclusion
    6 CONCLUSION
    7 APPENDIX
      7.1 Generated Samples
      7.2 Comparison of ARAE and ARAE*
      7.3 Human Evaluation Criteria
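    As a concrete reference point for the token-level training described in the abstract, the following is a minimal sketch of maximum-likelihood (teacher-forcing) training for an autoregressive generator. The GRU model, layer sizes, and toy batch are illustrative assumptions, not the dissertation's implementation; at inference time the model would instead condition on its own previous samples, which is the mismatch behind the exposure bias mentioned above.

```python
# Minimal sketch of token-level maximum-likelihood (teacher-forcing) training
# for an autoregressive text generation model. Sizes and the toy batch are
# illustrative assumptions, not the dissertation's code.
import torch
import torch.nn as nn

vocab_size, hidden = 10000, 256
embed = nn.Embedding(vocab_size, hidden)
rnn = nn.GRU(hidden, hidden, batch_first=True)
proj = nn.Linear(hidden, vocab_size)
loss_fn = nn.CrossEntropyLoss()

def mle_step(tokens):
    # tokens: (batch, seq_len) integer ids; predict each next token from the gold prefix
    inp, target = tokens[:, :-1], tokens[:, 1:]
    h, _ = rnn(embed(inp))                      # teacher forcing: condition on gold history
    logits = proj(h)                            # (batch, seq_len - 1, vocab_size)
    return loss_fn(logits.reshape(-1, vocab_size), target.reshape(-1))

batch = torch.randint(0, vocab_size, (4, 12))   # toy batch of token ids
loss = mle_step(batch)
loss.backward()                                 # gradients for one optimizer step
```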

    Set-to-Sequence Methods in Machine Learning: A Review

    Machine learning on sets towards sequential output is an important and ubiquitous task, with applications ranging from language modelling and meta-learning to multi-agent strategy games and power grid optimization. Combining elements of representation learning and structured prediction, its two primary challenges include obtaining a meaningful, permutation invariant set representation and subsequently utilizing this representation to output a complex target permutation. This paper provides a comprehensive introduction to the field as well as an overview of important machine learning methods tackling both of these key challenges, with a detailed qualitative comparison of selected model architectures. Comment: 46 pages of text, with 10 pages of references. Contains 2 tables and 4 figures
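    The first challenge named above, a permutation-invariant set representation, is commonly met with sum- or mean-pooled element embeddings in the Deep Sets style. Below is a minimal, hedged sketch of that idea; the layer sizes and the invariance check are illustrative, not a specific model from the review.

```python
# Minimal sketch of a permutation-invariant set encoder (Deep Sets style):
# embed each element, sum over the set dimension, then transform the pooled vector.
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    def __init__(self, in_dim=8, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())

    def forward(self, x):                      # x: (batch, set_size, in_dim)
        pooled = self.phi(x).sum(dim=1)        # summation makes the result order-invariant
        return self.rho(pooled)                # (batch, hidden) set representation

enc = SetEncoder()
x = torch.randn(2, 5, 8)
perm = x[:, torch.randperm(5)]                 # same sets, shuffled element order
print(torch.allclose(enc(x), enc(perm), atol=1e-5))   # True: representation is invariant
```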

    Beam Tree Recursive Cells

    We propose Beam Tree Recursive Cell (BT-Cell), a backpropagation-friendly framework to extend Recursive Neural Networks (RvNNs) with beam search for latent structure induction. We further extend this framework by proposing a relaxation of the hard top-k operators in beam search for better propagation of gradient signals. We evaluate our proposed models on different out-of-distribution splits of both synthetic and realistic data. Our experiments show that BT-Cell achieves near-perfect performance on several challenging structure-sensitive synthetic tasks like ListOps and logical inference while maintaining comparable performance on realistic data against other RvNN-based models. Additionally, we identify a previously unknown failure case for neural models in generalization to an unseen number of arguments in ListOps. The code is available at: https://github.com/JRC1995/BeamTreeRecursiveCells. Comment: Accepted at ICML 2023
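    The relaxation of hard top-k mentioned above can be pictured as replacing the k discrete argmax selections in beam search with k soft, differentiable selection weights. The sketch below uses an iterated softmax with a masking term; this particular scheme and its temperature are illustrative assumptions, not BT-Cell's exact operator (see the linked repository for that).

```python
# Illustrative soft relaxation of a hard top-k selection: return k soft
# (differentiable) selection distributions instead of k argmax indices.
import torch

def soft_top_k(scores, k, temperature=0.1):
    """scores: (num_candidates,) -> (k, num_candidates) soft selection weights."""
    masked = scores.clone()
    picks = []
    for _ in range(k):
        w = torch.softmax(masked / temperature, dim=-1)        # soft "pick one" over candidates
        picks.append(w)
        # suppress the mass already picked so the next pass favors a different candidate
        masked = masked + torch.log1p(-w.detach().clamp(max=1.0 - 1e-6))
    return torch.stack(picks)

scores = torch.tensor([2.0, 0.5, 1.7, -1.0], requires_grad=True)
sel = soft_top_k(scores, k=2)
sel.sum().backward()      # unlike hard top-k, gradient reaches every candidate score
print(sel)                # each row is nearly one-hot over a distinct high-scoring candidate
```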

    The Catalog Problem: Deep Learning Methods for Transforming Sets into Sequences of Clusters

    The titular Catalog Problem refers to predicting a varying number of ordered clusters from sets of any cardinality. This task arises in many diverse areas, ranging from medical triage, through multi-channel signal analysis for petroleum exploration, to product catalog structure prediction. This thesis focuses on the latter, which exemplifies a number of challenges inherent to ordered clustering. These include learning variable cluster constraints, exhibiting relational reasoning and managing combinatorial complexity, all of which present unique challenges for neural networks, combining elements of set representation, neural clustering and permutation learning. In order to approach the Catalog Problem, a curated dataset of over ten thousand real-world product catalogs consisting of more than one million product offers is provided. Additionally, a library for generating simpler, synthetic catalog structures is presented. These and other datasets form the foundation of the included work, allowing for a quantitative comparison of the proposed methods’ ability to address the underlying challenge. In particular, synthetic datasets enable the assessment of the models’ capacity to learn higher-order compositional and structural rules. Two novel neural methods are proposed to tackle the Catalog Problem: a set encoding module designed to enhance the network’s ability to condition the prediction on the entirety of the input set, and a larger architecture for inferring an input-dependent number of diverse, ordered, partitional clusters with an added cardinality prediction module. Both result in improved performance on the presented datasets, with the latter being the only neural method fulfilling all requirements inherent to addressing the Catalog Problem.
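    As a small illustration of the cardinality-prediction idea described above, the sketch below pools a catalog's product-offer embeddings into an order-invariant representation and classifies how many clusters (catalog sections) the output should contain. The dimensions, pooling choice, and maximum cluster count are toy assumptions, not the thesis's architecture.

```python
# Toy cardinality-prediction head: order-invariant pooling of a set of offers,
# followed by classification over the possible number of output clusters.
import torch
import torch.nn as nn

class CardinalityHead(nn.Module):
    def __init__(self, in_dim=8, hidden=64, max_clusters=20):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classify = nn.Linear(hidden, max_clusters)

    def forward(self, offers):                     # offers: (batch, set_size, in_dim)
        pooled = self.embed(offers).mean(dim=1)    # order-invariant mean pooling
        return self.classify(pooled)               # logits over possible cluster counts

head = CardinalityHead()
offers = torch.randn(3, 50, 8)                     # 3 catalogs of 50 product offers each
target_counts = torch.tensor([4, 7, 2])            # ground-truth number of sections
loss = nn.CrossEntropyLoss()(head(offers), target_counts)
loss.backward()
```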

    Hidden Markov models and neural networks for speech recognition

    The Hidden Markov Model (HMM) is one of the most successful modeling approaches for acoustic events in speech recognition, and more recently it has proven useful for several problems in biological sequence analysis. Although the HMM is good at capturing the temporal nature of processes such as speech, it has a very limited capacity for recognizing complex patterns involving more than first-order dependencies in the observed data sequences. This is due to the first-order state process and the assumption of state-conditional independence between observations. Artificial Neural Networks (NNs) are almost the opposite: they cannot model dynamic, temporally extended phenomena very well, but are good at static classification and regression tasks. Combining the two frameworks in a sensible way can therefore lead to a more powerful model with better classification abilities. The overall aim of this work has been to develop a probabilistic hybrid of hidden Markov models and neural networks and ..
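    To make the complementary strengths concrete, here is a hedged sketch of the hybrid idea in its simplest form: the HMM forward recursion supplies the first-order temporal model, while a toy, untrained network supplies per-frame state scores that stand in for the emission terms. All parameters are random placeholders, not the thesis's system.

```python
# Scaled HMM forward algorithm with neural-network-produced per-frame state scores.
import numpy as np

n_states, n_frames, feat_dim = 3, 6, 4
rng = np.random.default_rng(0)

A = np.full((n_states, n_states), 1.0 / n_states)   # state transition matrix (toy, uniform)
pi = np.full(n_states, 1.0 / n_states)               # initial state distribution
W = rng.normal(size=(feat_dim, n_states))            # the "neural network": one linear layer
frames = rng.normal(size=(n_frames, feat_dim))       # acoustic feature vectors

def nn_state_scores(x):
    # softmax state scores per frame, standing in for HMM emission probabilities
    z = x @ W
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward_log_likelihood(frames):
    b = nn_state_scores(frames)                      # (n_frames, n_states)
    alpha = pi * b[0]                                 # initialize forward variables
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for t in range(1, n_frames):
        alpha = (alpha @ A) * b[t]                   # first-order forward recursion
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()                  # rescale to avoid numerical underflow
    return log_lik

print(forward_log_likelihood(frames))
```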

    Deep latent-variable models for neural text generation

    Text generation aims to produce human-like natural language output for downstream tasks. It covers a wide range of applications like machine translation, document summarization, dialogue generation and so on. However, deep neural network-based end-to-end architectures are known to be data-hungry, and text generated from them usually suffers from low diversity, interpretability and controllability. As a result, it is difficult to trust their output in real-life applications. Deep latent-variable models, by specifying the probabilistic distribution over an intermediate latent process, provide a potential way of addressing these problems while maintaining the expressive power of deep neural networks. This presentation will explain how deep latent-variable models can improve over the standard encoder-decoder model for text generation. We will start from an introduction of encoder-decoder and deep latent-variable models, then go over popular optimization strategies, and finally elaborate on how latent-variable models can help improve the diversity, interpretability and data efficiency in different applications of text generation tasks.
    Text generation aims to produce human-like text output in natural language for applications. It covers a wide range of applications, such as machine translation, document summarization, dialogue generation, and so on. Recently, end-to-end architectures based on deep neural networks have mainly been used for this purpose. The end-to-end approach combines all submodules, which used to be designed by complex hand-crafted rules, into a holistic encoder-decoder architecture. With sufficient training data, state-of-the-art performance can be achieved without any language- or domain-dependent knowledge. However, deep learning models are known to be extremely data-hungry, and text generated from them usually suffers from low diversity, interpretability and controllability. As a result, it is difficult to trust their output in real-world applications. Deep latent-variable models, by specifying the probability distribution over an intermediate latent process, provide a potential way of addressing these problems while preserving the expressive power of deep neural networks. This dissertation shows how deep latent-variable models can improve text generation over the standard encoder-decoder model. We begin with an introduction to encoder-decoder and deep latent-variable models, then cover common optimization strategies such as variational inference, dynamic programming, soft relaxation and reinforcement learning. We then present the following:
    1. How latent variables can improve the diversity of text generation by learning holistic, sentence-level latent representations, so that a latent representation can first be selected and diverse texts generated from it. We present effective algorithms to train representation learning and text generation jointly through variational inference. To address the limitations of variational inference with respect to uni-modality and inconsistency, we propose a wake-sleep variant and a training objective based on mutual information. Experiments show that they outperform both standard variational inference and non-latent-variable models in dialogue generation.
    2. How latent variables can improve the controllability and interpretability of text generation by adding finer-grained latent specifications to the intermediate generation process. We illustrate the use of latent variables for word alignment, content selection, text segmentation and field-segment correspondence. We derive efficient training algorithms for them so that text generation can be controlled explicitly by manipulating the latent variable, which by its definition can be interpreted by humans.
    3. Overcoming the scarcity of training samples by treating non-parallel text as latent variables. Training can be carried out as in the standard EM algorithm, which converges stably. We show that this can be applied successfully to dialogue generation and substantially enriches the generation space through the use of non-conversational text.
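    As a pointer to how the sentence-level latent variable in point 1 is trained with variational inference, the sketch below computes a standard Gaussian-latent ELBO with the reparameterization trick. The placeholder linear encoder/decoder and the squared-error reconstruction term are illustrative assumptions; the actual models use the sequence architectures and objectives described above.

```python
# Minimal ELBO sketch for a sentence-level Gaussian latent variable:
# reconstruction term minus KL(q(z|x) || N(0, I)), with reparameterized sampling.
import torch
import torch.nn as nn

hidden, latent = 32, 8
enc_mu, enc_logvar = nn.Linear(hidden, latent), nn.Linear(hidden, latent)
dec = nn.Linear(latent, hidden)

def elbo(sentence_repr):                       # sentence_repr: (batch, hidden)
    mu, logvar = enc_mu(sentence_repr), enc_logvar(sentence_repr)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)       # reparameterization trick
    recon = -((dec(z) - sentence_repr) ** 2).sum(dim=-1)          # toy reconstruction log-prob
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(dim=-1)  # closed-form Gaussian KL
    return (recon - kl).mean()

x = torch.randn(16, hidden)
loss = -elbo(x)        # maximizing the ELBO == minimizing its negative
loss.backward()
```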