Natural Language based Context Modeling and Reasoning with LLMs: A Tutorial
Large language models (LLMs) have surged in prominence since 2018, two
decades after context-awareness was introduced into computing systems.
By taking into account the situations of ubiquitous devices, users, and
societies, context-aware computing has enabled a wide spectrum of innovative
applications, such as assisted living and location-based social network
services. To recognize contexts and make decisions for actions accordingly,
various artificial intelligence technologies, such as ontologies and OWL, have
been adopted as representations for context modeling and reasoning. Recently,
with the rise of LLMs and their improved natural language understanding and
reasoning capabilities, it has become feasible to model contexts using natural
language and perform context reasoning by interacting with LLMs such as ChatGPT
and GPT-4. In this tutorial, we demonstrate the use of texts, prompts, and
autonomous agents (AutoAgents) that enable LLMs to perform context modeling and
reasoning without requiring fine-tuning of the model. We organize and
introduce related work and name this computing paradigm LLM-driven
Context-aware Computing (LCaC). In the LCaC paradigm, users' requests, sensor
readings, and commands to actuators are all represented as text. Given the
text of a user's request and the sensor data, the AutoAgent models the context
in a prompt and sends it to the LLM for context reasoning. The LLM generates a
plan of actions in response, and the AutoAgent then follows the action plan to
achieve context-awareness. To demonstrate the concept, we present two
showcases: (1) operating a mobile z-arm in an apartment for assisted living,
and (2) planning a trip and scheduling the itinerary in a context-aware and
personalized manner.
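The text-based flow described in the abstract can be sketched roughly as
follows. Note that the prompt template, the stubbed `query_llm` helper, and
the numbered action format are illustrative assumptions for this sketch, not
the paper's actual implementation.

```python
# Illustrative sketch of the LCaC flow: the user request and sensor readings
# are plain text, the AutoAgent folds them into a prompt, and the LLM's reply
# is treated as a textual action plan. query_llm() is a stand-in for a real
# LLM call (e.g. an API request to GPT-4); here it returns a canned plan so
# the sketch runs offline.

def query_llm(prompt: str) -> str:
    return "1. move z-arm to kitchen\n2. pick up cup\n3. deliver cup to user"

def auto_agent(user_request: str, sensor_readings: dict) -> list:
    # Model the context as natural language: fold the request and the
    # sensor data into a single textual prompt.
    context = "\n".join(f"{name}: {value}" for name, value in sensor_readings.items())
    prompt = (
        "You control a mobile z-arm in an apartment.\n"
        f"Sensor readings:\n{context}\n"
        f"User request: {user_request}\n"
        "Respond with a numbered plan of actions."
    )
    plan_text = query_llm(prompt)
    # Each numbered line of the reply becomes one actuator command (still text).
    return [line.split(". ", 1)[1] for line in plan_text.splitlines()]

actions = auto_agent(
    "Please bring me a cup of water.",
    {"user_location": "sofa", "cup_location": "kitchen shelf"},
)
```

The point of the sketch is that every hop in the loop, including the actuator
commands, stays in natural language, so no fine-tuning is needed.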
Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies
Large language models (LLMs) have demonstrated remarkable performance across
a wide array of NLP tasks. However, their efficacy is undermined by undesired
and inconsistent behaviors, including hallucination, unfaithful reasoning, and
toxic content. A promising approach to rectify these flaws is self-correction,
where the LLM itself is prompted or guided to fix problems in its own output.
Techniques leveraging automated feedback -- either produced by the LLM itself
or some external system -- are of particular interest as they are a promising
way to make LLM-based solutions more practical and deployable with minimal
human feedback. This paper presents a comprehensive review of this emerging
class of techniques. We analyze and taxonomize a wide array of recent work
utilizing these strategies, including training-time, generation-time, and
post-hoc correction. We also summarize the major applications of this strategy
and conclude by discussing future directions and challenges.
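One of the surveyed strategies, post-hoc correction with automated feedback,
reduces to a generate-critique-refine loop. A minimal sketch of that loop
follows; the three model calls are canned stand-ins, not any particular
system from the survey.

```python
# Post-hoc self-correction with automated feedback: generate a draft answer,
# ask a critic for feedback, and refine until the critic finds no problem or
# the round budget is exhausted. All three "model" functions are stubs.

def generate(prompt: str) -> str:
    return "Paris is the capital of France, founded in 1889."

def critique(answer: str):
    # Return a textual critique, or None if no problem is found.
    if "1889" in answer:
        return "The founding date is wrong; drop the unsupported claim."
    return None

def refine(answer: str, feedback: str) -> str:
    return "Paris is the capital of France."

def self_correct(prompt: str, max_rounds: int = 3) -> str:
    answer = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(answer)
        if feedback is None:          # critic is satisfied: stop early
            break
        answer = refine(answer, feedback)
    return answer
```

In the survey's terms this is generation-time/post-hoc correction with an
automated (model-produced) feedback signal and no human in the loop.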
A Study on Effective Training Methods for Autoregressive-Model-based Text Generation
Thesis (Ph.D.) -- Seoul National University, College of Engineering, Dept. of Electrical and Information Engineering, August 2021. The rise of deep neural networks has promoted tremendous advances in natural language processing research. Natural language generation is a subfield of natural language processing that is indispensable in building human-like artificial intelligence, since it is responsible for delivering the decisions of machines in natural language. For neural network-based text generation techniques, which have achieved most state-of-the-art results, autoregressive methods are generally adopted because of their correspondence to the word-by-word nature of human language production. In this dissertation, we investigate two different ways to train autoregressive text generation models based on deep neural networks. We first focus on token-level training for question generation, which aims to generate a question related to a given input passage. The proposed Answer-Separated Seq2Seq effectively mitigates a problem of previous question generation models: a significant proportion of the generated questions include words from the target answer. While autoregressive methods are primarily trained with maximum likelihood estimation, they suffer from several problems, such as exposure bias. As a remedy, we propose a sequence-level GAN-based approach to text generation that promotes collaborative training on both continuous and discrete representations of text. To consolidate the achievements of the research above, we finally propose a novel way of training a sequence-level question generation model, adopting a pre-trained language model, one of the most significant breakthroughs in natural language processing, along with Proximal Policy Optimization.
1 INTRODUCTION 1
1.1 Contributions 4
2 BACKGROUND 8
2.1 Sequence-to-Sequence model 8
2.1.1 Sequence-to-Sequence model with Attention Mechanism 8
2.2 Autoregressive text generation 11
2.2.1 Maximum Likelihood Training 11
2.2.2 Pros and cons of autoregressive methods 11
2.3 Non-autoregressive text generation 13
2.4 Transformers 13
2.5 Reinforcement Learning 16
2.5.1 Policy Gradient 17
3 TOKEN-LEVEL TRAINING OF CONDITIONAL TEXT GENERATION MODEL 19
3.1 Related Work 22
3.2 Task Definition 23
3.3 Base Model: Encoder-Decoder with Attention 23
3.4 Answer-Separated Seq2Seq 25
3.4.1 Encoder 27
3.4.2 Answer-Separated Decoder 28
3.5 Experimental Settings 30
3.5.1 Dataset 30
3.5.2 Implementation Details 30
3.5.3 Evaluation Methods 32
3.6 Results 32
3.6.1 Performance Comparison 32
3.6.2 Impact of Answer Separation 34
3.6.3 Question Generation for Machine Comprehension 36
3.7 Conclusion 38
4 SEQUENCE-LEVEL TRAINING OF UNCONDITIONAL TEXT GENERATION 40
4.1 Background 42
4.1.1 Generative Adversarial Networks 42
4.1.2 Continuous-space Methods 44
4.1.3 Discrete-space Methods 44
4.2 ConcreteGAN 45
4.2.1 Autoencoder Reconstruction 45
4.2.2 Adversarial Training in the Latent Code Space 47
4.2.3 Adversarial Training with Textual Outputs 48
4.3 Experiments 49
4.3.1 Dataset 50
4.3.2 Experimental Settings 50
4.3.3 Evaluation Metrics 51
4.3.4 Experimental Results for Quality & Diversity 52
4.3.5 Experimental Results for FD score 56
4.3.6 Human Evaluation 56
4.3.7 Analyses of Code Space 57
4.4 Conclusion 60
5 SEQUENCE-LEVEL TRAINING OF CONDITIONAL TEXT GENERATION 61
5.1 Introduction 61
5.2 Background 63
5.2.1 Pre-trained Language Model 63
5.2.2 Proximal Policy Optimization 70
5.3 Methods 72
5.3.1 Step One: Token-level Fine-tuning 72
5.3.2 Step Two: Sequence-level Fine-tuning with Question-specific Reward 72
5.4 Experiments 74
5.4.1 Implementation Details 75
5.4.2 Quantitative Analysis 76
5.4.3 Qualitative Analysis 76
5.5 Conclusion 78
6 CONCLUSION 80
7 APPENDIX* 82
7.1 Generated Samples 82
7.2 Comparison of ARAE and ARAE* 84
7.3 Human Evaluation Criteria 85
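The sequence-level fine-tuning described in the abstract treats the generator
as a policy and a question-specific score as reward. The toy sketch below
uses plain REINFORCE over two fixed candidate questions to show the idea;
the thesis itself uses Proximal Policy Optimization over a pre-trained
language model, and all values here are illustrative.

```python
import math
import random

# Toy REINFORCE-style sketch of sequence-level training: the "policy" is a
# softmax over two candidate questions, and the question-specific reward
# penalizes copying the target answer into the question (the failure mode
# Answer-Separated Seq2Seq addresses at the token level).

random.seed(0)

candidates = [
    "What is the capital of France?",   # does not leak the answer
    "Is Paris the capital of France?",  # leaks the answer "Paris"
]
answer = "Paris"
logits = [0.0, 0.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def reward(question):
    # Reward 1 if the question avoids the answer span, else 0.
    return 0.0 if answer in question else 1.0

lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    i = random.choices(range(len(candidates)), weights=probs)[0]
    r = reward(candidates[i])
    # REINFORCE gradient for a categorical policy: (one_hot - probs) * reward
    for j in range(len(logits)):
        grad = ((1.0 if j == i else 0.0) - probs[j]) * r
        logits[j] += lr * grad

final_probs = softmax(logits)
```

After training, the policy concentrates on the answer-free question. PPO, as
used in the thesis, replaces this raw gradient with a clipped surrogate
objective for stability, but the reward-shaping idea is the same.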
Leveraging Feedback in Conversational Question Answering Systems
172 p. The goal of this thesis is to exploit the interaction that deployed systems have with humans, using human feedback as a learning and adaptation signal for those systems. We focus on the domain shift that conversational systems undergo when deployed. To this end, we study the case of explicit binary feedback, as this is the easiest feedback signal for humans to provide. To improve systems after deployment, we first built a conversational question-answering dataset called DoQA. This dataset contains 2,437 dialogues collected via crowdsourcing. Compared to previous work, DoQA reflects real information needs, and its dialogues are more natural and coherent. After creating the dataset, we designed an algorithm called feedback-weighted learning (FWL), which is able to improve a pre-trained supervised system using only binary feedback. Finally, we analyze the limits of this algorithm in cases where the received feedback is noisy and adapt FWL to cope with the noisy scenario. The negative results we obtain in this case demonstrate the challenge of modeling noisy user feedback, the resolution of which remains an open research question.
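The core idea of learning from deployment-time binary feedback can be
sketched as weighting each logged example's loss by its feedback signal.
This is a generic illustration only; the actual FWL objective in the thesis
derives its weights differently, and the interactions and loss values below
are invented.

```python
# Generic sketch of feedback-weighted training data: each logged interaction
# carries a binary thumbs-up/down signal, and each example's supervised loss
# is weighted by that signal, so the system is not trained to repeat answers
# that users rejected.

interactions = [
    {"answer": "Use the password-reset link.", "loss": 0.3, "feedback": 1},
    {"answer": "Contact the CEO directly.",    "loss": 0.9, "feedback": 0},
]

def weighted_loss(examples):
    # Positive feedback (1) keeps the example's loss in the objective;
    # negative feedback (0) zeroes it out.
    return sum(ex["loss"] * ex["feedback"] for ex in examples)

total = weighted_loss(interactions)  # only the accepted answer contributes
```

With noisy feedback, the thesis shows, such weights become unreliable, which
is why the noisy-feedback setting remains open.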
An In-depth Investigation of User Response Simulation for Conversational Search
Conversational search has seen increased recent attention in both the IR and
NLP communities. It seeks to clarify and solve a user's search need through
multi-turn natural language interactions. However, most existing systems are
trained and demonstrated with recorded or artificial conversation logs.
Eventually, conversational search systems should be trained, evaluated, and
deployed in an open-ended setting with unseen conversation trajectories. A key
challenge is that training and evaluating such systems both require a
human-in-the-loop, which is expensive and does not scale. One strategy to
address this is to simulate users, thereby reducing the scaling costs.
However, current user simulators are either limited to responding only to
yes-no questions from the conversational search system or are unable to
produce high-quality responses in general.
In this paper, we show that the current state-of-the-art user simulation
system can be significantly improved by replacing it with a smaller but more
advanced natural language generation model. Rather than merely reporting this
new state of the art, we present an in-depth investigation of the task of
simulating user responses for conversational search. Our goal is to
supplement existing work with an insightful hand-analysis of the challenges
that remain unsolved by the advanced model, and to propose our solutions for them.
The challenges we identified include (1) dataset noise, (2) a blind spot that
is difficult for existing models to learn, and (3) a specific type of
misevaluation in the standard empirical setup. Except for the dataset noise
issue, we propose solutions to cover the training blind spot and to avoid the
misevaluation. Our proposed solutions lead to further improvements. Our best
system improves the previous state-of-the-art significantly.