The Consensus Game: Language Model Generation via Equilibrium Search
When applied to question answering and other text generation tasks, language
models (LMs) may be queried generatively (by sampling answers from their output
distribution) or discriminatively (by using them to score or rank a set of
candidate outputs). These procedures sometimes yield very different
predictions. How do we reconcile mutually incompatible scoring procedures to
obtain coherent LM predictions? We introduce a new, training-free,
game-theoretic procedure for language model decoding. Our approach casts
language model decoding as a regularized imperfect-information sequential
signaling game - which we term the CONSENSUS GAME - in which a GENERATOR seeks
to communicate an abstract correctness parameter using natural language
sentences to a DISCRIMINATOR. We develop computational procedures for finding
approximate equilibria of this game, resulting in a decoding algorithm we call
EQUILIBRIUM-RANKING. Applied to a large number of tasks (including reading
comprehension, commonsense reasoning, mathematical problem-solving, and
dialog), EQUILIBRIUM-RANKING consistently, and sometimes substantially,
improves performance over existing LM decoding procedures - on multiple
benchmarks, we observe that applying EQUILIBRIUM-RANKING to LLaMA-7B
outperforms the much larger LLaMA-65B and PaLM-540B models. These results
highlight the promise of game-theoretic tools for addressing fundamental
challenges of truthfulness and consistency in LMs.
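The equilibrium search described above can be illustrated, very loosely, with a toy sketch: two policies over the same candidate answers run regularized multiplicative-weights updates toward agreement, each anchored to its initial distribution. The paper's actual EQUILIBRIUM-RANKING uses piKL no-regret dynamics over a latent correctness parameter; everything below (the function names, the `eta`/`lam` constants, and the final product-of-policies ranking) is an illustrative assumption, not the published implementation.

```python
import math

def _softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def equilibrium_ranking(gen_logprobs, disc_logprobs, iters=500, eta=0.1, lam=0.1):
    """Toy consensus-game dynamics (illustrative, not the paper's algorithm):
    a 'generator' policy and a 'discriminator' policy over the same candidate
    answers take small multiplicative-weights steps toward each other, each
    regularized (weight lam) toward its own initial distribution. Returns a
    combined score per candidate for ranking."""
    n = len(gen_logprobs)
    g0 = _softmax(gen_logprobs)   # initial generator distribution
    d0 = _softmax(disc_logprobs)  # initial discriminator distribution
    g, d = g0[:], d0[:]
    for _ in range(iters):
        # each player's utility: agree with the other player, stay near its anchor
        g_logits = [math.log(g[i]) + eta * (d[i] + lam * math.log(g0[i])) for i in range(n)]
        d_logits = [math.log(d[i]) + eta * (g[i] + lam * math.log(d0[i])) for i in range(n)]
        g, d = _softmax(g_logits), _softmax(d_logits)
    # rank candidates by the product of the two near-equilibrium policies
    return [g[i] * d[i] for i in range(n)]
```

When the two initial scorings agree, the dynamics preserve their shared preference; when they conflict, the anchoring terms keep the final ranking a compromise between the two rather than letting either player dominate.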
Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning
Text-based adventure games provide a platform on which to explore
reinforcement learning in the context of a combinatorial action space, such as
natural language. We present a deep reinforcement learning architecture that
represents the game state as a knowledge graph which is learned during
exploration. This graph is used to prune the action space, enabling more
efficient exploration. The question of which action to take can be reduced to a
question-answering task, a form of transfer learning that pre-trains certain
parts of our architecture. In experiments using the TextWorld framework, we
show that our proposed technique can learn a control policy faster than
baseline alternatives. We have also open-sourced our code at
https://github.com/rajammanabrolu/KG-DQN. Comment: Proceedings of NAACL-HLT 2019
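The knowledge-graph action pruning described above can be sketched in a few lines. The names and the simple entity-matching heuristic here are illustrative assumptions (the actual KG-DQN scores actions with a learned network conditioned on the graph), not the open-sourced implementation:

```python
def prune_actions(actions, knowledge_graph):
    """Toy knowledge-graph action pruning (illustrative names, not KG-DQN's API):
    keep only candidate action strings that mention at least one entity
    currently present in the graph of (subject, relation, object) triples."""
    entities = set()
    for subj, _rel, obj in knowledge_graph:
        entities.add(subj.lower())
        entities.add(obj.lower())
    pruned = []
    for action in actions:
        # an action survives if any of its words names a known entity
        if set(action.lower().split()) & entities:
            pruned.append(action)
    return pruned

# hypothetical game state: the agent has seen a kitchen with a key on a table
kg = [("player", "is_in", "kitchen"), ("key", "is_on", "table")]
acts = ["take key", "open mailbox", "examine table", "go north"]
```

With this state, "take key" and "examine table" survive while "open mailbox" and "go north" are pruned, shrinking the combinatorial action space the agent must explore.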
Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval
While there have been many proposals on making AI algorithms explainable, few
have attempted to evaluate the impact of AI-generated explanations on human
performance in conducting human-AI collaborative tasks. To bridge the gap, we
propose a Twenty-Questions style collaborative image retrieval game,
Explanation-assisted Guess Which (ExAG), as a method of evaluating the efficacy
of explanations (visual evidence or textual justification) in the context of
Visual Question Answering (VQA). In our proposed ExAG, a human user needs to
guess a secret image picked by the VQA agent by asking natural language
questions to it. We show that overall, when AI explains its answers, users
succeed more often in guessing the secret image correctly. Notably, a few
correct explanations can readily improve human performance when VQA answers are
mostly incorrect as compared to no-explanation games. Furthermore, we also show
that while explanations rated as "helpful" significantly improve human
performance, "incorrect" and "unhelpful" explanations can degrade performance
as compared to no-explanation games. Our experiments, therefore, demonstrate
that ExAG is an effective means to evaluate the efficacy of AI-generated
explanations on a human-AI collaborative task. Comment: 2019 AAAI Conference on Human Computation and Crowdsourcing
Do neural-network question answering systems have a role to play in the deployment of information systems?
As Internet users become more numerous, experienced, and skillful, and the number of companies doing e-commerce increases worldwide, so does the demand for online information about products and services. To satisfy this increasing demand, many companies have resorted to providing customer support services over a variety of online channels such as e-mail, chat services, voice over internet protocol (VoIP), etc. This article presents a stepwise approach to the construction of hybrid question answering systems based upon neural network technologies and natural language processing. This special kind of information system not only provides high-speed answers to questions posed by customers,
but also allows customers to receive answers to their questions on a 24/7 basis, provides well-conceived standard answers to those questions, allows for a precise recording of customer communication, and makes the management of customer support services easier. All of this is made clear by a case study about the development of an automatic question answering system for the "SEBRAE Challenge", a business game involving university students in seven different countries in South America.
El Mundo de Comida: the relative effectiveness of digital game feedback and classroom feedback in helping students learn Spanish food vocabulary
Feedback has been defined as "helpful information or criticism that is given to someone to say what can be done to improve a performance, product, etc." (Merriam-Webster, 2014). Within the field of Second Language Acquisition (SLA), researchers have shown that language learners acquire languages best when they are provided with feedback (Gass & Selinker, 2008; Loewen, 2012). Because of the importance of feedback to the language learning process, there is an ongoing line of investigation that seeks to determine whether differences in how and when feedback is provided lead to different results in acquisition (Loewen, 2012). To date this research has primarily focused on comparing the effectiveness of the different types of feedback that naturally occur within language classrooms, as identified by such classic studies as Lyster and Ranta (1997; Bargiela, 2003). However, there are other possible approaches to feedback than those that naturally occur within the language classroom. One of these alternatives is the approach to feedback used in digital games. Similar to what is found in the field of SLA, within the field of digital game research it has been established that feedback is important for successful learning (Schell, 2008). Nevertheless, to date no research has been conducted that compares the SLA approach to feedback and the digital game approach to feedback in order to determine which would lead to better language acquisition within a digital game. Answering this question is the goal of the present dissertation. To answer it, I created two versions of a digital game, called "Mundo de Comida" (MuCo), "World of Food", which is designed to help novice Spanish learners acquire food vocabulary.
One version of the game employs feedback strategies based on the most commonly employed feedback used in Spanish language classes, while the other uses feedback designed according to the most commonly used feedback mechanisms in commercial digital games. A comparison of the vocabulary gains according to feedback type allows us to see which type of feedback seems to help learners of Spanish acquire vocabulary within the context of MuCo. The findings indicate that MuCo does indeed help participants acquire food vocabulary. However, there is no significant difference in the effectiveness of the two feedback types, which is likely due to the fact that both have been refined within their respective environments. Nevertheless, there is evidence to suggest that participants found the version containing the digital game-style feedback to be more game-like than the other. It was also found that, for several participants, MuCo was motivating in the sense that they played more of the game than was required. Finally, there was no significant effect found for the participants' self-reported gaming habits, personalities, or motivation. These findings suggest that well-designed digital games can help learners acquire Spanish vocabulary, and that the impact of differences among participants is negligible when the game is well designed. Spanish and Portuguese