Survey on Evaluation Methods for Dialogue Systems
In this paper we survey the methods and concepts developed for the evaluation
of dialogue systems. Evaluation is a crucial part of the development process,
and dialogue systems are often evaluated by means of human judgements and
questionnaires. However, this tends to be cost- and time-intensive. Thus, much
work has been put into finding methods that reduce the amount of human labour
involved. In this survey, we present the main concepts and methods. To do so,
we differentiate between the various classes of dialogue systems
(task-oriented, conversational, and question-answering dialogue systems). For
each class, we introduce the main technologies and then present the evaluation
methods developed for it.
Improving Retrieval-Based Question Answering with Deep Inference Models
Question answering is one of the most important and difficult applications at
the border of information retrieval and natural language processing,
especially for complex science questions that require some form of inference
to determine the correct answer. In this paper, we present a two-step method
that combines information retrieval techniques optimized for question
answering with deep learning models for natural language inference, in order
to tackle multiple-choice question answering in the science domain. For each
question-answer pair, we use standard retrieval-based models to find relevant
candidate contexts and decompose the main problem into two sub-problems.
First, we assign a correctness score to each candidate answer based on its
context, using Lucene retrieval models. Second, we use deep learning
architectures to decide whether a candidate answer can be inferred from a
well-chosen context consisting of sentences retrieved from the knowledge base.
In the end, these solvers are combined with a simple neural network to predict
the correct answer. The proposed two-step model outperforms the best
retrieval-based solver by over 3% in absolute accuracy.
Comment: 8 pages, 2 figures, 8 tables, accepted at IJCNN 201
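The final combination step lends itself to a compact illustration. Below is a
minimal sketch, assuming each solver already emits one score per candidate
answer; the class name AnswerCombiner, the layer sizes, and the example scores
are illustrative choices, not details taken from the paper.

    # Minimal sketch of the final combination step, assuming each solver
    # already produces a per-answer score. Names and shapes are illustrative.
    import torch
    import torch.nn as nn

    class AnswerCombiner(nn.Module):
        """Combines per-solver scores for each candidate answer into one logit."""
        def __init__(self, num_solvers: int):
            super().__init__()
            # A simple feed-forward network over the per-solver scores.
            self.mlp = nn.Sequential(
                nn.Linear(num_solvers, 8),
                nn.ReLU(),
                nn.Linear(8, 1),
            )

        def forward(self, solver_scores: torch.Tensor) -> torch.Tensor:
            # solver_scores: (num_candidates, num_solvers)
            return self.mlp(solver_scores).squeeze(-1)  # (num_candidates,)

    # Example: 4 candidate answers, 2 solvers (retrieval and inference).
    scores = torch.tensor([[0.7, 0.2],
                           [0.3, 0.9],
                           [0.1, 0.1],
                           [0.5, 0.4]])
    combiner = AnswerCombiner(num_solvers=2)
    predicted = combiner(scores).argmax().item()  # index of the chosen answer
    print(predicted)

In practice the combiner would be trained on questions with known answers; the
sketch only shows the data flow from solver scores to a single prediction.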
Robust Computer Algebra, Theorem Proving, and Oracle AI
In the context of superintelligent AI systems, the term "oracle" has two
meanings. One refers to modular systems queried for domain-specific tasks.
Another usage, referring to a class of systems which may be useful for
addressing the value alignment and AI control problems, is a superintelligent
AI system that only answers questions. The aim of this manuscript is to survey
contemporary research problems related to oracles which align with long-term
research goals of AI safety. We examine existing question answering systems and
argue that their high degree of architectural heterogeneity makes them poor
candidates for rigorous analysis as oracles. On the other hand, we identify
computer algebra systems (CASs) as being primitive examples of domain-specific
oracles for mathematics, and argue that efforts to integrate CASs with theorem
provers, two kinds of systems that have largely been developed independently
of one another, provide a concrete set of problems related to the notion of
provable safety that has emerged in the AI safety community. We review
approaches to interfacing CASs with theorem provers, describe well-defined
architectural deficiencies that have been identified in CASs, and suggest
possible lines of research and practical software projects for scientists
interested in AI safety.
Comment: 15 pages, 3 figures
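To make the trust issue concrete, here is a toy sketch (not an interface from
the paper): a CAS simplification is spot-checked numerically before being
accepted, a weak stand-in for the machine-checked guarantees that
theorem-prover integration aims to provide. The helper spot_check is
hypothetical.

    # Toy illustration: numerically spot-check a CAS identity before trusting it.
    # A passed check is only evidence, not a proof; closing that gap is what
    # integrating CASs with theorem provers is about.
    import random
    import sympy as sp

    x = sp.symbols('x')
    claim_lhs = sp.sin(x)**2 + sp.cos(x)**2  # expression handed to the CAS
    claim_rhs = sp.simplify(claim_lhs)       # CAS output, expected to be 1

    def spot_check(lhs, rhs, var, trials=100):
        """Compare lhs and rhs at random sample points; any mismatch means the
        simplification must not be trusted."""
        for _ in range(trials):
            point = random.uniform(-10.0, 10.0)
            if abs(float(lhs.subs(var, point)) - float(rhs.subs(var, point))) > 1e-9:
                return False
        return True

    print(claim_rhs, spot_check(claim_lhs, claim_rhs, x))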
Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering
We propose an unsupervised strategy for the selection of justification
sentences for multi-hop question answering (QA) that (a) maximizes the
relevance of the selected sentences, (b) minimizes the overlap between the
selected facts, and (c) maximizes the coverage of both question and answer.
This unsupervised sentence selection method can be coupled with any supervised
QA approach. We show that the sentences selected by our method improve the
performance of a state-of-the-art supervised QA model on two multi-hop QA
datasets: AI2's Reasoning Challenge (ARC) and Multi-Sentence Reading
Comprehension (MultiRC). We obtain new state-of-the-art performance on both
datasets among approaches that do not use external resources for training the
QA system: 56.82% F1 on ARC (41.24% on Challenge and 64.49% on Easy) and 26.1%
EM0 on MultiRC. Our justification sentences have higher quality than the
justifications selected by a strong information retrieval baseline, e.g., by
5.4% F1 in MultiRC. We also show that our unsupervised selection of
justification sentences is more stable across domains than a state-of-the-art
supervised sentence selection method.
Comment: Published at EMNLP-IJCNLP 2019 as a long conference paper. Corrected
the name reference for Speer et al., 201
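For intuition, here is a minimal sketch of a greedy selector that trades off
the three criteria (a)-(c); the additive score and the bag-of-words tokenizer
are a simplification for illustration, not the paper's exact formulation.

    # Minimal sketch (a simplification, not the paper's exact method) of greedy
    # justification selection: prefer sentences relevant to question+answer,
    # penalize redundancy, reward newly covered query terms.
    import re

    def tokens(text):
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    def select_justifications(question, answer, sentences, k=3):
        query = tokens(question) | tokens(answer)
        selected, covered = [], set()
        for _ in range(min(k, len(sentences))):
            best, best_score = None, float("-inf")
            for sent in sentences:
                if sent in selected:
                    continue
                t = tokens(sent)
                relevance = len(t & query)             # (a) relevance to Q+A
                overlap = len(t & covered)             # (b) redundancy with picks so far
                coverage = len((t & query) - covered)  # (c) query terms newly covered
                score = relevance + coverage - overlap
                if score > best_score:
                    best, best_score = sent, score
            selected.append(best)
            covered |= tokens(best)
        return selected

    sents = [
        "Blood carries oxygen to the cells.",
        "The heart pumps blood through the body.",
        "The heart pumps blood.",           # redundant with the previous sentence
        "Paris is the capital of France.",  # irrelevant
    ]
    print(select_justifications("How does oxygen reach cells?", "through the blood", sents, k=2))

Because selected terms count against later candidates, the redundant third
sentence is skipped in favour of one that covers new query terms.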