How to Evaluate your Question Answering System Every Day and Still Get Real Work Done
In this paper, we report on Qaviar, an experimental automated evaluation
system for question answering applications. The goal of our research was to
find an automatically calculated measure that correlates well with human
judges' assessment of answer correctness in the context of question answering
tasks. Qaviar judges the response by computing recall against the stemmed
content words in the human-generated answer key. It counts the answer correct
if it exceeds a given recall threshold. We determined that the answer
correctness predicted by Qaviar agreed with the human judgments 93% to 95% of the time.
41 question-answering systems were ranked by both Qaviar and human assessors,
and these rankings correlated with a Kendall's Tau measure of 0.920, compared
to a correlation of 0.956 between human assessors on the same data.
Comment: 6 pages, 3 figures, to appear in Proceedings of the Second
International Conference on Language Resources and Evaluation (LREC 2000)
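The abstract describes Qaviar's judging rule concretely: compute recall of the response against the stemmed content words of the answer key, and count the answer correct when recall exceeds a threshold. A hypothetical minimal sketch of that rule follows; the stopword list, the crude suffix-stripping stemmer (a stand-in for a real stemmer such as Porter's), and the 0.5 threshold are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a Qaviar-style judge: recall of stemmed
# content words in the response against the answer key.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "to", "is", "was", "and"}

def stem(word):
    # Crude suffix stripping; a real system would use a proper stemmer.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def content_stems(text):
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    return {stem(w) for w in words if w and w not in STOPWORDS}

def judge(response, answer_key, threshold=0.5):
    # Answer counts as correct if recall against the key's stemmed
    # content words meets the (assumed) threshold.
    key = content_stems(answer_key)
    if not key:
        return False
    recall = len(key & content_stems(response)) / len(key)
    return recall >= threshold

print(judge("Armstrong walked on the Moon in 1969",
            "Neil Armstrong walked on the Moon"))  # True (recall 3/4)
```

With the key stems {neil, armstrong, walk, moon}, the response recovers three of four, so recall is 0.75 and the answer is judged correct under the assumed threshold.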
SEMONTOQA: A Semantic Understanding-Based Ontological Framework for Factoid Question Answering
This paper presents an outline of an Ontological and Semantic understanding-based
model (SEMONTOQA) for an open-domain factoid Question Answering (QA) system.
The outlined model analyses unstructured English natural language texts to a
vast extent and represents the inherent contents in an ontological manner. The
model locates and extracts useful information from the text for various
question types and builds a semantically rich knowledge-base that is capable
of answering different categories of factoid questions. The system model
converts the unstructured texts into a minimalistic, labelled, directed graph
that we call a Syntactic Sentence Graph (SSG). An Automatic Text Interpreter,
using a set of pre-learnt Text Interpretation Subgraphs and patterns, tries to
understand the contents of the SSG in a semantic way. The system proposes a
new feature- and action-based Cognitive Entity-Relationship Network designed
to extend the text understanding process to an in-depth level. Application of
supervised learning allows the system to gradually grow its capability to
understand the text in a more fruitful manner. The system incorporates an
effective Text Inference Engine which takes the responsibility of inferring
the text contents and isolating entities, their features, actions, objects,
associated contexts and other properties required for answering questions. A
similar understanding-based question processing module interprets the user's
need in a semantic way. An Ontological Mapping Module, with the help of a set
of pre-defined strategies designed for different classes of questions, is able
to perform a mapping between a question's ontology and the set of ontologies
stored in the background knowledge-base. Empirical verification is performed
to show the usability of the proposed model. The results achieved show that
this model can be used effectively as a semantic understanding-based
alternative QA system.
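The abstract's central data structure is the SSG: a minimalistic, labelled, directed graph built from a sentence. A hypothetical sketch of such a structure follows; the class name, edge labels, and hand-supplied edges are illustrative assumptions (the paper derives the graph from syntactic analysis, which is not reproduced here).

```python
# Hypothetical sketch of a minimalistic labelled, directed sentence
# graph in the spirit of the SSG. Edges are supplied by hand for
# illustration; the actual system builds them via syntactic analysis.
class SentenceGraph:
    def __init__(self):
        self.edges = {}  # node -> list of (label, target node)

    def add_edge(self, src, label, dst):
        self.edges.setdefault(src, []).append((label, dst))

    def neighbours(self, node, label):
        # Follow only edges carrying the requested label.
        return [d for (l, d) in self.edges.get(node, []) if l == label]

g = SentenceGraph()
# Encoding the sentence "Marie Curie discovered polonium":
g.add_edge("discovered", "subject", "Marie Curie")
g.add_edge("discovered", "object", "polonium")
print(g.neighbours("discovered", "object"))  # ['polonium']
```

A question-processing module could then answer "Who discovered polonium?" by locating the predicate node and following its labelled "subject" edge, which is the kind of mapping between question ontology and stored graph the abstract describes.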