386 research outputs found
Finding Structured and Unstructured Features to Improve the Search Result of Complex Question
Recently, search engines have faced the challenge of dealing with natural language questions. Sometimes these are complex questions: a complex question consists of several clauses or several intentions, or requires a long answer.
In this work we propose that finding the structured and unstructured features of questions, and using both structured and unstructured data, can improve the search results for complex questions. Accordingly, we use two approaches: an IR approach, and structured retrieval with QA templates.
Our framework consists of three parts: question analysis, resource discovery, and relevant-answer analysis. In question analysis we make a few assumptions and try to find the structured and unstructured features of the question; structured features refer to structured data, and unstructured features refer to unstructured data. In resource discovery we integrate structured data (a relational database) with unstructured data (web pages) to take advantage of both kinds of data and reach the relevant answer, finding the best top fragments from the context of the web pages. In the relevant-answer part, we compute a matching score between the results from the structured and unstructured data, and finally use a QA template to reformulate the question.
The experimental results show that using structured and unstructured features, using both structured and unstructured data, and combining the IR and QA-template approaches improve the search results for complex questions.
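The score-matching step between structured and unstructured results described above could be sketched as follows; the function name, the candidate scores, and the weights are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of score matching: combine ranked answer candidates
# from structured data (relational database) and unstructured data (web
# page fragments) into a single relevance score. Weights are illustrative.

def merge_scores(structured, unstructured, w_struct=0.6, w_unstruct=0.4):
    """structured / unstructured: dicts mapping candidate -> score in [0, 1]."""
    candidates = set(structured) | set(unstructured)
    merged = {
        c: w_struct * structured.get(c, 0.0) + w_unstruct * unstructured.get(c, 0.0)
        for c in candidates
    }
    # Return candidates ordered by combined score, best first.
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    db_hits = {"answer A": 0.9, "answer B": 0.4}
    web_hits = {"answer B": 0.8, "answer C": 0.5}
    print(merge_scores(db_hits, web_hits))
```

A candidate supported by both sources can outrank one that scores highly in only one of them, which is the point of combining the two kinds of data.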
Hi, how can I help you?: Automating enterprise IT support help desks
Question answering is one of the primary challenges of natural language
understanding. In realizing such a system, providing complex long answers to
questions is a challenging task as opposed to factoid answering as the former
needs context disambiguation. The different methods explored in the literature
can be broadly classified into three categories namely: 1) classification
based, 2) knowledge graph based and 3) retrieval based. Individually, none of
them address the need of an enterprise wide assistance system for an IT support
and maintenance domain. In this domain the variance of answers is large ranging
from factoid to structured operating procedures; the knowledge is present
across heterogeneous data sources like application specific documentation,
ticket management systems and any single technique for a general purpose
assistance is unable to scale for such a landscape. To address this, we have
built a cognitive platform with capabilities adopted for this domain. Further,
we have built a general purpose question answering system leveraging the
platform that can be instantiated for multiple products, technologies in the
support domain. The system uses a novel hybrid answering model that
orchestrates across a deep learning classifier, a knowledge graph based context
disambiguation module and a sophisticated bag-of-words search system. This
orchestration performs context switching for a provided question and also does
a smooth hand-off of the question to a human expert if none of the automated
techniques can provide a confident answer. This system has been deployed across
675 internal enterprise IT support and maintenance projects. Comment: To appear in IAAI 201
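The orchestration and human hand-off behaviour described in this abstract could be sketched as follows; the component callables, the return shape, and the confidence threshold are assumptions for illustration, not the deployed system's design.

```python
# Hypothetical sketch of the hybrid answering orchestration: try each
# automated technique (e.g. classifier, knowledge graph, bag-of-words
# search) in turn, and hand off to a human expert when none is confident.

CONFIDENCE_THRESHOLD = 0.7  # illustrative cut-off

def orchestrate(question, answerers):
    """answerers: list of callables returning (answer, confidence)."""
    best = ("", 0.0)
    for answer_fn in answerers:
        answer, confidence = answer_fn(question)
        if confidence >= CONFIDENCE_THRESHOLD:
            return answer, confidence, False  # confident automated answer
        if confidence > best[1]:
            best = (answer, confidence)
    # No automated technique was confident: route to a human expert,
    # passing along the best low-confidence guess for context.
    return best[0], best[1], True  # True = needs human hand-off
```

Trying the techniques in order and keeping the best low-confidence guess gives the human expert a starting point rather than an empty ticket.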
Follow-up question handling in the IMIX and Ritel systems: A comparative study
One of the basic topics of question answering (QA) dialogue systems is how follow-up questions should be interpreted by a QA system. In this paper, we shall discuss our experience with the IMIX and Ritel systems, for both of which a follow-up question handling scheme has been developed, and corpora have been collected. These two systems are each other's opposites in many respects: IMIX is multimodal, non-factoid, black-box QA, while Ritel is speech, factoid, keyword-based QA. Nevertheless, we will show that they are quite comparable, and that it is fruitful to examine the similarities and differences. We shall look at how the systems are composed, and how real, non-expert, users interact with the systems. We shall also provide comparisons with systems from the literature where possible, and indicate where open issues lie and in what areas existing systems may be improved. We conclude that most systems have a common architecture with a set of common subtasks, in particular detecting follow-up questions and finding referents for them. We characterise these tasks using the typical techniques used for performing them, and data from our corpora. We also identify a special type of follow-up question, the discourse question, which is asked when the user is trying to understand an answer, and propose some basic methods for handling it.
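The two common subtasks this abstract identifies, detecting follow-up questions and finding referents for them, could be sketched with simple heuristics as below; the pronoun cue list, length cut-off, and history representation are illustrative assumptions, not the IMIX or Ritel implementations.

```python
# Hypothetical sketch of the two common follow-up-handling subtasks:
# (1) detect whether a question is a follow-up, (2) find a referent for
# it in the dialogue history. Cue lists and thresholds are illustrative.

PRONOUN_CUES = {"it", "that", "this", "they", "them", "those", "he", "she"}

def is_follow_up(question):
    """Heuristic: very short questions or ones with anaphoric pronouns."""
    words = question.lower().rstrip("?").split()
    return len(words) <= 3 or any(w in PRONOUN_CUES for w in words)

def resolve_referent(question, history):
    """Pick the most recent history topic mentioned in the question,
    falling back to the most recent topic overall."""
    if not history:
        return None
    for topic in reversed(history):
        if topic.lower() in question.lower():
            return topic
    return history[-1]
```

Real systems of the kind compared in the paper use richer signals (speech, modality, answer type), but the detect-then-resolve decomposition is the shared structure.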
Dublin City University at QA@CLEF 2008
We describe our participation in Multilingual Question Answering at CLEF 2008, using German and English as our source and target languages respectively. The system was built using UIMA (Unstructured Information Management Architecture) as the underlying framework.