Understanding and exploiting user intent in community question answering
A number of Community Question Answering (CQA) services have emerged
and proliferated in the last decade. Typical examples include Yahoo! Answers,
WikiAnswers, and also domain-specific forums like StackOverflow. These services
help users obtain information from a community - a user can post his or her questions, which may then be answered by other users. Such a paradigm of information seeking is particularly appealing when the question cannot be answered directly by Web search engines due to the unavailability of relevant online content. However, questions submitted to a CQA service are often colloquial and ambiguous. An accurate understanding of the intent behind a question is important for satisfying the user's information need more effectively and efficiently.
In this thesis, we analyse the intent of each question in CQA by classifying
it into five dimensions, namely: subjectivity, locality, navigationality, procedurality,
and causality. By making use of advanced machine learning techniques, such
as Co-Training and PU-Learning, we are able to attain consistent and significant
classification improvements over the state-of-the-art in this area. In addition to
the textual features, a variety of metadata features (such as the category to which
the question was posted) are used to model a user's intent, which in turn helps
the CQA service perform better at finding similar questions, identifying relevant
answers, and recommending the most relevant answerers.
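The Co-Training technique mentioned above can be illustrated with a minimal sketch: two classifiers, one per feature view (e.g. question text vs. metadata), where confidently predicted unlabeled examples are promoted into the training set. The toy one-dimensional data and the single-teacher simplification below are hypothetical, not the thesis's actual setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xa, Xb, y, Xa_u, Xb_u, rounds=2, per_round=2):
    """Simplified co-training: view A labels its most confident
    unlabeled examples, which then train both views' classifiers."""
    Xa, Xb, y = list(Xa), list(Xb), list(y)
    unlabeled = list(range(len(Xa_u)))
    clf_a, clf_b = LogisticRegression(), LogisticRegression()
    for _ in range(rounds):
        if not unlabeled:
            break
        clf_a.fit(Xa, y)
        # Confidence of view A's classifier on the remaining unlabeled pool.
        conf = clf_a.predict_proba([Xa_u[i] for i in unlabeled]).max(axis=1)
        picks = [unlabeled[j] for j in np.argsort(conf)[-per_round:]]
        for i in picks:
            Xa.append(Xa_u[i])
            Xb.append(Xb_u[i])
            y.append(int(clf_a.predict([Xa_u[i]])[0]))
            unlabeled.remove(i)
    clf_a.fit(Xa, y)
    clf_b.fit(Xb, y)
    return clf_a, clf_b

# Hypothetical separable data; view A and view B are correlated features.
Xa = [[0.0], [1.0], [9.0], [10.0]]
Xb = [[0.5], [1.5], [8.5], [9.5]]
y = [0, 0, 1, 1]
Xa_u = [[0.8], [1.2], [8.8], [9.2]]
Xb_u = [[1.0], [1.4], [8.6], [9.0]]
clf_a, clf_b = co_train(Xa, Xb, y, Xa_u, Xb_u)
```

Full co-training alternates which view teaches; this sketch keeps only the core loop of promoting confident pseudo-labels across views.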
We validate the usefulness of user intent in two different CQA tasks. Our
first application is question retrieval, where we present a hybrid approach which
blends several language modelling techniques, namely, the classic (query-likelihood)
language model, the state-of-the-art translation-based language model, and our
proposed intent-based language model. Our second application is answer validation, where we present a two-stage model which first ranks similar questions by using
our proposed hybrid approach, and then validates whether the answer of the top
candidate can serve as an answer to a new question by leveraging sentiment
analysis, query quality assessment, and search list validation.
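The hybrid retrieval score described above is, at its core, a mixture of language models. A minimal sketch of such linear interpolation follows; the weights and the log-probabilities are hypothetical placeholders, not the thesis's actual components or parameters:

```python
import math

def interpolate_scores(log_probs, weights):
    """Blend per-model log-probabilities of a candidate question via
    linear interpolation of the probabilities (a standard mixture)."""
    assert len(log_probs) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    mixture = sum(w * math.exp(s) for w, s in zip(weights, log_probs))
    return math.log(mixture)

# Hypothetical log-probabilities from the three components:
# query-likelihood, translation-based, and intent-based models.
log_ql, log_tr, log_intent = -2.3, -1.9, -2.7
score = interpolate_scores([log_ql, log_tr, log_intent], [0.4, 0.4, 0.2])
```

Because the mixture is a convex combination, the blended score always lies between the best and worst component scores.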
User Intent Prediction in Information-seeking Conversations
Conversational assistants are being progressively adopted by the general
population. However, they are not capable of handling complicated
information-seeking tasks that involve multiple turns of information exchange.
Due to the limited communication bandwidth in conversational search, it is
important for conversational assistants to accurately detect and predict user
intent in information-seeking conversations. In this paper, we investigate two
aspects of user intent prediction in an information-seeking setting. First, we
extract features based on the content, structural, and sentiment
characteristics of a given utterance, and use classic machine learning methods
to perform user intent prediction. We then conduct an in-depth feature
importance analysis to identify key features in this prediction task. We find
that structural features contribute most to the prediction performance. Given
this finding, we construct neural classifiers to incorporate context
information and achieve better performance without feature engineering. Our
findings can provide insights into the important factors and effective methods
of user intent prediction in information-seeking conversations.
Comment: Accepted to CHIIR 201
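The feature-based first stage described in this abstract might look roughly like the sketch below. The content/structural/sentiment features, toy utterances, and labels are illustrative stand-ins, not the paper's actual feature set or taxonomy:

```python
from sklearn.linear_model import LogisticRegression

def featurize(utterance, turn_position, positive_words=("thanks", "great")):
    """Toy content, structural, and sentiment features for one utterance:
    token count, turn position, question-mark ending, positive-word count."""
    tokens = utterance.lower().split()
    return [
        len(tokens),
        turn_position,
        1 if utterance.rstrip().endswith("?") else 0,
        sum(tok.strip("!.,?") in positive_words for tok in tokens),
    ]

# Hypothetical training utterances with coarse intent labels.
train = [
    ("how do I reset my password?", 0, "question"),
    ("what does this error mean?", 0, "question"),
    ("thanks, that worked great!", 2, "feedback"),
    ("great, thank you so much!", 4, "feedback"),
]
X = [featurize(u, p) for u, p, _ in train]
y = [label for _, _, label in train]

clf = LogisticRegression().fit(X, y)
pred = clf.predict([featurize("thanks a lot, great answer!", 3)])[0]
```

A classifier like this is also what makes the paper's feature-importance analysis possible: each coefficient can be inspected per feature.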
What Users Ask a Search Engine: Analyzing One Billion Russian Question Queries
We analyze the question queries submitted to a large commercial web search engine to get insights about what people ask, and to better tailor the search results to the users’ needs. Based on a dataset of about one billion question queries submitted during the year 2012, we investigate askers’ querying behavior with the support of automatic query categorization. While the importance of question queries is likely to increase, at present they only make up 3–4% of the total search traffic. Since questions are such a small part of the query stream and are more likely to be unique than shorter queries, clickthrough information is typically rather sparse. Thus, query categorization methods based on the categories of clicked web documents do not work well for questions. As an alternative, we propose a robust question query classification method that uses the labeled questions from a large community question answering platform (CQA) as a training set. The resulting classifier is then transferred to the web search questions. Even though questions on CQA platforms tend to be different to web search questions, our categorization method proves competitive with strong baselines with respect to classification accuracy. To show the scalability of our proposed method we apply the classifiers to about one billion question queries and discuss the trade-offs between performance and accuracy that different classification models offer. Our findings reveal what people ask a search engine and also how this contrasts behavior on a CQA platform.
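The transfer setup described above, training on labeled CQA questions and then applying the classifier to search question queries, can be sketched as follows. The toy data and the TF-IDF/Naive-Bayes pipeline are illustrative, not the paper's actual model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical CQA training set: (question, category label).
cqa = [
    ("how do I bake sourdough bread", "food"),
    ("best recipe for chicken soup", "food"),
    ("why does my laptop overheat", "tech"),
    ("how to install a graphics driver", "tech"),
]
texts, labels = zip(*cqa)

# Train on labeled CQA questions, then transfer the classifier
# to a (hypothetical) web search question query.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)
category = clf.predict(["how do I fix my laptop screen"])[0]
```

A lightweight linear model like this is also what makes classifying a billion queries tractable, which is the performance/accuracy trade-off the paper discusses.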
Hi, how can I help you?: Automating enterprise IT support help desks
Question answering is one of the primary challenges of natural language
understanding. In realizing such a system, providing complex, long answers to
questions is more challenging than factoid answering, since the former
needs context disambiguation. The different methods explored in the literature
can be broadly classified into three categories namely: 1) classification
based, 2) knowledge graph based and 3) retrieval based. Individually, none of
them addresses the need for an enterprise-wide assistance system in an IT support
and maintenance domain. In this domain, answers vary widely, ranging from
factoids to structured operating procedures; the knowledge is spread across
heterogeneous data sources such as application-specific documentation and
ticket management systems; and no single general-purpose technique
scales to such a landscape. To address this, we have
built a cognitive platform with capabilities adapted to this domain. Further,
we have built a general purpose question answering system leveraging the
platform that can be instantiated for multiple products and technologies in the
support domain. The system uses a novel hybrid answering model that
orchestrates across a deep learning classifier, a knowledge graph based context
disambiguation module and a sophisticated bag-of-words search system. This
orchestration performs context switching for a provided question and also does
a smooth hand-off of the question to a human expert if none of the automated
techniques can provide a confident answer. This system has been deployed across
675 internal enterprise IT support and maintenance projects.
Comment: To appear in IAAI 201
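The orchestration with a human hand-off described above can be sketched as a confidence-thresholded cascade over answering components. The component stubs, scores, and threshold below are hypothetical, standing in for the deep learning classifier, knowledge-graph module, and bag-of-words search system:

```python
def orchestrate(question, answerers, threshold=0.6):
    """Try each automated answering component in turn; each returns
    (answer, confidence). If none is confident enough, hand off to
    a human expert. Threshold and components are illustrative."""
    for answer_fn in answerers:
        answer, confidence = answer_fn(question)
        if confidence >= threshold:
            return answer, "automated"
    return "escalated to human expert", "human"

# Stub components standing in for the classifier, knowledge-graph,
# and bag-of-words search modules described in the abstract.
classifier = lambda q: ("reset via settings page", 0.9 if "password" in q else 0.2)
kg_lookup = lambda q: ("see KB article", 0.3)
bow_search = lambda q: ("top search hit", 0.4)

auto = orchestrate("how do I reset my password?", [classifier, kg_lookup, bow_search])
human = orchestrate("strange intermittent crash", [classifier, kg_lookup, bow_search])
```

The explicit hand-off branch is what keeps low-confidence questions from receiving a wrong automated answer.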
BERT with History Answer Embedding for Conversational Question Answering
Conversational search is an emerging topic in the information retrieval
community. One of the major challenges to multi-turn conversational search is
to model the conversation history to answer the current question. Existing
methods either prepend history turns to the current question or use complicated
attention mechanisms to model the history. We propose a conceptually simple yet
highly effective approach referred to as history answer embedding. It enables
seamless integration of conversation history into a conversational question
answering (ConvQA) model built on BERT (Bidirectional Encoder Representations
from Transformers). We first explain our view that ConvQA is a simplified but
concrete setting of conversational search, and then we provide a general
framework to solve ConvQA. We further demonstrate the effectiveness of our
approach under this framework. Finally, we analyze the impact of different
numbers of history turns under different settings to provide new insights into
conversation history modeling in ConvQA.
Comment: Accepted to SIGIR 2019 as a short paper
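The history answer embedding idea, an extra learned embedding added to each input token marking whether it occurred in a previous turn's answer, can be sketched outside of BERT with plain arrays. The vocabulary, dimensions, and random values below are illustrative only:

```python
import numpy as np

# Each token gets an extra embedding indicating history-answer
# membership, added to the usual token (+ position/segment)
# embeddings before the transformer layers. Shapes are illustrative.
hidden = 8
vocab = {"[CLS]": 0, "what": 1, "about": 2, "paris": 3}
rng = np.random.default_rng(0)
token_emb = rng.normal(size=(len(vocab), hidden))
hae_emb = rng.normal(size=(2, hidden))  # row 0: not in a history answer, row 1: in one

tokens = ["[CLS]", "what", "about", "paris"]
in_history_answer = [0, 0, 0, 1]  # "paris" appeared in a prior answer

inputs = np.stack([token_emb[vocab[t]] for t in tokens])
inputs = inputs + hae_emb[np.array(in_history_answer)]
```

Because the history signal is just an additive embedding per token, it integrates into BERT's input layer without any change to the attention mechanism, which is what makes the approach "conceptually simple".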
Survey on Evaluation Methods for Dialogue Systems
In this paper we survey the methods and concepts developed for the evaluation
of dialogue systems. Evaluation is a crucial part during the development
process. Often, dialogue systems are evaluated by means of human evaluations
and questionnaires. However, this tends to be very cost- and time-intensive.
Thus, much work has been put into finding methods that reduce the
involvement of human labour. In this survey, we present the main concepts and
methods. For this, we differentiate between the various classes of dialogue
systems (task-oriented dialogue systems, conversational dialogue systems, and
question-answering dialogue systems). We cover each class by introducing the
main technologies developed for the dialogue systems and then by presenting the
evaluation methods for this class.
Towards Query Logs for Privacy Studies: On Deriving Search Queries from Questions
Translating verbose information needs into crisp search queries is a
phenomenon that is ubiquitous but hardly understood. Insights into this process
could be valuable in several applications, including synthesizing large
privacy-friendly query logs from public Web sources which are readily available
to the academic research community. In this work, we take a step towards
understanding query formulation by tapping into the rich potential of community
question answering (CQA) forums. Specifically, we sample natural language (NL)
questions spanning diverse themes from the Stack Exchange platform, and conduct
a large-scale conversion experiment where crowdworkers submit search queries
they would use when looking for equivalent information. We provide a careful
analysis of this data, accounting for possible sources of bias during
conversion, along with insights into user-specific linguistic patterns and
search behaviors. We release a dataset of 7,000 question-query pairs from this
study to facilitate further research on query understanding.
Comment: ECIR 2020 short paper