Response Retrieval in Information-seeking Conversations
The increasing popularity of the mobile Internet has led to several crucial changes in how people use search engines compared with traditional desktop Web search. On one hand, output bandwidth is limited by the small screens of most mobile devices, so mobile Internet users prefer direct answers on the search engine result page (SERP). On the other hand, voice-based and text-based conversational interfaces are becoming increasingly popular, as shown by the wide adoption of intelligent assistant services and devices such as Amazon Echo, Microsoft Cortana, and Google Assistant around the world. These changes have raised new challenges that search engines must adapt to in order to better satisfy the information needs of mobile Internet users. In this dissertation, we investigate several aspects of single-turn answer retrieval and multi-turn information-seeking conversations to address these challenges.
We start with research on single-turn answer retrieval and analyze the weaknesses of existing deep learning architectures for answer ranking. We then propose an attention-based neural matching model with a value-shared weighting scheme and an attention mechanism to improve on existing deep neural answer ranking models. Our proposed model achieves state-of-the-art performance for answer sentence retrieval compared with both feature-engineering-based methods and other neural models.
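The value-shared weighting idea can be illustrated with a toy sketch: instead of sharing weights by position (as a convolutional filter does), each question-answer term similarity is routed to a value bin, and all similarities in the same bin share one learned weight; a softmax attention over question terms then combines the per-term scores. The function names, bin edges, and numbers below are invented for illustration and are not the dissertation's actual model.

```python
import numpy as np

def value_shared_score(sim_matrix, bin_weights, bins):
    """Score a (question, answer) pair from its term-similarity matrix.

    Each similarity value is assigned to a bin; all values in the
    same bin share one weight (value-shared, not position-shared).
    """
    scores = []
    for row in sim_matrix:                      # one row per question term
        counts = np.histogram(row, bins=bins)[0]
        scores.append(float(counts @ bin_weights))
    return np.array(scores)                     # one score per question term

def attention_combine(q_term_scores, q_term_gates):
    """Softmax attention over question terms: important terms count more."""
    att = np.exp(q_term_gates) / np.exp(q_term_gates).sum()
    return float(att @ q_term_scores)

# toy example: 2 question terms x 3 answer terms, similarities in [-1, 1]
sim = np.array([[0.9, 0.1, -0.2],
                [0.3, 0.8, 0.0]])
bins = np.array([-1.0, 0.0, 0.5, 1.01])   # 3 similarity bins
w = np.array([0.0, 0.2, 1.0])             # near-exact matches matter most
per_term = value_shared_score(sim, w, bins)
score = attention_combine(per_term, np.array([0.5, 0.5]))
```

Binning by similarity value rather than position makes the score invariant to where in the answer a strong match occurs, which is the intuition behind the scheme.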
We then move on to study response retrieval in multi-turn information-seeking conversations, beyond single-turn interactions. Much research on response selection in conversation systems models the matching patterns between the user's input message (with or without context) and response candidates, ignoring external knowledge beyond the dialog utterances. We propose a learning framework on top of deep neural matching networks that leverages external knowledge through pseudo-relevance feedback and question-answer correspondence knowledge distillation for response retrieval. We also study how to integrate user intent modeling into neural ranking models to improve response retrieval performance. Finally, we investigate hybrid models of response retrieval and generation, in order to combine the merits of these two different paradigms of conversation models.
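The pseudo-relevance feedback idea mentioned above can be sketched in its simplest classical form: assume the top-ranked responses from a first retrieval pass are relevant, and expand the query with their most frequent terms before re-ranking. This is a generic illustration of the technique, not the framework's actual model; the example data and stopword list are invented.

```python
from collections import Counter

STOPWORDS = {"the", "and", "from", "a"}  # tiny illustrative stopword list

def prf_expand(query_terms, ranked_responses, top_k=2, n_terms=2):
    """Expand a query with frequent terms from the top-k initially
    retrieved responses (pseudo-relevance feedback)."""
    counts = Counter()
    for resp in ranked_responses[:top_k]:
        counts.update(t for t in resp.split()
                      if t not in query_terms and t not in STOPWORDS)
    expansion = [t for t, _ in counts.most_common(n_terms)]
    return list(query_terms) + expansion

# toy first-pass ranking for the query "router not working"
ranked = [
    "restart the router and check the firmware version",
    "update the router firmware from the admin page",
    "completely unrelated chit chat",
]
expanded = prf_expand(["router", "not", "working"], ranked)
```

Terms shared by several top responses (here "firmware") get promoted into the query, pulling in relevant responses that have no word overlap with the original message.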
Our goal is to develop effective learning models for answer retrieval and information-seeking conversations, in order to improve effectiveness and user experience when accessing information through a touch-screen or conversational interface, as commonly adopted by millions of mobile Internet devices.
ParaQG: A System for Generating Questions and Answers from Paragraphs
Generating syntactically and semantically valid and relevant questions from
paragraphs is useful for many applications. Manual generation is a
labour-intensive task, as it requires the reading, parsing and understanding of
long passages of text. A number of question generation models based on
sequence-to-sequence techniques have recently been proposed. Most of them
generate questions from sentences only, and none of them is publicly available
as an easy-to-use service. In this paper, we demonstrate ParaQG, a Web-based
system for generating questions from sentences and paragraphs. ParaQG
incorporates a number of novel functionalities to make the question generation
process user-friendly. It provides an interactive interface for users to
select answers, with visual insights into how questions are generated. It also
employs various faceted views to group similar questions, as well as filtering
techniques to eliminate unanswerable questions.
Comment: EMNLP 201
Attentive History Selection for Conversational Question Answering
Conversational question answering (ConvQA) is a simplified but concrete
setting of conversational search. One of its major challenges is to leverage
the conversation history to understand and answer the current question. In this
work, we propose a novel solution for ConvQA that involves three aspects.
First, we propose a positional history answer embedding method to encode
conversation history with position information using BERT in a natural way.
BERT is a powerful technique for text representation. Second, we design a
history attention mechanism (HAM) to conduct a "soft selection" for
conversation histories. This method attends to history turns with different
weights based on how helpful they are in answering the current question. Third,
in addition to handling conversation history, we take advantage of multi-task
learning (MTL) to do answer prediction along with another essential
conversation task (dialog act prediction) using a uniform model architecture.
MTL is able to learn more expressive and generic representations to improve the
performance of ConvQA. We demonstrate the effectiveness of our model with
extensive experimental evaluations on QuAC, a large-scale ConvQA dataset. We
show that position information plays an important role in conversation history
modeling. We also visualize the history attention and provide new insights into
conversation history understanding.
Comment: Accepted to CIKM 201
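The "soft selection" performed by a history attention mechanism can be sketched as follows: rather than keeping or dropping whole history turns, every turn receives a softmax weight from its match with the current question, and the history representation is their weighted sum. This is a minimal dot-product-attention illustration, not the paper's BERT-based HAM; the vectors and the scaling parameter are invented.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_select_history(turn_reprs, current_repr, w):
    """Attend over history turns: each turn gets a weight based on how
    well it matches the current question, and no turn is hard-dropped."""
    scores = np.array([w * float(t @ current_repr) for t in turn_reprs])
    att = softmax(scores)                # weights sum to 1
    fused = att @ np.stack(turn_reprs)   # weighted sum of turn vectors
    return att, fused

turns = [np.array([1.0, 0.0]),   # history turn about topic A
         np.array([0.0, 1.0])]   # history turn about topic B
current = np.array([0.0, 1.0])   # current question is about topic B
att, fused = soft_select_history(turns, current, w=2.0)
```

The turn on the same topic as the current question dominates the fused representation, yet the off-topic turn still contributes a small amount — exactly the "soft" part of soft selection.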
QADiver: Interactive Framework for Diagnosing QA Models
Question answering (QA), which extracts answers from text for a given question
in natural language, has been actively studied, and existing models have shown
promise of outperforming human performance when trained and evaluated on the
SQuAD dataset. However, such performance may not be replicated in a real-world
setting, in which case we need to diagnose the cause, a non-trivial task due to
the complexity of the models. We thus propose a web-based UI that shows how each
model contributes to QA performance, by integrating visualization and analysis
tools for model explanation. We expect this framework can help QA model
researchers refine and improve their models.
Comment: AAAI 2019 Demonstration
Topic-Aware Response Generation in Task-Oriented Dialogue with Unstructured Knowledge Access
To alleviate the limited coverage of structured databases, recent
task-oriented dialogue systems incorporate external unstructured knowledge to
guide the generation of system responses. However, these systems usually use
word- or sentence-level similarities to detect the relevant knowledge context,
which only partially capture topic-level relevance. In this paper, we examine
how to better integrate topical information in knowledge grounded task-oriented
dialogue and propose ``Topic-Aware Response Generation'' (TARG), an end-to-end
response generation model. TARG incorporates multiple topic-aware attention
mechanisms to derive the importance weighting scheme over dialogue utterances
and external knowledge sources towards a better understanding of the dialogue
history. Experimental results indicate that TARG achieves state-of-the-art
performance in knowledge selection and response generation, outperforming
previous state-of-the-art by 3.2, 3.6, and 4.2 points in EM, F1 and BLEU-4
respectively on Doc2Dial, and performing comparably with previous work on
DSTC9; both are knowledge-grounded task-oriented dialogue datasets.
Comment: Findings of EMNLP 202
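The topic-level relevance idea can be sketched as follows: score each candidate knowledge passage by the similarity of its topic vector to the topic of the dialogue history, rather than by word overlap with the last utterance alone. This is a generic cosine-plus-softmax illustration under the assumption that topic vectors come from some topic model; it is not TARG's actual multi-attention architecture, and all vectors below are invented.

```python
import numpy as np

def topic_aware_weights(topic_vecs, dialogue_topic):
    """Weight candidate knowledge passages by cosine similarity between
    their topic vectors and the dialogue-history topic vector."""
    sims = np.array([
        float(v @ dialogue_topic)
        / (np.linalg.norm(v) * np.linalg.norm(dialogue_topic))
        for v in topic_vecs
    ])
    e = np.exp(sims)
    return e / e.sum()   # softmax over passages; weights sum to 1

# toy topic vectors (e.g. produced by a topic model over the corpus)
passages = [np.array([0.9, 0.1]),   # passage mostly about billing
            np.array([0.1, 0.9])]   # passage mostly about shipping
history = np.array([0.2, 0.8])      # dialogue so far is about shipping
weights = topic_aware_weights(passages, history)
```

A passage on the dialogue's topic outweighs an off-topic one even when both share surface words with the last user utterance, which is what word- or sentence-level similarity alone misses.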