Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion
Fact-centric information needs are rarely one-shot; users typically ask follow-up questions to explore a topic. In such a conversational setting, the user's inputs are often incomplete, with entities or predicates left out, and phrased ungrammatically. This poses a huge challenge to question answering (QA) systems that typically rely on cues in full-fledged interrogative sentences. As a solution, we develop CONVEX: an unsupervised method that can answer incomplete questions over a knowledge graph (KG) by maintaining conversation context using entities and predicates seen so far and automatically inferring missing or ambiguous pieces for follow-up questions. The core of our method is a graph exploration algorithm that judiciously expands a frontier to find candidate answers for the current question. To evaluate CONVEX, we release ConvQuestions, a crowdsourced benchmark with 11,200 distinct conversations from five different domains. We show that CONVEX: (i) adds conversational support to any stand-alone QA system, and (ii) outperforms state-of-the-art baselines and question completion strategies.
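The frontier-expansion idea behind CONVEX can be illustrated with a toy sketch (this is not the authors' code; the KG triples, scoring rule, and function names are hypothetical): keep the entities seen so far as conversational context, expand a one-hop frontier of their KG neighbors, and score candidates by lexical overlap between the connecting predicate and the (possibly incomplete) follow-up question.

```python
# Toy illustration of judicious context expansion over a KG.
# All triples and the scoring heuristic are hypothetical.

KG = [  # (subject, predicate, object) triples
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "release_year", "2010"),
    ("Christopher Nolan", "born_in", "London"),
]

def neighbors(entity):
    """All (predicate, other-node) pairs adjacent to entity in the KG."""
    out = []
    for s, p, o in KG:
        if s == entity:
            out.append((p, o))
        elif o == entity:
            out.append((p, s))
    return out

def answer(question_tokens, context_entities):
    """Expand a one-hop frontier from the context entities and return the
    candidate whose connecting predicate best matches the question."""
    best, best_score = None, -1
    for ent in context_entities:
        for pred, cand in neighbors(ent):
            # crude lexical match between predicate words and question tokens
            score = len(set(pred.split("_")) & set(question_tokens))
            if score > best_score:
                best, best_score = cand, score
    return best

# Incomplete follow-up "when was it released?" with context entity "Inception":
print(answer({"when", "was", "it", "released", "release"}, {"Inception"}))
```

The follow-up never names "Inception", yet the context entities carried over from the previous turn let the frontier reach the right answer, "2010".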
A Controllable Model of Grounded Response Generation
Current end-to-end neural conversation models inherently lack the flexibility
to impose semantic control in the response generation process, often resulting
in uninteresting responses. Attempts to boost informativeness alone come at the
expense of factual accuracy, as attested by pretrained language models'
propensity to "hallucinate" facts. While this may be mitigated by access to
background knowledge, there is scant guarantee of relevance and informativeness
in generated responses. We propose a framework that we call controllable
grounded response generation (CGRG), in which lexical control phrases are
either provided by a user or automatically extracted by a control phrase
predictor from dialogue context and grounding knowledge. Quantitative and
qualitative results show that, using this framework, a transformer based model
with a novel inductive attention mechanism, trained on a conversation-like
Reddit dataset, outperforms strong generation baselines. Comment: AAAI 202
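The core intuition of attention constrained by control phrases can be sketched as follows (a simplified, hypothetical illustration, not the CGRG model's actual inductive attention): response tokens may attend to the dialogue context and to grounding segments that mention a control phrase, while unrelated grounding is masked out.

```python
# Hedged sketch: build a visibility mask over input segments, so that
# generation is steered toward grounding linked to the control phrases.
# Segment contents and the containment test are illustrative only.

def build_attention_mask(segments, control_phrases):
    """segments: list of (kind, text) with kind in {"context", "grounding"}.
    Returns a list of booleans: True = attend, False = masked out."""
    mask = []
    for kind, text in segments:
        if kind == "context":
            mask.append(True)  # dialogue context is always visible
        else:
            # grounding is visible only if it contains a control phrase
            mask.append(any(p in text for p in control_phrases))
    return mask

segments = [
    ("context", "who directed the film?"),
    ("grounding", "The film grossed $100M."),
    ("grounding", "It was directed by Nolan in London."),
]
print(build_attention_mask(segments, ["directed by Nolan"]))
```

Here only the grounding sentence containing the control phrase stays visible, which is the lever that trades uncontrolled "informativeness" for relevance.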
He Said, She Said: Style Transfer for Shifting the Perspective of Dialogues
In this work, we define a new style transfer task: perspective shift, which
reframes a dialogue from informal first person to a formal third person
rephrasing of the text. This task requires challenging coreference resolution,
emotion attribution, and interpretation of informal text. We explore several
baseline approaches and discuss further directions on this task when applied to
short dialogues. As a sample application, we demonstrate that applying
perspective shifting to a dialogue summarization dataset (SAMSum) substantially
improves the zero-shot performance of extractive news summarization models on
this data. Additionally, supervised extractive models perform better when
trained on perspective shifted data than on the original dialogues. We release
our code publicly. Comment: Findings of EMNLP 2022, 18 pages
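A minimal rule-based sketch conveys what perspective shift asks of a model (this toy only swaps first-person pronouns for the speaker's name; the real task additionally requires coreference resolution, emotion attribution, and formality rewriting):

```python
# Toy illustration of perspective shift: informal first-person dialogue
# rewritten as a third-person report. Function name and rules are hypothetical.

def shift_perspective(speaker, utterance):
    replacements = {"i": speaker, "my": speaker + "'s"}
    words = [replacements.get(w.lower(), w) for w in utterance.split()]
    return f'{speaker} said that {" ".join(words)}'

print(shift_perspective("Amanda", "i lost my keys"))
# -> "Amanda said that Amanda lost Amanda's keys"
```

Even this naive rewrite hints at why shifted dialogues help news-trained summarizers: the output reads like reported prose rather than a turn-by-turn chat log.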
CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning
Compared to standard retrieval tasks, passage retrieval for conversational
question answering (CQA) poses new challenges in understanding the current user
question, as each question needs to be interpreted within the dialogue context.
Moreover, it can be expensive to re-train well-established retrievers such as
search engines that are originally developed for non-conversational queries. To
facilitate their use, we develop a query rewriting model CONQRR that rewrites a
conversational question in the context into a standalone question. It is
trained with a novel reward function to directly optimize towards retrieval
using reinforcement learning and can be adapted to any off-the-shelf retriever.
We show that CONQRR achieves state-of-the-art results on a recent open-domain
CQA dataset containing conversations from three different sources, and is
effective for two different off-the-shelf retrievers. Our extensive analysis
also shows the robustness of CONQRR to out-of-domain dialogues as well as to
zero query rewriting supervision.
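The retrieval-oriented reward can be sketched in a few lines (an assumption-laden illustration, not CONQRR's exact objective: here the reward is simply recall of gold passages in the retriever's top-k, and `dummy_retrieve` is a hypothetical stand-in for any off-the-shelf retriever):

```python
# Hedged sketch of a retrieval-based reward for a query rewriter.

def retrieval_reward(rewrite, gold_ids, retrieve, k=5):
    """Fraction of gold passage ids found in the retriever's top-k results
    for the rewritten, standalone query."""
    top_k = retrieve(rewrite, k)
    return len(set(top_k) & set(gold_ids)) / len(gold_ids)

# Dummy retriever returning fixed passage ids, for demonstration only:
def dummy_retrieve(query, k):
    return ["p1", "p7", "p9", "p2", "p4"][:k]

print(retrieval_reward("who directed inception", {"p1", "p3"}, dummy_retrieve))
```

Because the reward only needs the retriever's ranked output, the rewriter can be optimized with reinforcement learning against any frozen retriever, which is what makes the approach adaptable to existing search engines.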