Conversational question answering (ConvQA) is a simplified but concrete
setting of conversational search. One of its major challenges is to leverage
the conversation history to understand and answer the current question. In this
work, we propose a novel solution for ConvQA that involves three aspects.
First, we propose a positional history answer embedding method that uses BERT, a
powerful technique for text representation, to encode conversation history
together with its position information in a natural way. Second, we design a
history attention mechanism (HAM) to conduct a "soft selection" over
conversation histories. This mechanism attends to history turns with different
weights based on how helpful they are in answering the current question. Third,
in addition to handling conversation history, we take advantage of multi-task
learning (MTL) to perform answer prediction together with another essential
conversation task (dialog act prediction) using a uniform model architecture.
MTL enables the model to learn more expressive and generic representations, improving the
performance of ConvQA. We demonstrate the effectiveness of our model with
extensive experimental evaluations on QuAC, a large-scale ConvQA dataset. We
show that position information plays an important role in conversation history
modeling. We also visualize the history attention and provide new insights into
conversation history understanding.
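
For intuition, a minimal sketch of what a positional history answer embedding could look like follows: tokens that belong to a previous answer receive an extra learned embedding indexed by how many turns back that answer occurred. The names (pos_hae_table, add_positional_history_answer_embedding) and the table size are illustrative assumptions, not the paper's actual implementation details.

```python
import numpy as np

# Hypothetical sizes; the method builds on BERT, whose base hidden size is 768.
HIDDEN = 768
MAX_HISTORY = 11  # illustrative: index 0 = "not in any history answer", 1..10 = turns back

rng = np.random.default_rng(0)
# One learned embedding per relative history position (row 0 for ordinary tokens).
pos_hae_table = rng.normal(scale=0.02, size=(MAX_HISTORY, HIDDEN))

def add_positional_history_answer_embedding(token_embeddings, history_positions):
    """Add a position-aware history answer embedding to each token embedding.

    token_embeddings:  (seq_len, HIDDEN) token embeddings fed into BERT.
    history_positions: (seq_len,) int array; 0 if the token is not part of any
                       previous answer, otherwise how many turns back the
                       answer containing it occurred.
    """
    return token_embeddings + pos_hae_table[history_positions]
```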
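The "soft selection" of HAM can be sketched as a learned attention over per-turn representations: the weights sum to one, so every history turn contributes in proportion to its estimated usefulness for the current question. The numpy sketch below is a simplified illustration under that assumption; roughly, the paper applies such attention to BERT representations of history-augmented variations of the current question, and the names here (history_attention, turn_reprs, w) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def history_attention(turn_reprs, w):
    """Soft selection over conversation history turns.

    turn_reprs: (num_turns, hidden) -- one vector per history-augmented
                variation of the current question (e.g., BERT outputs).
    w:          (hidden,)           -- learned attention vector.
    Returns the per-turn attention weights and the aggregated representation.
    """
    scores = turn_reprs @ w            # (num_turns,)
    weights = softmax(scores)          # soft selection: weights sum to 1
    aggregated = weights @ turn_reprs  # (hidden,) weighted combination
    return weights, aggregated

# Toy usage: 4 history turns, hidden size 8.
rng = np.random.default_rng(0)
weights, agg = history_attention(rng.normal(size=(4, 8)), rng.normal(size=8))
print(weights, agg.shape)
```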
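Finally, the multi-task objective can be viewed as combining an answer-span loss with a dialog act classification loss computed over shared representations. The sketch below, including the weighting hyperparameter lam, is an assumed formulation for illustration only; the paper's exact loss definition and weighting may differ.

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy for a single example from unnormalized logits."""
    logits = logits - logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]

def multi_task_loss(start_logits, end_logits, start_label, end_label,
                    act_logits, act_label, lam=0.5):
    """Joint objective: answer-span prediction plus dialog act prediction.

    lam is an illustrative trade-off hyperparameter between the two tasks.
    """
    span_loss = cross_entropy(start_logits, start_label) + cross_entropy(end_logits, end_label)
    act_loss = cross_entropy(act_logits, act_label)
    return span_loss + lam * act_loss

# Toy usage: a 20-token passage span and 3 dialog act classes.
rng = np.random.default_rng(0)
loss = multi_task_loss(rng.normal(size=20), rng.normal(size=20), 4, 9,
                       rng.normal(size=3), 1)
print(loss)
```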