A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations
Matching natural language sentences is central for many applications such as
information retrieval and question answering. Existing deep models rely on a
single sentence representation or multiple granularity representations for
matching. However, such methods cannot well capture the contextualized local
information in the matching process. To tackle this problem, we present a new
deep architecture to match two sentences with multiple positional sentence
representations. Specifically, each positional sentence representation is a
sentence representation at this position, generated by a bidirectional long
short term memory (Bi-LSTM). The matching score is finally produced by
aggregating interactions between these different positional sentence
representations, through k-Max pooling and a multi-layer perceptron. Our
model has several advantages: (1) By using Bi-LSTM, rich context of the whole
sentence is leveraged to capture the contextualized local information in each
positional sentence representation; (2) By matching with multiple positional
sentence representations, it is flexible to aggregate different important
contextualized local information in a sentence to support the matching; (3)
Experiments on different tasks such as question answering and sentence
completion demonstrate the superiority of our model.
Comment: Accepted by AAAI-201
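The matching pipeline described above (positional representations, pairwise interactions, k-Max pooling, then an MLP) can be sketched in plain NumPy. This is an illustrative toy, not the paper's implementation: the Bi-LSTM is replaced by a simple local-averaging stub, and the MLP weights are random and untrained, so only the data flow is faithful.

```python
import numpy as np

rng = np.random.default_rng(0)

def bilstm_stub(embeddings):
    # Stand-in for a Bi-LSTM: each "positional representation" is a local
    # average over neighbouring word embeddings, keeping the example
    # dependency-free. A real model would run a bidirectional LSTM here.
    padded = np.pad(embeddings, ((1, 1), (0, 0)), mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def k_max_pool(scores, k):
    # Keep the k largest interaction values, in descending order.
    flat = np.sort(scores.ravel())[::-1]
    return flat[:k]

def match_score(sent_x, sent_y, k=5):
    """Cosine interactions between all pairs of positional representations,
    k-Max pooled and fed through a tiny (untrained) MLP."""
    px, py = bilstm_stub(sent_x), bilstm_stub(sent_y)
    px = px / np.linalg.norm(px, axis=1, keepdims=True)
    py = py / np.linalg.norm(py, axis=1, keepdims=True)
    interactions = px @ py.T          # one score per position pair
    pooled = k_max_pool(interactions, k)
    # Single-hidden-layer perceptron with random weights, for shape only.
    w1, b1 = rng.normal(size=(k, 8)), np.zeros(8)
    w2 = rng.normal(size=(8,))
    hidden = np.tanh(pooled @ w1 + b1)
    return float(hidden @ w2)

x = rng.normal(size=(6, 10))   # toy "sentence": 6 positions, dimension 10
y = rng.normal(size=(8, 10))
score = match_score(x, y)
```

In a trained model the MLP weights would be learned jointly with the Bi-LSTM, and the k strongest position-pair interactions act as the "different important contextualized local information" the abstract refers to.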
Neural Generative Question Answering
This paper presents an end-to-end neural network model, named Neural
Generative Question Answering (GENQA), that can generate answers to simple
factoid questions, based on the facts in a knowledge-base. More specifically,
the model is built on the encoder-decoder framework for sequence-to-sequence
learning, while equipped with the ability to enquire the knowledge-base, and is
trained on a corpus of question-answer pairs, with their associated triples in
the knowledge-base. Empirical study shows the proposed model can effectively
deal with variations of questions and answers, and generate correct and
natural answers by referring to the facts in the knowledge-base. The experiment
on question answering demonstrates that the proposed model can outperform an
embedding-based QA model as well as a neural dialogue model trained on the same
data.
Comment: Accepted by IJCAI 201
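The core idea of enquiring a knowledge-base during answer generation can be illustrated with a deliberately crude sketch. Everything here is hypothetical: the toy triple store, the word-overlap retrieval, and the templated decoder are stand-ins for GENQA's learned encoder-decoder and its soft switch between generating common words and emitting retrieved facts.

```python
# Toy knowledge-base of (subject, predicate, object) facts -- illustrative only.
KB = [
    ("paris", "capital_of", "france"),
    ("everest", "height", "8848m"),
]

def enquire_kb(question_words, kb=KB):
    # Return the object of the triple whose subject/predicate words best
    # overlap the question; None if nothing matches. A real system would
    # score triples with learned embeddings rather than word overlap.
    best, best_overlap = None, 0
    for subj, pred, obj in kb:
        keys = set(subj.split("_")) | set(pred.split("_"))
        overlap = len(keys & set(question_words))
        if overlap > best_overlap:
            best, best_overlap = obj, overlap
    return best

def answer(question):
    """Decoder stub: emit a fixed 'common word' prefix, then either the
    retrieved fact or a fallback -- a crude stand-in for GENQA's learned
    choice between generating a word and enquiring the knowledge-base."""
    words = question.lower().strip("?").split()
    fact = enquire_kb(words)
    return f"it is {fact}" if fact else "i do not know"
```

For example, `answer("paris is the capital of which country?")` retrieves the first triple and returns `"it is france"`, while an unmatched question falls back to the generation-only path.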