Learning to Rank Question Answer Pairs with Holographic Dual LSTM Architecture
We describe a new deep learning architecture for learning to rank question
answer pairs. Our approach extends the long short-term memory (LSTM) network
with holographic composition to model the relationship between question and
answer representations. As opposed to the neural tensor layer that has been
adopted recently, holographic composition provides scalable and rich
representational learning without incurring huge parameter costs. Overall, we
present Holographic Dual LSTM (HD-LSTM), a unified
architecture for both deep sentence modeling and semantic matching.
Essentially, our model is trained end-to-end whereby the parameters of the LSTM
are optimized in a way that best explains the correlation between question and
answer representations. In addition, our proposed deep learning architecture
requires no extensive feature engineering. Via extensive experiments, we show
that HD-LSTM outperforms many other neural architectures on two popular
benchmark QA datasets. Empirical studies confirm the effectiveness of
holographic composition over the neural tensor layer.
Comment: SIGIR 2017 Full Paper
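Holographic composition refers to circular correlation, which combines two d-dimensional vectors into a single d-dimensional vector and can be computed in O(d log d) time via the FFT; unlike a neural tensor layer, the composition itself introduces no extra parameters. A minimal NumPy sketch (function name and toy vectors are illustrative, not from the paper):

```python
import numpy as np

def circular_correlation(q, a):
    """Holographic composition of two equal-length vectors via
    circular correlation: [q * a]_k = sum_i q_i * a_{(i+k) mod n}.
    Computed in O(n log n) with the FFT identity
    corr(q, a) = ifft(conj(fft(q)) * fft(a))."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(q)) * np.fft.fft(a)))

# Toy example: composing a 4-d "question" and "answer" vector.
q = np.array([1.0, 0.0, 0.0, 0.0])       # e_0 acts as identity for correlation
a = np.array([0.5, 0.25, 0.125, 0.125])
print(circular_correlation(q, a))         # returns a itself, since q = e_0
```

The output stays d-dimensional regardless of input dimension, which is the source of the parameter savings the abstract claims: the downstream layers see a fixed-size composed vector rather than a bilinear tensor of interactions.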
CoaCor: Code Annotation for Code Retrieval with Reinforcement Learning
To accelerate software development, much research has been performed to help
people understand and reuse the huge amount of available code resources. Two
important tasks have been widely studied: code retrieval, which aims to
retrieve code snippets relevant to a given natural language query from a code
base, and code annotation, where the goal is to annotate a code snippet with a
natural language description. Despite their advancement in recent years, the
two tasks are mostly explored separately. In this work, we investigate a novel
perspective of Code annotation for Code retrieval (hence called `CoaCor'),
where a code annotation model is trained to generate a natural language
annotation that can represent the semantic meaning of a given code snippet and
can be leveraged by a code retrieval model to better distinguish relevant code
snippets from others. To this end, we propose an effective framework based on
reinforcement learning, which explicitly encourages the code annotation model
to generate annotations that can be used for the retrieval task. Through
extensive experiments, we show that code annotations generated by our framework
are much more detailed and more useful for code retrieval, and they can further
improve the performance of existing code retrieval models significantly.
Comment: 10 pages, 2 figures. Accepted by The Web Conference (WWW) 2019
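The core idea is to reward the annotation model with a retrieval-based signal: an annotation scores highly only if a retrieval model, given that annotation as the query, ranks the original code snippet above the rest of the code base. A toy sketch of such a reward, assuming token overlap as a stand-in for the paper's trained retrieval model and reciprocal rank as the reward shape:

```python
def retrieval_reward(annotation_tokens, target_code, code_base):
    """Retrieval-based reward for a generated annotation (an illustrative
    assumption, not CoaCor's exact formula): score every snippet in the
    code base by token overlap with the annotation, then reward the
    annotator with the reciprocal rank of the target snippet."""
    def score(code_tokens):
        return len(set(annotation_tokens) & set(code_tokens))

    scores = [score(c) for c in code_base]
    target_score = score(target_code)
    rank = 1 + sum(s > target_score for s in scores)  # 1 = ranked first
    return 1.0 / rank

code_base = [
    ["def", "add", "a", "b", "return", "a", "b"],
    ["def", "sort", "xs", "return", "sorted", "xs"],
    ["def", "read", "path", "open", "path"],
]
target = code_base[1]
good = ["sort", "a", "list", "xs"]    # annotation describing the target
bad = ["open", "a", "file", "path"]   # annotation describing another snippet
print(retrieval_reward(good, target, code_base))  # 1.0: target ranked first
print(retrieval_reward(bad, target, code_base))   # lower: target buried
```

In the RL framework this scalar would drive a policy-gradient update of the annotation model, so generation is optimized directly for retrieval usefulness rather than for resemblance to human-written descriptions.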
A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations
Matching natural language sentences is central for many applications such as
information retrieval and question answering. Existing deep models rely on a
single sentence representation or multiple granularity representations for
matching. However, such methods cannot well capture the contextualized local
information in the matching process. To tackle this problem, we present a new
deep architecture to match two sentences with multiple positional sentence
representations. Specifically, each positional sentence representation is a
sentence representation at this position, generated by a bidirectional long
short-term memory network (Bi-LSTM). The matching score is finally produced by
aggregating interactions between these different positional sentence
representations, through k-Max pooling and a multi-layer perceptron. Our
model has several advantages: (1) By using Bi-LSTM, rich context of the whole
sentence is leveraged to capture the contextualized local information in each
positional sentence representation; (2) By matching with multiple positional
sentence representations, it is flexible to aggregate different important
contextualized local information in a sentence to support the matching; (3)
Experiments on different tasks such as question answering and sentence
completion demonstrate the superiority of our model.
Comment: Accepted by AAAI-2016
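The matching pipeline above can be sketched in a few lines: every pair of positional representations interacts (here via cosine similarity, one of the interaction functions the paper considers), k-Max pooling keeps the strongest matching signals, and a final scorer aggregates them. A NumPy sketch with random vectors standing in for Bi-LSTM hidden states and a simple mean in place of the trained multi-layer perceptron:

```python
import numpy as np

def k_max_pool(interactions, k):
    """k-Max pooling over the flattened interaction matrix:
    keep the k strongest matching signals, in descending order."""
    return np.sort(interactions.ravel())[::-1][:k]

def match_score(H_q, H_a, k=3):
    """Match two sentences from their positional representations.
    H_q, H_a: (length, dim) arrays standing in for Bi-LSTM hidden
    states at each position. The final scorer here is a plain mean
    for illustration; the paper trains a multi-layer perceptron."""
    Hq = H_q / np.linalg.norm(H_q, axis=1, keepdims=True)
    Ha = H_a / np.linalg.norm(H_a, axis=1, keepdims=True)
    interactions = Hq @ Ha.T              # (len_q, len_a) cosine matrix
    return k_max_pool(interactions, k).mean()

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 8))               # 5 positions, 8-dim states
a1 = q + 0.01 * rng.normal(size=(5, 8))   # near-duplicate sentence
a2 = rng.normal(size=(6, 8))              # unrelated sentence
print(match_score(q, a1), match_score(q, a2))
```

Because pooling selects the top interactions wherever they occur in the matrix, strong local matches between any pair of positions can drive the score, which is how the model aggregates "different important contextualized local information" across the two sentences.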