Bilateral Multi-Perspective Matching for Natural Language Sentences
Natural language sentence matching is a fundamental technology for a variety
of tasks. Previous approaches either match sentences from a single direction or
apply matching at only a single granularity (word-by-word or sentence-by-sentence). In
this work, we propose a bilateral multi-perspective matching (BiMPM) model
under the "matching-aggregation" framework. Given two sentences P and Q,
our model first encodes them with a BiLSTM encoder. Next, we match the two
encoded sentences in two directions, P against Q and Q against P. In
each matching direction, each time step of one sentence is matched against all
time steps of the other sentence from multiple perspectives. Then, another
BiLSTM layer is utilized to aggregate the matching results into a fixed-length
matching vector. Finally, based on the matching vector, the decision is made
through a fully connected layer. We evaluate our model on three tasks:
paraphrase identification, natural language inference and answer sentence
selection. Experimental results on standard benchmark datasets show that our
model achieves state-of-the-art performance on all tasks.

Comment: To appear in Proceedings of IJCAI 201
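The core of the matching step can be illustrated with a multi-perspective cosine match: each of l learned perspectives element-wise re-weights the two hidden vectors before a cosine similarity is taken. This is a minimal NumPy sketch of that operation, not the authors' code; the perspective count, dimensions, and the function name multi_perspective_match are illustrative assumptions.

```python
import numpy as np

def multi_perspective_match(v1, v2, W):
    """Match two d-dim vectors from l perspectives.

    Each row of W (shape l x d) element-wise re-weights both input
    vectors before a cosine similarity is taken, producing an l-dim
    matching vector with one similarity score per perspective.
    (Illustrative sketch; shapes and names are assumptions.)
    """
    a = W * v1  # (l, d): perspective-weighted copies of v1
    b = W * v2  # (l, d): perspective-weighted copies of v2
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
    return num / den  # (l,) matching vector

rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=8), rng.normal(size=8)
W = rng.normal(size=(4, 8))  # 4 perspectives over 8-dim encodings
m = multi_perspective_match(v1, v2, W)
print(m.shape)  # (4,)
```

In the full model this function would be applied between each time step of one sentence and the time steps of the other, in both directions, before the aggregation BiLSTM.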
Tensorized Self-Attention: Efficiently Modeling Pairwise and Global Dependencies Together
Neural networks equipped with self-attention have parallelizable computation,
light-weight structure, and the ability to capture both long-range and local
dependencies. Further, their expressive power and performance can be boosted by
using a vector to measure each pairwise dependency, but this requires expanding the
alignment matrix into a tensor, which creates memory and computation
bottlenecks. In this paper, we propose a novel attention mechanism called
"Multi-mask Tensorized Self-Attention" (MTSA), which is as fast and as
memory-efficient as a CNN, but significantly outperforms previous
CNN-/RNN-/attention-based models. MTSA 1) captures both pairwise (token2token)
and global (source2token) dependencies by a novel compatibility function
composed of dot-product and additive attentions, 2) uses a tensor to represent
the feature-wise alignment scores for better expressive power but only requires
parallelizable matrix multiplications, and 3) combines multi-head with
multi-dimensional attentions, and applies a distinct positional mask to each
head (subspace), so the memory and computation can be distributed to multiple
heads, each with sequential information encoded independently. The experiments
show that a CNN/RNN-free model based on MTSA achieves state-of-the-art or
competitive performance on nine NLP benchmarks with compelling memory- and
time-efficiency.
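The key efficiency claim is that scalar pairwise (token2token) scores and feature-wise (source2token) scores can be combined without ever materializing the full n x n x d score tensor, because a softmax over additive logits factorizes into matrix multiplications. The sketch below illustrates that broadcasting trick in NumPy; it is a simplified single-head illustration under assumed parameter shapes, not the authors' MTSA implementation, and the name mtsa_like_attention is hypothetical.

```python
import numpy as np

def mtsa_like_attention(X, Wq, Wk, Wa, mask=None):
    """Combine pairwise dot-product scores with feature-wise additive
    scores without materializing the n x n x d tensor.

    Illustrative sketch (assumed shapes): X is (n, d); Wq, Wk, Wa are (d, d).
    The effective logit for query token i, source token j, feature k is
    s_pair[i, j] + s_feat[j, k]; since exp() of a sum factorizes, the
    softmax-weighted sum over j reduces to two matrix multiplications.
    """
    n, d = X.shape
    Q, K = X @ Wq, X @ Wk
    s_pair = Q @ K.T / np.sqrt(d)   # (n, n) scalar token2token scores
    s_feat = np.tanh(X @ Wa)        # (n, d) feature-wise source2token scores
    if mask is not None:
        s_pair = np.where(mask, s_pair, -1e9)  # positional mask on pairs
    # Subtract per-row / per-column maxima for numerical stability; the
    # constants cancel between numerator and denominator.
    e_pair = np.exp(s_pair - s_pair.max(axis=1, keepdims=True))  # (n, n)
    e_feat = np.exp(s_feat - s_feat.max(axis=0, keepdims=True))  # (n, d)
    num = e_pair @ (e_feat * X)     # (n, d) weighted values
    den = e_pair @ e_feat           # (n, d) normalizers
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
Wq, Wk, Wa = (rng.normal(size=(8, 8)) for _ in range(3))
out = mtsa_like_attention(X, Wq, Wk, Wa)
print(out.shape)  # (6, 8)
```

Each output feature is a convex combination (over source tokens) of the corresponding input feature, so the memory cost stays at O(n^2 + nd) rather than O(n^2 d).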