Recurrent Memory Networks for Language Modeling
Recurrent Neural Networks (RNNs) have obtained excellent results in many
natural language processing (NLP) tasks. However, understanding and
interpreting the source of this success remains a challenge. In this paper, we
propose the Recurrent Memory Network (RMN), a novel RNN architecture that not
only amplifies the power of RNNs but also facilitates our understanding of
their internal functioning and allows us to discover underlying patterns in
data. We demonstrate the power of the RMN on language modeling and sentence
completion tasks. On language modeling, the RMN outperforms the Long
Short-Term Memory (LSTM) network on three large German, Italian, and English
datasets. Additionally, we perform an in-depth analysis of the various
linguistic dimensions that the RMN captures. On the Sentence Completion
Challenge, for which it is essential to capture sentence coherence, our RMN
obtains 69.2% accuracy, surpassing the previous state of the art by a large
margin.
Comment: 8 pages, 6 figures. Accepted at NAACL 2016
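
The abstract does not spell out the memory mechanism, but a minimal sketch of
one plausible reading, assuming the memory block is an attention layer over
the embeddings of the most recent words whose output is mixed into the LSTM
hidden state before prediction, could look as follows. All names here
(MemoryBlock, RMNLanguageModel, mem_size) are illustrative assumptions, not
the paper's own code.

# Hedged sketch of an RMN-style language model in PyTorch: an LSTM whose
# hidden state queries an attention "memory block" over recent embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryBlock(nn.Module):
    """Attention over the last `mem_size` input embeddings, queried by h_t."""
    def __init__(self, dim, mem_size):
        super().__init__()
        self.mem_size = mem_size
        self.query = nn.Linear(dim, dim, bias=False)

    def forward(self, h_t, memory):
        # h_t: (batch, dim); memory: (batch, mem_size, dim)
        q = self.query(h_t).unsqueeze(2)               # (batch, dim, 1)
        scores = torch.bmm(memory, q).squeeze(2)       # (batch, mem_size)
        alpha = F.softmax(scores, dim=1)               # attention weights
        context = torch.bmm(alpha.unsqueeze(1), memory).squeeze(1)
        return h_t + context                           # residual mix

class RMNLanguageModel(nn.Module):
    def __init__(self, vocab, dim=128, mem_size=15):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.memory = MemoryBlock(dim, mem_size)
        self.out = nn.Linear(dim, vocab)
        self.mem_size = mem_size

    def forward(self, tokens):
        # tokens: (batch, seq_len) word ids
        emb = self.embed(tokens)                       # (batch, seq, dim)
        hidden, _ = self.lstm(emb)
        logits = []
        for t in range(tokens.size(1)):
            lo = max(0, t - self.mem_size + 1)
            window = emb[:, lo:t + 1]                  # recent embeddings
            if window.size(1) < self.mem_size:         # left-pad with zeros
                pad = window.new_zeros(window.size(0),
                                       self.mem_size - window.size(1),
                                       window.size(2))
                window = torch.cat([pad, window], dim=1)
            logits.append(self.out(self.memory(hidden[:, t], window)))
        return torch.stack(logits, dim=1)              # (batch, seq, vocab)

Such a model would be trained like any LSTM language model, with cross-entropy
against next-word targets; the attention weights alpha are what would expose
the "underlying patterns in data" the abstract refers to.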
Structural Embedding of Syntactic Trees for Machine Comprehension
Deep neural networks for machine comprehension typically utilize only word or
character embeddings, without explicitly taking advantage of structured
linguistic information such as constituency trees and dependency trees. In
this paper, we propose structural embedding of syntactic trees (SEST), an
algorithmic framework that encodes such structured information into vector
representations which can boost the performance of machine comprehension
algorithms. We evaluate our approach using a state-of-the-art neural attention
model on the SQuAD dataset. Experimental results demonstrate that our model
can accurately identify the syntactic boundaries of sentences and extract
answers that are more syntactically coherent than those of the baseline
methods.
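
The abstract leaves the encoding scheme open; a minimal sketch of one way to
realize it, assuming each token's path of syntactic tags up to the root is
embedded with a small GRU and concatenated with the word embedding before the
comprehension encoder, might be the following. Names such as
StructuralEmbedding and tag_paths are hypothetical, introduced only for
illustration.

# Hedged sketch of SEST-style input features in PyTorch: per-token tag paths
# are encoded with a GRU and concatenated with word embeddings.
import torch
import torch.nn as nn

class StructuralEmbedding(nn.Module):
    """Encode each token's sequence of syntactic tags into a fixed vector."""
    def __init__(self, num_tags, tag_dim=32, out_dim=64):
        super().__init__()
        self.tag_embed = nn.Embedding(num_tags, tag_dim, padding_idx=0)
        self.encoder = nn.GRU(tag_dim, out_dim, batch_first=True)

    def forward(self, tag_paths):
        # tag_paths: (batch, seq_len, path_len) tag ids, 0 = padding
        b, s, p = tag_paths.shape
        emb = self.tag_embed(tag_paths.view(b * s, p))  # (b*s, p, tag_dim)
        _, h = self.encoder(emb)                        # (1, b*s, out_dim)
        return h.squeeze(0).view(b, s, -1)              # (batch, seq, out_dim)

class WordPlusStructure(nn.Module):
    """Concatenate word embeddings with structural embeddings."""
    def __init__(self, vocab, num_tags, word_dim=100, struct_dim=64):
        super().__init__()
        self.word_embed = nn.Embedding(vocab, word_dim)
        self.struct = StructuralEmbedding(num_tags, out_dim=struct_dim)

    def forward(self, tokens, tag_paths):
        return torch.cat([self.word_embed(tokens),
                          self.struct(tag_paths)], dim=-1)

The resulting (word_dim + struct_dim) vectors would then feed whatever
attention-based reader is used downstream, which is one way the syntactic
signal could reach the answer-extraction layer.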