Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop
The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques
specifically developed for analyzing and understanding the inner-workings and
representations acquired by neural models of language. Approaches included:
systematic manipulation of input to neural networks and investigating the
impact on their performance, testing whether interpretable knowledge can be
decoded from intermediate representations acquired by neural networks,
proposing modifications to neural network architectures to make their knowledge
state or generated output more explainable, and examining the performance of
networks on simplified or formal languages. Here we review a number of
representative studies in each category.
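One of the simplest techniques in the second category is a diagnostic probe: a small classifier trained to predict a linguistic property from a model's intermediate activations. The sketch below illustrates the idea only; the activations and labels are random placeholders standing in for representations exported from a real model.

    # Minimal sketch of a diagnostic probe: test whether a (hypothetical) binary
    # linguistic property can be decoded from intermediate representations.
    # hidden_states and labels are random stand-ins for real model activations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    hidden_states = rng.normal(size=(1000, 256))   # hypothetical layer activations
    labels = rng.integers(0, 2, size=1000)         # hypothetical property labels

    X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, labels, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("probe accuracy:", probe.score(X_te, y_te))  # ~chance here, since the data is random

A probe accuracy well above chance on real activations is usually read as evidence that the property is encoded in that layer, with the caveat (raised at the workshop) that a strong probe can also learn the property itself.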
Self-Adaptive Hierarchical Sentence Model
The ability to accurately model a sentence at varying stages (e.g.,
word-phrase-sentence) plays a central role in natural language processing. As
an effort towards this goal we propose a self-adaptive hierarchical sentence
model (AdaSent). AdaSent effectively forms a hierarchy of representations from
words to phrases and then to sentences through recursive gated local
composition of adjacent segments. We design a competitive mechanism (through
gating networks) to allow the representations of the same sentence to be
engaged in a particular learning task (e.g., classification), thereby
effectively mitigating the vanishing-gradient problem persistent in other
recursive models. Both qualitative and quantitative analyses show that AdaSent
can automatically form and select the representations suitable for the task at
hand during training, yielding superior classification performance over
competitor models on 5 benchmark data sets.Comment: 8 pages, 7 figures, accepted as a full paper at IJCAI 201
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time,
Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with
a more complex computational unit, have obtained strong results on a variety of
sequence modeling tasks. The only underlying LSTM structure that has been
explored so far is a linear chain. However, natural language exhibits syntactic
properties that would naturally combine words into phrases. We introduce the
Tree-LSTM, a generalization of LSTMs to tree-structured network topologies.
Tree-LSTMs outperform all existing systems and strong LSTM baselines on two
tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task
1) and sentiment classification (Stanford Sentiment Treebank).
Comment: Accepted for publication at ACL 201
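As a concrete illustration of the generalization, the sketch below implements a Child-Sum-style Tree-LSTM cell, in which the gates depend on the sum of the children's hidden states and each child receives its own forget gate. The dimensions and toy inputs are illustrative, not the authors' setup.

    # Minimal sketch of a Child-Sum Tree-LSTM cell.
    import torch
    import torch.nn as nn

    class ChildSumTreeLSTMCell(nn.Module):
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.iou = nn.Linear(in_dim + hid_dim, 3 * hid_dim)  # input, output, update gates
            self.f_x = nn.Linear(in_dim, hid_dim)                 # forget gate, input part
            self.f_h = nn.Linear(hid_dim, hid_dim)                # forget gate, per-child part

        def forward(self, x, child_h, child_c):      # child_h, child_c: (n_children, hid_dim)
            h_sum = child_h.sum(dim=0)                # Child-Sum: gates see the summed children
            i, o, u = self.iou(torch.cat([x, h_sum])).chunk(3)
            i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
            f = torch.sigmoid(self.f_x(x) + self.f_h(child_h))    # one forget gate per child
            c = i * u + (f * child_c).sum(dim=0)
            h = o * torch.tanh(c)
            return h, c

    cell = ChildSumTreeLSTMCell(in_dim=32, hid_dim=64)
    x = torch.randn(32)                                         # word embedding at this node
    child_h, child_c = torch.zeros(2, 64), torch.zeros(2, 64)   # two (leaf) children
    h, c = cell(x, child_h, child_c)
    print(h.shape, c.shape)

A tree is processed by applying the cell bottom-up: leaves receive zero child states, and each internal node consumes the hidden and cell states of its children.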
Restricted Recurrent Neural Networks
Recurrent Neural Networks (RNNs) and their variations, such as Long Short-Term
Memory (LSTM) and Gated Recurrent Unit (GRU) networks, have become standard
building blocks for learning from sequential data in many research areas,
including natural language processing and speech analysis. In this paper,
we present a new methodology to significantly reduce the number of parameters
in RNNs while maintaining performance comparable to, or even better than, that
of classical RNNs. The new proposal, referred to as Restricted Recurrent Neural
Network (RRNN), restricts the weight matrices corresponding to the input data
and hidden states at each time step to share a large proportion of parameters.
The new architecture can be regarded as a compression of its classical
counterpart, but it does not require pre-training or sophisticated parameter
fine-tuning, both of which are major issues in most existing compression
techniques. Experiments on natural language modeling show that, compared with
its classical counterpart, the restricted recurrent architecture generally
produces comparable results at about a 50% compression rate. In particular, the
Restricted LSTM can outperform the classical RNN with even fewer
parameters.
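The paper's exact restriction scheme is not spelled out in the abstract, so the sketch below only illustrates the general idea it names: the input-to-hidden and hidden-to-hidden matrices share a common core and differ only in a small per-matrix correction, roughly halving the recurrent parameter count. The names, the diagonal-scaling choice, and the assumption that input and hidden sizes are equal are all illustrative, not the paper's construction.

    # Minimal sketch of parameter sharing between the input and recurrent weight
    # matrices of a vanilla RNN cell (illustrative; not the paper's exact scheme).
    import torch
    import torch.nn as nn

    class RestrictedRNNCell(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.shared = nn.Parameter(torch.randn(dim, dim) * 0.01)  # shared core matrix
            self.dx = nn.Parameter(torch.ones(dim))                   # input-specific scaling
            self.dh = nn.Parameter(torch.ones(dim))                   # hidden-specific scaling
            self.bias = nn.Parameter(torch.zeros(dim))

        def forward(self, x, h):
            W_x = self.shared * self.dx            # per-column rescaling of the shared core
            W_h = self.shared * self.dh
            return torch.tanh(x @ W_x + h @ W_h + self.bias)

    cell = RestrictedRNNCell(dim=128)
    x, h = torch.randn(4, 128), torch.zeros(4, 128)   # batch of 4, hypothetical sizes
    h = cell(x, h)
    print(h.shape)                                     # torch.Size([4, 128])

Relative to two independent dim-by-dim matrices, this layout stores one full matrix plus two length-dim vectors, which is the kind of compression the abstract reports without any pre-training or post-hoc fine-tuning.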