Unsupervised Dependency Parsing: Let's Use Supervised Parsers
We present a self-training approach to unsupervised dependency parsing that
reuses existing supervised and unsupervised parsing algorithms. Our approach,
called `iterated reranking' (IR), starts with dependency trees generated by an
unsupervised parser, and iteratively improves these trees using the richer
probability models used in supervised parsing that are in turn trained on these
trees. Our system achieves accuracy 1.8% higher than the state-of-the-art
parser of Spitkovsky et al. (2013) on the WSJ corpus.
Comment: 11 pages
Dependency Parsing with Dilated Iterated Graph CNNs
Dependency parses are an effective way to inject linguistic knowledge into
many downstream tasks, and many practitioners wish to efficiently parse
sentences at scale. While recent advances in GPU hardware have enabled neural
networks to achieve significant gains over the previous best models, these
models still fail to leverage GPUs' capability for massive parallelism due to
their requirement of sequential processing of the sentence. In response, we
propose Dilated Iterated Graph Convolutional Neural Networks (DIG-CNNs) for
graph-based dependency parsing, a graph convolutional architecture that allows
for efficient end-to-end GPU parsing. In experiments on the English Penn
TreeBank benchmark, we show that DIG-CNNs perform on par with some of the best
neural network parsers.
Comment: 2nd Workshop on Structured Prediction for Natural Language Processing (at EMNLP '17)
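The key building block the DIG-CNN abstract refers to, a dilated convolution whose receptive field grows with the dilation rate, can be sketched over token representations as below. The single-filter, plain-numpy setup is illustrative only, not the paper's architecture.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Dilated 1-D convolution over a sentence.

    x: (seq_len, d_in) token representations
    w: (k, d_in, d_out) filter taps
    Zero-pads so the output keeps seq_len; stacking layers with growing
    dilation widens the receptive field exponentially, and every output
    position can be computed in parallel (the GPU-friendly property).
    """
    k, d_in, d_out = w.shape
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))      # zero-pad the sequence ends
    out = np.zeros((x.shape[0], d_out))
    for t in range(x.shape[0]):
        for i in range(k):                    # gather taps `dilation` apart
            out[t] += xp[t + i * dilation] @ w[i]
    return out
```

Unlike a recurrent encoder, nothing here depends on the previous time step, which is why such architectures parallelize across the whole sentence.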
Parameterized Neural Network Language Models for Information Retrieval
Information Retrieval (IR) models need to deal with two difficult issues,
vocabulary mismatch and term dependencies. Vocabulary mismatch corresponds to
the difficulty of retrieving relevant documents that do not contain exact query
terms but semantically related terms. Term dependencies refer to the need to
consider the relationships between the words of a query when estimating the
relevance of a document. A multitude of solutions has been proposed to solve
each of these two problems, but no principled model solves both. In parallel, in
the last few years, language models based on neural networks have been used to
cope with complex natural language processing tasks like emotion and paraphrase
detection. Although such models are well suited to both the term-dependency
and vocabulary-mismatch problems, thanks to the distributed word
representations they are based upon, they cannot be used readily in IR, where
one language model must be estimated per document (or query). This is both
computationally infeasible and prone to over-fitting. Building on recent work
that proposed to learn a generic language model that can be modified through a
set of document-specific parameters, we explore the use of new neural network
models adapted to ad-hoc IR tasks.
Within the language model IR framework, we propose and study the use of a
generic language model as well as a document-specific language model. Both can
be used as a smoothing component, but the latter is more adapted to the
document at hand and has the potential of being used as a full document
language model. We experiment with such models and analyze their results on
TREC-1 to TREC-8 datasets.
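The "generic model plus document-specific parameters" idea above can be sketched as a shared softmax layer shifted by a small per-document vector. The function name and the bias-vector parameterization are assumptions for illustration, not the cited model.

```python
import numpy as np

def doc_language_model(shared_logits, doc_bias):
    """Per-document word distribution from a shared neural LM.

    shared_logits: (V,) scores from the generic language model
    doc_bias:      (V,) learned document-specific parameters
    Only the small bias is estimated per document, avoiding the cost
    and over-fitting of training a full language model per document.
    """
    z = shared_logits + doc_bias     # adapt the generic model to this document
    z = z - z.max()                  # numerical stability before exponentiation
    p = np.exp(z)
    return p / p.sum()               # softmax over the vocabulary
```

In a language-model IR framework, such a distribution can serve as a smoothing component or, when the bias is well estimated, as the document model itself.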
Incremental construction of LSTM recurrent neural network
Long Short-Term Memory (LSTM) is a recurrent neural network that
uses structures called memory blocks to let the network remember
significant events far back in the input sequence, in order to
solve long-time-lag tasks where other RNN approaches fail.
In this work we have performed experiments using LSTM
networks extended with growing abilities, which we call GLSTM.
Four methods of training growing LSTMs have been compared. These
methods include cascade and fully connected hidden layers, as well
as two different levels of freezing previous weights in the
cascade case. GLSTM has been applied to a forecasting problem in a
biomedical domain, where the input/output behavior of five
controllers of the Central Nervous System has to be modelled. We
have compared growing LSTM results against other neural network
approaches and against our earlier work applying conventional
LSTM to the task at hand.
Postprint (published version)
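The cascade-growing scheme described above can be sketched as follows: hidden blocks are appended one at a time, each fed by the input plus the outputs of earlier blocks, whose weights may be frozen. Plain tanh units stand in for LSTM memory blocks to keep the sketch short; the class and its method names are hypothetical.

```python
import numpy as np

class CascadeNet:
    """Toy cascade-growing network: new blocks see the input and all
    previous blocks' outputs; earlier blocks can be frozen."""

    def __init__(self, n_in):
        self.n_in = n_in
        self.blocks = []          # list of [weights, frozen?] pairs

    def grow(self, n_hidden, rng):
        # New block's fan-in = raw input + every earlier block's output.
        fan_in = self.n_in + sum(w.shape[1] for w, _ in self.blocks)
        w = rng.standard_normal((fan_in, n_hidden)) * 0.1
        self.blocks.append([w, False])

    def freeze_previous(self):
        # One of the training regimes compared: keep earlier weights fixed.
        for b in self.blocks[:-1]:
            b[1] = True

    def forward(self, x):
        feats = x
        h = x
        for w, _ in self.blocks:
            h = np.tanh(feats @ w)
            feats = np.concatenate([feats, h])  # cascade connectivity
        return h                                # newest block's output
```

A training loop would update only the blocks whose frozen flag is False, which distinguishes the two cascade variants the abstract mentions.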