Unsupervised Dependency Parsing: Let's Use Supervised Parsers
We present a self-training approach to unsupervised dependency parsing that
reuses existing supervised and unsupervised parsing algorithms. Our approach,
called `iterated reranking' (IR), starts with dependency trees generated by an
unsupervised parser, and iteratively improves these trees using the richer
probability models used in supervised parsing that are in turn trained on these
trees. Our system achieves accuracy 1.8% higher than the state-of-the-art
parser of Spitkovsky et al. (2013) on the WSJ corpus.
Comment: 11 pages
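The iteration described above can be made concrete with a small, self-contained sketch. Everything in it is a toy stand-in rather than the authors' system: the unsupervised parser simply attaches each word to its left neighbour, the "richer" supervised model is a bare head-dependent count model, and the k-best list is replaced by brute-force enumeration over very short sentences. The point is only to show the shape of the loop: bootstrap trees, train on them, re-select trees, repeat.

from collections import Counter
from itertools import product

def unsupervised_parse(words):
    # toy initialiser: head of word i is word i-1; word 0 attaches to ROOT (-1)
    return tuple(i - 1 for i in range(len(words)))

def train_supervised(corpus, trees):
    # toy "richer model": counts of (head word, dependent word) pairs
    counts = Counter()
    for words, heads in zip(corpus, trees):
        for dep, head in enumerate(heads):
            counts[("ROOT" if head < 0 else words[head], words[dep])] += 1
    return counts

def score(model, words, heads):
    return sum(model[("ROOT" if h < 0 else words[h], words[d])]
               for d, h in enumerate(heads))

def is_tree(heads):
    # every word must reach ROOT (-1) without revisiting a node
    for start in range(len(heads)):
        seen, h = set(), start
        while h != -1:
            if h in seen:
                return False
            seen.add(h)
            h = heads[h]
    return True

def candidate_trees(words):
    # stand-in for a k-best list: all single-root dependency trees (tiny sentences only)
    n = len(words)
    for heads in product(range(-1, n), repeat=n):
        if sum(h == -1 for h in heads) == 1 and is_tree(heads):
            yield heads

def iterated_reranking(corpus, n_iters=3):
    trees = [unsupervised_parse(w) for w in corpus]              # step 0: bootstrap
    for _ in range(n_iters):
        model = train_supervised(corpus, trees)                  # train on current trees
        trees = [max(candidate_trees(w), key=lambda t: score(model, w, t))
                 for w in corpus]                                # keep the best-scoring tree
    return trees

corpus = [("the", "dog", "barks"), ("a", "cat", "sleeps")]
print(iterated_reranking(corpus))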
Compositional Distributional Semantics with Long Short Term Memory
We propose an extension of the recursive neural network that makes use
of a variant of the long short-term memory architecture. The extension allows
information low in parse trees to be stored in a memory register (the `memory
cell') and used much later, higher up in the parse tree. This provides a
solution to the vanishing gradient problem and allows the network to capture
long range dependencies. Experimental results show that our composition
outperformed the traditional neural-network composition on the Stanford
Sentiment Treebank.
Comment: 10 pages, 7 figures
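As a rough illustration of the kind of composition described above, the following numpy sketch combines the hidden states and memory cells of the two children of a binary parse-tree node into a parent (h, c) pair, so that information from low in the tree can be carried upward through the cell. The gating parameterisation and dimensionality here are assumptions for illustration, not necessarily the exact formulation used in the paper.

import numpy as np

rng = np.random.default_rng(0)
D = 50  # dimensionality of word/phrase representations (arbitrary choice)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# one weight matrix per gate, acting on the concatenated child hidden states
W = {g: rng.normal(scale=0.1, size=(D, 2 * D)) for g in ("i", "fl", "fr", "o", "u")}
b = {g: np.zeros(D) for g in ("i", "fl", "fr", "o", "u")}

def compose(hl, cl, hr, cr):
    x = np.concatenate([hl, hr])
    i  = sigmoid(W["i"]  @ x + b["i"])    # input gate
    fl = sigmoid(W["fl"] @ x + b["fl"])   # forget gate for the left child cell
    fr = sigmoid(W["fr"] @ x + b["fr"])   # forget gate for the right child cell
    o  = sigmoid(W["o"]  @ x + b["o"])    # output gate
    u  = np.tanh(W["u"]  @ x + b["u"])    # candidate update
    c  = i * u + fl * cl + fr * cr        # parent memory cell
    h  = o * np.tanh(c)                   # parent hidden state
    return h, c

# leaves are word embeddings with empty memory cells; compose bottom-up along the tree
h_dog, c_dog = rng.normal(size=D), np.zeros(D)
h_barks, c_barks = rng.normal(size=D), np.zeros(D)
h_vp, c_vp = compose(h_dog, c_dog, h_barks, c_barks)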
Quantifying the vanishing gradient and long distance dependency problem in recursive neural networks and recursive LSTMs
Recursive neural networks (RNN) and their recently proposed extension
recursive long short term memory networks (RLSTM) are models that compute
representations for sentences, by recursively combining word embeddings
according to an externally provided parse tree. Both models thus, unlike
recurrent networks, explicitly make use of the hierarchical structure of a
sentence. In this paper, we demonstrate that RNNs nevertheless suffer from the
vanishing gradient and long distance dependency problem, and that RLSTMs
greatly improve over RNNs on these problems. We present an artificial learning
task that allows us to quantify the severity of these problems for both models.
We further show that a ratio of gradients (at the root node and a focal leaf
node) is highly indicative of the success of backpropagation at optimizing the
relevant weights low in the tree. This paper thus provides an explanation for
existing, superior results of RLSTMs on tasks such as sentiment analysis, and
suggests that the benefits of including hierarchical structure and of including
LSTM-style gating are complementary.
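The gradient-ratio diagnostic can be illustrated with a few lines of PyTorch: build a deep chain of tanh compositions standing in for a long root-to-leaf path in a recursive network, backpropagate from the root, and compare the gradient norm at the root with the gradient norm at the deepest leaf. The depth, dimensionality and composition function are illustrative assumptions, not the authors' experimental setup.

import torch

torch.manual_seed(0)
D, depth = 50, 20                      # representation size and path length
W = torch.randn(D, 2 * D) * 0.1        # shared composition weights

leaves = [torch.randn(D, requires_grad=True) for _ in range(depth + 1)]

h = leaves[0]
for leaf in leaves[1:]:
    h = torch.tanh(W @ torch.cat([h, leaf]))   # plain RNN-style composition

h.retain_grad()                        # keep the gradient at the root node
loss = h.sum()                         # any scalar objective will do
loss.backward()

root_grad = h.grad.norm()
focal_leaf_grad = leaves[0].grad.norm()        # the leaf deepest in the chain
print(f"gradient ratio (leaf / root): {(focal_leaf_grad / root_grad).item():.3e}")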
Refinements of Miller's Algorithm over Weierstrass Curves Revisited
In 1986 Victor Miller described an algorithm for computing the Weil pairing
in his unpublished manuscript. This algorithm has since become the core of all
pairing-based cryptosystems. Many improvements of the algorithm have been
presented. Most of them involve choosing elliptic curves of \emph{special}
forms to exploit a possible twist during Tate pairing computation. Other
improvements involve reducing the number of iterations in Miller's
algorithm. For the generic case, Blake, Murty and Xu proposed three refinements
to Miller's algorithm over Weierstrass curves. Although their refinements,
which only reduce the total number of vertical lines in Miller's algorithm, do
not yield as efficient a computation as other optimizations, they can be
applied for computing \emph{both} the Weil and Tate pairings on \emph{all}
pairing-friendly elliptic curves. In this paper we extend Blake, Murty and
Xu's method and show how to eliminate all vertical lines in Miller's
algorithm during Weil/Tate pairing computation on \emph{general} elliptic
curves. Experimental results show that our algorithm is about 25% faster than
Miller's original algorithm.
Comment: 17 pages
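For reference, the following sketch shows the textbook double-and-add Miller loop for evaluating f_{m,P}(Q) on a short Weierstrass curve y^2 = x^3 + ax + b over a prime field, written so that the vertical-line evaluations the refinements target are explicit. It assumes no intermediate point degenerates (no division by zero modulo p) and implements only prime-field arithmetic; a full Tate pairing would additionally apply the final exponentiation, and a Weil pairing would combine two such loops. The helper names are ours, not the paper's.

def line_through(T, lam, Q, p):
    # value at Q of the line through T with slope lam (tangent or chord)
    return (Q[1] - T[1] - lam * (Q[0] - T[0])) % p

def vertical(R, Q, p):
    # value at Q of the vertical line x = x_R; these are the factors
    # that the refinements discussed above are designed to eliminate
    return (Q[0] - R[0]) % p

def next_point(T, lam, x_other, p):
    # chord-and-tangent formula: third intersection point, reflected
    x3 = (lam * lam - T[0] - x_other) % p
    return x3, (lam * (T[0] - x3) - T[1]) % p

def miller(P, Q, m, a, p):
    # evaluate f_{m,P}(Q) with the basic double-and-add Miller loop
    T, f = P, 1
    for bit in bin(m)[3:]:                     # bits of m below the leading one
        lam = (3 * T[0] * T[0] + a) * pow(2 * T[1], -1, p) % p
        f = f * f * line_through(T, lam, Q, p) % p
        T = next_point(T, lam, T[0], p)        # T <- 2T
        f = f * pow(vertical(T, Q, p), -1, p) % p
        if bit == "1":
            lam = (P[1] - T[1]) * pow(P[0] - T[0], -1, p) % p
            f = f * line_through(T, lam, Q, p) % p
            T = next_point(T, lam, P[0], p)    # T <- T + P
            f = f * pow(vertical(T, Q, p), -1, p) % p
    return f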
RBF-based meshless modeling of strain localization and fracture
This work attempts to contribute further knowledge and understanding in the discipline of computational science in general and numerical modeling of discontinuity
problems in particular. Of special interest is the numerical simulation of dynamic strain localization and fracture problems. The distinguishing feature of this study is the employment of RBF-network-based meshfree
methods, which differentiates the present approach from many other computational approaches to the numerical simulation of strain localization and fracture mechanics.
As a result, new meshfree methods based on RBF networks, namely moving RBF-based meshless methods, have been devised and developed for solving PDEs. Unlike conventional RBF methods, the present moving RBF is locally supported and yields sparse, banded resultant matrices with better condition numbers. The shape functions of the new method satisfy the Kronecker-delta property, which facilitates the imposition of essential boundary conditions.
In addition, the method is applicable to arbitrary domains and scattered nodes. To capture the characteristics of discontinuous problems, the method is further improved by special techniques, including coordinate mapping and
local partition-of-unity enrichment. Results of simulations of strain localization and fracture, presented in the latter chapters of the thesis, indicate that the proposed meshless methods successfully model such problems.
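The Kronecker-delta property mentioned above is easy to demonstrate for interpolation-type RBF shape functions: with A_ij = g_j(x_i), the functions N(x) = g(x) A^{-1} equal one at their own node and zero at every other node, which is what makes essential boundary conditions straightforward to impose. The small numpy sketch below uses a global multiquadric basis in 1-D purely as an illustration; it is not the locally supported moving RBF formulation developed in the thesis.

import numpy as np

nodes = np.array([0.0, 0.13, 0.31, 0.58, 0.72, 1.0])   # scattered 1-D nodes
c = 0.2                                                 # multiquadric shape parameter

def basis(x):
    # multiquadric RBFs g_j(x) = sqrt((x - x_j)^2 + c^2), one per node
    return np.sqrt((x - nodes) ** 2 + c ** 2)

A = np.array([basis(xi) for xi in nodes])   # A_ij = g_j(x_i), small and dense here
A_inv = np.linalg.inv(A)

def shape_functions(x):
    # N_j(x) = sum_k g_k(x) (A^{-1})_{kj}
    return basis(x) @ A_inv

# Kronecker-delta check: N_j(x_i) = 1 if i == j else 0 (up to round-off)
N_at_nodes = np.array([shape_functions(xi) for xi in nodes])
print(np.allclose(N_at_nodes, np.eye(len(nodes)), atol=1e-8))

# an interpolant built from these shape functions reproduces nodal values exactly
u_nodes = np.sin(2 * np.pi * nodes)
print(np.allclose(N_at_nodes @ u_nodes, u_nodes))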
Revisiting Unsupervised Relation Extraction
Unsupervised relation extraction (URE) extracts relations between named
entities from raw text without manually labelled data or existing knowledge
bases (KBs). URE methods can be categorised into generative and discriminative
approaches, which rely on either hand-crafted features or surface form.
However, we demonstrate that by using only named entities to induce relation
types, we can outperform existing methods on two popular datasets. We conduct a
comparison and evaluation of our findings with other URE techniques to
ascertain the important features in URE. We conclude that entity types provide
a strong inductive bias for URE.
Comment: 8 pages, 1 figure, 2 tables. Accepted at ACL 2020
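The entity-type inductive bias can be caricatured in a few lines: induce a relation label for each entity-pair mention purely from the ordered pair of argument entity types, with no access to the sentence text. The data and type inventory below are made up for illustration, and this is not the authors' actual model or evaluation protocol.

from collections import defaultdict

mentions = [
    {"head": "Barack Obama", "head_type": "PERSON", "tail": "Honolulu",  "tail_type": "LOCATION"},
    {"head": "Apple",        "head_type": "ORG",    "tail": "Cupertino", "tail_type": "LOCATION"},
    {"head": "Marie Curie",  "head_type": "PERSON", "tail": "Sorbonne",  "tail_type": "ORG"},
]

cluster_ids = defaultdict(lambda: len(cluster_ids))   # one induced relation per type pair

def induce_relation(mention):
    # the ordered (head type, tail type) pair fully determines the induced relation
    return cluster_ids[(mention["head_type"], mention["tail_type"])]

for m in mentions:
    print(m["head"], "->", m["tail"], "relation:", induce_relation(m))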