2,178 research outputs found
A refined version of general E-unification
Transformation-based systems for general E-unification were first investigated by Gallier and Snyder. Their system extends the well-known rules for syntactic unification by Lazy Paramodulation, thus coping with the equational theory. More recently, Dougherty and Johann improved on this method by giving a restriction of the Lazy Paramodulation inferences. In this paper, we show that their system can be further improved by a stronger restriction on the applicability of Lazy Paramodulation. It turns out that the framework of proof transformations provides an elegant and natural means for proving completeness of the inference system.
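For context, the "well-known rules for syntactic unification" that such systems extend are the classic delete/decompose/orient/eliminate transformations. The sketch below implements only that syntactic core in Python; the term representation and function names are illustrative, and the Lazy Paramodulation rule that handles the equational theory is not modeled here.

```python
# Minimal sketch of the syntactic transformation rules (delete, decompose,
# conflict, occurs check, eliminate).  Illustrative only: the Lazy
# Paramodulation extension for general E-unification is not shown.

class Var:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"?{self.name}"

class Fn:
    def __init__(self, sym, args=()):
        self.sym, self.args = sym, tuple(args)
    def __repr__(self):
        return f"{self.sym}({', '.join(map(repr, self.args))})" if self.args else self.sym

def occurs(v, t):
    """Occurs check: does variable v appear in term t?"""
    if isinstance(t, Var):
        return v.name == t.name
    return any(occurs(v, a) for a in t.args)

def substitute(t, subst):
    """Apply a substitution (dict: variable name -> term) to a term."""
    if isinstance(t, Var):
        return substitute(subst[t.name], subst) if t.name in subst else t
    return Fn(t.sym, [substitute(a, subst) for a in t.args])

def unify(equations):
    """Transform a list of equations (s, t) into a most general unifier, or None."""
    subst = {}
    work = list(equations)
    while work:
        s, t = work.pop()
        s, t = substitute(s, subst), substitute(t, subst)
        if isinstance(s, Var) and isinstance(t, Var) and s.name == t.name:
            continue                              # Delete: trivial equation
        elif isinstance(s, Var):
            if occurs(s, t):
                return None                       # Occurs check fails
            subst[s.name] = t                     # Eliminate: bind the variable
        elif isinstance(t, Var):
            work.append((t, s))                   # Orient: variable to the left
        elif s.sym == t.sym and len(s.args) == len(t.args):
            work.extend(zip(s.args, t.args))      # Decompose into argument equations
        else:
            return None                           # Conflict: distinct function symbols
    return subst

# Example: unifying f(x, g(a)) with f(g(y), g(y)) yields {x -> g(a), y -> a}
a, x, y = Fn("a"), Var("x"), Var("y")
print(unify([(Fn("f", [x, Fn("g", [a])]), Fn("f", [Fn("g", [y]), Fn("g", [y])]))]))
```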
Towards Neural Machine Translation with Latent Tree Attention
Building models that take advantage of the hierarchical structure of language
without a priori annotation is a longstanding goal in natural language
processing. We introduce such a model for the task of machine translation,
pairing a recurrent neural network grammar encoder with a novel attentional
RNNG decoder and applying policy gradient reinforcement learning to induce
unsupervised tree structures on both the source and target. When trained on
character-level datasets with no explicit segmentation or parse annotation, the
model learns a plausible segmentation and shallow parse, obtaining performance
close to an attentional baseline.
Comment: Presented at SPNLP 2017
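The policy-gradient component can be illustrated with a generic REINFORCE-style objective: discrete structure decisions (e.g. the parser's shift/reduce actions) are sampled from a policy, and the downstream translation loss is fed back as a reward. The PyTorch sketch below is a hypothetical illustration of that idea, not the paper's actual RNNG encoder/decoder; the module and function names (StructurePolicy, translation_loss_fn) are assumptions, and the usual variance-reducing baseline is omitted.

```python
# REINFORCE-style sketch for training discrete structure decisions with a
# downstream loss as reward.  Illustrative only; names are hypothetical and
# no baseline/variance reduction is included.
import torch
import torch.nn as nn

class StructurePolicy(nn.Module):
    """Scores the discrete structure actions (e.g. shift/reduce) at each step."""
    def __init__(self, hidden_size, num_actions):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, num_actions)

    def forward(self, states):
        # states: (num_steps, hidden_size) -> one categorical per step
        return torch.distributions.Categorical(logits=self.scorer(states))

def reinforce_step(policy, states, translation_loss_fn, optimizer):
    """Sample structure actions, compute the downstream loss, and update."""
    dist = policy(states)
    actions = dist.sample()                    # sampled discrete structure
    log_probs = dist.log_prob(actions).sum()   # log-probability of the sample

    loss = translation_loss_fn(actions)        # differentiable downstream loss
    reward = -loss.detach()                    # lower loss => higher reward

    # Downstream loss plus REINFORCE term for the non-differentiable decisions.
    total = loss - reward * log_probs
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return loss.item()
```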
Learning when to skim and when to read
Many recent advances in deep learning for natural language processing have
come at increasing computational cost, but the power of these state-of-the-art
models is not needed for every example in a dataset. We demonstrate two
approaches to reducing unnecessary computation in cases where a fast but weak
baseline classifier and a stronger, slower model are both available. Applying an
AUC-based metric to the task of sentiment classification, we find significant
efficiency gains with both a probability-threshold method for reducing
computational cost and one that uses a secondary decision network.
Comment: 8 pages (4 article, 1 references, 3 appendix), 11 figures, 3 tables,
published at ACL 2017 workshop Repl4NLP
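The probability-threshold idea can be sketched simply: run the cheap model first and only invoke the expensive model when the cheap model's confidence falls below a threshold. The snippet below is a generic illustration with hypothetical model objects and threshold value, not the specific networks or metric from the paper.

```python
# Generic sketch of probability-threshold routing between a fast, weak model
# and a slow, strong one.  Model objects and threshold are placeholders.
def classify_with_routing(text, fast_model, slow_model, threshold=0.9):
    """Return (label, used_slow_model) for a single example."""
    probs = fast_model.predict_proba(text)    # e.g. a bag-of-words classifier
    confidence = max(probs.values())
    if confidence >= threshold:
        # Fast model is confident enough; skip the expensive model.
        return max(probs, key=probs.get), False
    # Otherwise fall back to the stronger, slower model (e.g. an LSTM).
    return slow_model.predict(text), True

# Sweeping the threshold trades accuracy against average compute: a higher
# threshold routes more examples to the slow model.
```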