What can we learn from Semantic Tagging?
We investigate the effects of multi-task learning using the recently
introduced task of semantic tagging. We employ semantic tagging as an auxiliary
task for three different NLP tasks: part-of-speech tagging, Universal
Dependency parsing, and Natural Language Inference. We compare full neural
network sharing, partial neural network sharing, and what we term the learning
what to share setting where negative transfer between tasks is less likely. Our
findings show considerable improvements for all tasks, particularly in the
learning what to share setting, which shows consistent gains across all tasks.
Comment: 9 pages with references and appendices. EMNLP 2018 camera-ready.
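To make the contrast concrete, the following is a minimal PyTorch-style sketch of the two sharing regimes the abstract compares: full (hard) parameter sharing versus a gated, learning-what-to-share combination. All module names, dimensions, and tag-set sizes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class HardSharedTagger(nn.Module):
    # Full sharing: a single BiLSTM encoder feeds both the main task head
    # (e.g. POS tagging) and the auxiliary semantic-tagging head.
    def __init__(self, vocab_size, emb=100, hid=200, n_main=17, n_sem=70):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.main_head = nn.Linear(2 * hid, n_main)
        self.sem_head = nn.Linear(2 * hid, n_sem)

    def forward(self, tokens):                  # tokens: (batch, seq_len) ids
        h, _ = self.encoder(self.embed(tokens))
        return self.main_head(h), self.sem_head(h)

class LearningWhatToShareTagger(nn.Module):
    # Each task keeps its own encoder; a learned sigmoid gate decides how much
    # of the auxiliary representation to mix into the main task, so unhelpful
    # auxiliary features can be suppressed (less negative transfer).
    def __init__(self, vocab_size, emb=100, hid=200, n_main=17, n_sem=70):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.main_enc = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.sem_enc = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.gate = nn.Linear(4 * hid, 2 * hid)
        self.main_head = nn.Linear(2 * hid, n_main)
        self.sem_head = nn.Linear(2 * hid, n_sem)

    def forward(self, tokens):
        e = self.embed(tokens)
        h_main, _ = self.main_enc(e)
        h_sem, _ = self.sem_enc(e)
        g = torch.sigmoid(self.gate(torch.cat([h_main, h_sem], dim=-1)))
        mixed = h_main + g * h_sem               # gated auxiliary injection
        return self.main_head(mixed), self.sem_head(h_sem)

Training would simply sum the cross-entropy losses of the two heads; the gate is one simple way to realise the learning-what-to-share idea.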
POS Tagging and its Applications for Mathematics
Content analysis of scientific publications is a nontrivial task, but a
useful and important one for scientific information services. In the Gutenberg
era it was a domain of human experts; in the digital age many machine-based
methods, e.g., graph analysis tools and machine-learning techniques, have been
developed for it. Natural Language Processing (NLP) is a powerful
machine-learning approach to semiautomatic speech and language processing,
which is also applicable to mathematics. The well-established methods of NLP
have to be adjusted for the special needs of mathematics, in particular for
handling mathematical formulae. We demonstrate a mathematics-aware
part-of-speech tagger and give a short overview of our adaptation of NLP
methods for mathematical publications. We show the use of the tools developed
for keyphrase extraction and classification in the database zbMATH.
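As a rough illustration of the kind of adjustment a mathematics-aware tagger needs (not the zbMATH pipeline itself), the sketch below masks inline TeX formulae before running an off-the-shelf tagger and restores them with a dedicated tag; the regex, placeholder token, and MATH tag are assumptions.

import re
import nltk  # assumes the 'punkt' and 'averaged_perceptron_tagger' data are installed

FORMULA = re.compile(r"\$[^$]+\$")  # naive: inline TeX between dollar signs

def math_aware_pos_tag(sentence):
    # Mask each formula with a placeholder so the ordinary tagger sees a
    # well-formed token, then restore the formula with a dedicated MATH tag.
    formulae = FORMULA.findall(sentence)
    masked = FORMULA.sub(" MATHFORMULA ", sentence)
    tagged, out, i = nltk.pos_tag(nltk.word_tokenize(masked)), [], 0
    for tok, tag in tagged:
        if tok == "MATHFORMULA":
            out.append((formulae[i], "MATH"))
            i += 1
        else:
            out.append((tok, tag))
    return out

# e.g. math_aware_pos_tag("Let $f(x) = x^2$ be a convex function.")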
Scientific Information Extraction with Semi-supervised Neural Tagging
This paper addresses the problem of extracting keyphrases from scientific
articles and categorizing them as corresponding to a task, process, or
material. We cast the problem as sequence tagging and introduce semi-supervised
methods to a neural tagging model, which builds on recent advances in named
entity recognition. Since annotated training data is scarce in this domain, we
introduce a graph-based semi-supervised algorithm together with a data
selection scheme to leverage unannotated articles. Both inductive and
transductive semi-supervised learning strategies outperform state-of-the-art
information extraction systems on SemEval 2017 Task 10 (ScienceIE).
Comment: accepted by EMNLP 2017.
Domain Adaptation for Statistical Classifiers
The most basic assumption used in statistical learning theory is that
training data and test data are drawn from the same underlying distribution.
Unfortunately, in many applications, the "in-domain" test data is drawn from a
distribution that is related, but not identical, to the "out-of-domain"
distribution of the training data. We consider the common case in which labeled
out-of-domain data is plentiful, but labeled in-domain data is scarce. We
introduce a statistical formulation of this problem in terms of a simple
mixture model and present an instantiation of this framework to maximum entropy
classifiers and their linear chain counterparts. We present efficient inference
algorithms for this special case based on the technique of conditional
expectation maximization. Our experimental results show that our approach leads
to improved performance on three real-world tasks on four different data sets
from the natural language processing domain.
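As a hedged illustration of such a mixture formulation (the paper's exact decomposition may differ), each labelled example i with domain d_i (in-domain or out-of-domain) can be modelled as drawn either from a domain-specific maximum entropy component or from a shared general component:

p(y_i \mid x_i, d_i) \;=\; \lambda_{d_i}\, p_{d_i}(y_i \mid x_i) \;+\; (1 - \lambda_{d_i})\, p_{g}(y_i \mid x_i),
\qquad
p_k(y \mid x) \;=\; \frac{\exp\!\big(\theta_k^{\top} f(x, y)\big)}{\sum_{y'} \exp\!\big(\theta_k^{\top} f(x, y')\big)}.

Here p_g is the shared general component, each p_k is a log-linear (maximum entropy) model with feature function f and weights \theta_k, and \lambda_{d_i} is a mixing weight. Conditional EM alternates between computing the posterior probability that each example came from its domain-specific component rather than the general one, and re-estimating the log-linear parameters and mixing weights; the linear-chain counterpart would use a sequence model (e.g. a linear-chain CRF) in place of each p_k.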