Adversarial training for multi-context joint entity and relation extraction
Adversarial training (AT) is a regularization method that can improve the
robustness of neural network models by adding small perturbations to the
training data. We show how to use AT for the tasks of entity recognition
and relation extraction. In particular, we demonstrate that applying AT to a
general-purpose baseline model for jointly extracting entities and relations
improves state-of-the-art effectiveness on several datasets in
different contexts (i.e., news, biomedical, and real estate data) and for
different languages (English and Dutch).
Comment: EMNLP 2018, code is available at
https://github.com/bekou/multihead_joint_entity_relation_extraction
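A minimal sketch of the embedding-level AT recipe this abstract describes, assuming a PyTorch model that maps embeddings to logits; the fast-gradient perturbation, the `epsilon` value, and the interfaces are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that):

```python
import torch

def adversarial_training_loss(model, embeddings, labels, loss_fn, epsilon=0.01):
    """One AT step: train on clean inputs plus worst-case perturbed inputs.

    Assumes `model` maps an embedding tensor to logits and `loss_fn`
    compares logits with `labels` (e.g. cross-entropy). Illustrative sketch.
    """
    # Loss on the clean embeddings; we need its gradient w.r.t. the inputs.
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeddings), labels)

    # The input gradient points in the direction that increases the loss most.
    (grad,) = torch.autograd.grad(clean_loss, embeddings)

    # Fast-gradient perturbation with a bounded L2 norm (the "small perturbation").
    perturbation = epsilon * grad / (grad.norm() + 1e-12)

    # Adversarial loss: same labels, perturbed embeddings (no gradient flows
    # through the perturbation itself).
    adv_loss = loss_fn(model(embeddings + perturbation.detach()), labels)
    return clean_loss + adv_loss
```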
Modeling relation paths for knowledge base completion via joint adversarial training
Knowledge Base Completion (KBC), which aims at determining the missing
relations between entity pairs, has received increasing attention in recent
years. Most existing KBC methods focus on either embedding the Knowledge Base
(KB) into a specific semantic space or leveraging the joint probability of
Random Walks (RWs) on multi-hop paths. Only a few unified models adequately
take both semantic and path-related features into consideration. In this
paper, we propose a novel method to explore the intrinsic relationship between
single relations (i.e., 1-hop paths) and multi-hop paths between paired
entities. We use Hierarchical Attention Networks (HANs) to select important
relations in multi-hop paths and encode them into low-dimensional vectors. By
treating relations and multi-hop paths as two different input sources, we use a
feature extractor, shared by two downstream components (i.e., a relation
classifier and a source discriminator), to capture shared/similar information
between them. Through joint adversarial training, we encourage our model to
extract features from multi-hop paths that are representative of relation
completion. We apply the trained model (excluding the source discriminator) to
several large-scale KBs for relation completion. Experimental results show that
our method outperforms existing path information-based approaches. Since each
sub-module of our model can be well interpreted, our model can be applied to a
large number of relation learning tasks.
Comment: Accepted by Knowledge-Based Systems
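The adversarial piece of this setup, a shared feature extractor trained against a source discriminator, is commonly implemented with a gradient-reversal layer; the sketch below assumes that construction (the paper may use a different adversarial objective), with `extractor`, `classifier`, and `discriminator` as illustrative module names:

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient drives the extractor to fool the discriminator.
        return -ctx.lambd * grad_output, None

def joint_adversarial_loss(extractor, classifier, discriminator,
                           relation_inputs, path_inputs, relation_labels,
                           lambd=1.0):
    # The shared extractor encodes both input sources.
    h_rel = extractor(relation_inputs)   # single relations (1-hop paths)
    h_path = extractor(path_inputs)      # encoded multi-hop paths

    # Relation classifier: supervised loss on the shared features.
    cls_loss = F.cross_entropy(classifier(h_rel), relation_labels)

    # Source discriminator: relations -> 0, paths -> 1. The reversed gradient
    # pushes the extractor toward source-indistinguishable (shared) features.
    h_all = torch.cat([GradReverse.apply(h_rel, lambd),
                       GradReverse.apply(h_path, lambd)], dim=0)
    sources = torch.cat([torch.zeros(h_rel.size(0), dtype=torch.long),
                         torch.ones(h_path.size(0), dtype=torch.long)])
    disc_loss = F.cross_entropy(discriminator(h_all), sources)

    return cls_loss + disc_loss
```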
Adversarial Sets for Regularising Neural Link Predictors
In adversarial training, a set of models learn together by pursuing competing
goals, usually defined on single data instances. However, in relational
learning and other non-i.i.d. domains, goals can also be defined over sets of
instances. For example, a link predictor for the is-a relation needs to be
consistent with the transitivity property: if is-a(x_1, x_2) and is-a(x_2, x_3)
hold, is-a(x_1, x_3) needs to hold as well. Here we use such assumptions for
deriving an inconsistency loss, measuring the degree to which the model
violates the assumptions on an adversarially-generated set of examples. The
training objective is defined as a minimax problem, where an adversary finds
the most offending adversarial examples by maximising the inconsistency loss,
and the model is trained by jointly minimising a supervised loss and the
inconsistency loss on the adversarial examples. This yields the first method
that can use function-free Horn clauses (as in Datalog) to regularise any
neural link predictor, with complexity independent of the domain size. We show
that for several link prediction models, the optimisation problem faced by the
adversary has efficient closed-form solutions. Experiments on link prediction
benchmarks indicate that given suitable prior knowledge, our method can
significantly improve neural link predictors on all relevant metrics.
Comment: Proceedings of the 33rd Conference on Uncertainty in Artificial
Intelligence (UAI), 2017
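As a hedged illustration of the inconsistency loss for the transitivity clause above, assuming a differentiable scoring function score(s, r, o) over embeddings (DistMult is used here purely as an example scorer; names and the ascent step are assumptions, not the paper's exact formulation):

```python
import torch

def distmult_score(s, r, o):
    # Example scorer (illustrative choice): higher means "more plausibly true".
    # s, r, o are 1-D embedding vectors of equal dimension.
    return (s * r * o).sum(-1)

def transitivity_inconsistency(score, x1, x2, x3, r_isa):
    """Degree to which is-a(x1,x2) and is-a(x2,x3) imply is-a(x1,x3) is
    violated at embeddings x1, x2, x3 (zero when the clause is satisfied)."""
    body = torch.min(score(x1, r_isa, x2), score(x2, r_isa, x3))
    head = score(x1, r_isa, x3)
    return torch.relu(body - head)

def adversary_step(score, x1, x2, x3, r_isa, lr=0.1):
    """Adversary's move in the minimax game: gradient *ascent* on the
    inconsistency loss over the free embeddings x1, x2, x3 (leaf tensors
    created with requires_grad=True) to find the most offending examples."""
    loss = transitivity_inconsistency(score, x1, x2, x3, r_isa)
    grads = torch.autograd.grad(loss, (x1, x2, x3))
    return tuple(x + lr * g for x, g in zip((x1, x2, x3), grads))

# The link predictor is then trained to minimise its supervised loss plus the
# inconsistency loss evaluated on the resulting adversarial set.
```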
Robust Multilingual Part-of-Speech Tagging via Adversarial Training
Adversarial training (AT) is a powerful regularization method for neural
networks, aiming to achieve robustness to input perturbations. Yet, the
specific effects of the robustness obtained from AT are still unclear in the
context of natural language processing. In this paper, we propose and analyze a
neural POS tagging model that exploits AT. In our experiments on the Penn
Treebank WSJ corpus and the Universal Dependencies (UD) dataset (27 languages),
we find that AT not only improves the overall tagging accuracy, but also 1)
effectively prevents over-fitting in low-resource languages and 2) boosts
tagging accuracy for rare/unseen words. We also demonstrate that 3) the
improved tagging performance from AT carries over to the downstream task of
dependency parsing, and that 4) AT helps the model learn cleaner word
representations.
5) The proposed AT model is generally effective in different sequence labeling
tasks. These positive results motivate further use of AT for natural language
tasks.
Comment: NAACL 2018
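Applying the same fast-gradient AT idea to sequence labeling is a small variation on the sketch given earlier; in this hedged version (shapes, `epsilon`, and the `tagger` interface are assumptions, not the paper's exact configuration), one perturbation is computed per sentence, norm-bounded over the whole embedding matrix:

```python
import torch
import torch.nn.functional as F

def at_tagging_loss(tagger, sent_embeddings, tags, epsilon=0.05):
    """AT loss for one sentence (illustrative sketch).

    sent_embeddings: (seq_len, emb_dim) word embeddings;
    tags: (seq_len,) gold tag ids;
    tagger: maps embeddings to (seq_len, num_tags) logits.
    """
    sent_embeddings = sent_embeddings.detach().requires_grad_(True)
    clean_loss = F.cross_entropy(tagger(sent_embeddings), tags)

    # One fast-gradient perturbation, bounded in L2 norm over the whole
    # sentence matrix, so all tokens are perturbed jointly.
    (grad,) = torch.autograd.grad(clean_loss, sent_embeddings)
    d = epsilon * grad / (grad.norm() + 1e-12)

    adv_loss = F.cross_entropy(tagger(sent_embeddings + d.detach()), tags)
    return clean_loss + adv_loss
```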