Improving Neural Relation Extraction with Positive and Unlabeled Learning
We present a novel approach to improve the performance of distant supervision
relation extraction with Positive and Unlabeled (PU) Learning. This approach
first applies reinforcement learning to decide whether a sentence is positive
with respect to a given relation, and then constructs positive and unlabeled
bags. In
contrast to most previous studies, which mainly use selected positive instances
only, we make full use of unlabeled instances and propose two new
representations for positive and unlabeled bags. These two representations are
then combined in an appropriate way to make bag-level prediction. Experimental
results on a widely used real-world dataset demonstrate that this new approach
indeed achieves significant and consistent improvements as compared to several
competitive baselines.
Comment: 8 pages, AAAI-202
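The bag-level prediction step described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: the mean-pooling encoder, the 128-dimensional embeddings, the concatenation scheme, and the weight vector `W` are all hypothetical stand-ins for the learned bag representations and their combination.

```python
import numpy as np

rng = np.random.default_rng(0)

def bag_representation(sentence_vecs):
    """Mean-pool sentence vectors into one bag vector
    (a stand-in for a learned bag encoder)."""
    return sentence_vecs.mean(axis=0)

# Hypothetical bags: each row is a 128-dim sentence embedding.
positive_bag = rng.normal(size=(4, 128))   # sentences judged positive by the RL selector
unlabeled_bag = rng.normal(size=(6, 128))  # remaining unlabeled sentences

# Combine the two bag representations (here by concatenation),
# then score the pair for bag-level relation prediction.
combined = np.concatenate([bag_representation(positive_bag),
                           bag_representation(unlabeled_bag)])
W = rng.normal(size=(combined.shape[0],))  # illustrative classifier weights
score = 1.0 / (1.0 + np.exp(-combined @ W))  # probability-like relation score
print(combined.shape, score)
```

The point of the sketch is only the data flow: both the selected positive instances and the unlabeled ones contribute a representation, and the two are merged before a single bag-level decision is made.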
Improving Neural Relation Extraction with Implicit Mutual Relations
Relation extraction (RE) aims at extracting the relation between two entities
from the text corpora. It is a crucial task for Knowledge Graph (KG)
construction. Most existing methods predict the relation between an entity pair
by learning the relation from the training sentences, which contain the
targeted entity pair. In contrast to existing distant supervision approaches
that suffer from insufficient training corpora to extract relations, our
proposal of mining implicit mutual relations from massive unlabeled corpora
transfers the semantic information of entity pairs into the RE model, which is
more expressive and semantically plausible. After constructing an entity
proximity graph based on the implicit mutual relations, we preserve the
semantic relations of entity pairs via embedding each vertex of the graph into
a low-dimensional space. As a result, we can easily and flexibly integrate the
implicit mutual relations and other entity information, such as entity types,
into the existing RE methods.
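The graph-embedding step described above can be illustrated with a toy example. Everything here is an assumption for illustration: the 5-entity co-occurrence matrix is fabricated, and truncated SVD is used as a simple stand-in for whatever vertex-embedding method the framework actually employs.

```python
import numpy as np

# Hypothetical entity proximity graph over 5 entities: A[i, j] counts
# how often entities i and j co-occur in an unlabeled corpus.
A = np.array([[0, 3, 1, 0, 0],
              [3, 0, 2, 0, 1],
              [1, 2, 0, 4, 0],
              [0, 0, 4, 0, 2],
              [0, 1, 0, 2, 0]], dtype=float)

# Embed each vertex into a low-dimensional space via truncated SVD
# (an illustrative substitute for the paper's embedding step).
k = 2
U, s, _ = np.linalg.svd(A)
entity_emb = U[:, :k] * np.sqrt(s[:k])  # one k-dim vector per entity

# These vectors can then be concatenated with other entity features
# (e.g. entity types) as additional input to an existing RE model.
print(entity_emb.shape)  # (5, 2)
```

The low-dimensional vectors preserve which entities sit near each other in the proximity graph, which is what lets them be plugged into existing RE architectures as extra features.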
Our experimental results on the New York Times and Google Distant
Supervision datasets suggest that our proposed neural RE framework provides a
promising improvement for the RE task and significantly outperforms the
state-of-the-art methods. Moreover, the component for mining implicit mutual
relations is flexible enough to significantly improve the performance of both
CNN-based and RNN-based RE models.
Comment: 12 page
Effective Piecewise CNN with attention mechanism for distant supervision on relation extraction task
- …