Entity Structure Within and Throughout: Modeling Mention Dependencies for Document-Level Relation Extraction
Entities, as the essential elements in relation extraction tasks, exhibit
a certain structure. In this work, we formulate such structure as distinctive
dependencies between mention pairs. We then propose SSAN, which incorporates
these structural dependencies within the standard self-attention mechanism and
throughout the overall encoding stage. Specifically, we design two alternative
transformation modules inside each self-attention building block to produce
attentive biases so as to adaptively regularize its attention flow. Our
experiments demonstrate the usefulness of the proposed entity structure and the
effectiveness of SSAN. It significantly outperforms competitive baselines,
achieving new state-of-the-art results on three popular document-level relation
extraction datasets. We further provide ablation and visualization to show how
the entity structure guides the model for better relation extraction. Our code
is publicly available.
Comment: Accepted to AAAI 2021
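The abstract above describes adding structure-dependent attentive biases to the attention logits inside each self-attention block. A minimal single-head sketch of that idea follows; the bias here is simplified to a scalar lookup per dependency type (the paper's actual transformation modules are richer), and all names are illustrative:

```python
import numpy as np

def structured_self_attention(x, Wq, Wk, Wv, dep_ids, bias_table):
    """Single-head self-attention with a structural bias on the logits,
    in the spirit of the attentive biases described above.

    x          : (n, d) token representations
    Wq, Wk, Wv : (d, d) projection matrices
    dep_ids    : (n, n) integer matrix; dep_ids[i, j] is the dependency
                 type between mentions/tokens i and j (illustrative)
    bias_table : (num_dep_types,) learned scalar bias per dependency type
                 (a simplification of the paper's transformation modules)
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.shape[-1]
    logits = (q @ k.T) / np.sqrt(d)
    # Inject the structural dependency bias to regularize the attention flow.
    logits = logits + bias_table[dep_ids]
    # Numerically stable softmax over each row.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because the bias is added before the softmax, a dependency type with a large negative bias effectively suppresses attention between the corresponding token pairs, while a positive bias encourages it.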
Denoising Relation Extraction from Document-level Distant Supervision
Distant supervision (DS) has been widely used to generate auto-labeled data
for sentence-level relation extraction (RE), which improves RE performance.
However, the existing success of DS cannot be directly transferred to the more
challenging document-level relation extraction (DocRE), since the inherent
noise in DS may be even multiplied in document level and significantly harm the
performance of RE. To address this challenge, we propose a novel pre-trained
model for DocRE, which denoises the document-level DS data via multiple
pre-training tasks. Experimental results on the large-scale DocRE benchmark
show that our model can capture useful information from noisy DS data and
achieve promising results.
Comment: EMNLP 2020 short paper