Improving Fine-grained Entity Typing with Entity Linking
Fine-grained entity typing is a challenging problem since it usually involves
a relatively large tag set and may require understanding the context of the
entity mention. In this paper, we use entity linking to help with the
fine-grained entity type classification process. We propose a deep neural model
that makes predictions based on both the context and the information obtained
from entity linking results. Experimental results on two commonly used datasets
demonstrate the effectiveness of our approach. On both datasets, it achieves
more than 5% absolute strict accuracy improvement over the state of the art.
Comment: EMNLP 201
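The core idea of combining a context model's type scores with evidence from the linked knowledge-base entity can be sketched as follows. All names, scores, and the fixed-boost combination rule are hypothetical illustrations; the paper's actual model learns this combination inside a deep neural network:

```python
def type_scores(context_scores, kb_types, kb_weight=0.3):
    """Combine context-based type scores with types from entity linking.

    context_scores: dict mapping type label -> score in [0, 1] from a
        context-only model (hypothetical values here).
    kb_types: set of type labels attached to the entity returned by
        the entity linker.
    A type supported by the linked entity gets a fixed additive boost,
    a simplified stand-in for the learned combination in the paper.
    """
    combined = {}
    for t, s in context_scores.items():
        combined[t] = min(1.0, s + (kb_weight if t in kb_types else 0.0))
    return combined

# Mention "Arsenal" in "Arsenal beat Chelsea 2-0": context alone is ambiguous,
# but the linked KB entity carries organization/sports-team types.
ctx = {"/organization": 0.55, "/organization/sports_team": 0.45, "/location": 0.40}
linked = {"/organization", "/organization/sports_team"}
preds = {t for t, s in type_scores(ctx, linked).items() if s >= 0.6}
```

Here the linking evidence pushes the fine-grained type `/organization/sports_team` above the decision threshold while the spurious `/location` reading stays below it.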
Learning to Correct Noisy Labels for Fine-Grained Entity Typing via Co-Prediction Prompt Tuning
Fine-grained entity typing (FET) is an essential task in natural language
processing that aims to assign semantic types to entities in text. However, FET
poses a major challenge known as the noisy labeling problem: current methods
rely on estimating the noise distribution to identify noisy labels, but are
confounded by diverse deviations in that distribution. To address this limitation,
we introduce Co-Prediction Prompt Tuning for noise correction in FET, which
leverages multiple prediction results to identify and correct noisy labels.
Specifically, we integrate prediction results to recall correct labels and
utilize a differentiated margin to identify inaccurate labels. Moreover, we
design an optimization objective concerning divergent co-predictions during
fine-tuning, ensuring that the model captures sufficient information and
maintains robustness in noise identification. Experimental results on three
widely used FET datasets demonstrate that our noise correction approach
significantly enhances the quality of various types of training samples,
including those annotated via distant supervision, ChatGPT, and
crowdsourcing.
Comment: Accepted by Findings of EMNLP 2023, 11 pages
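The recall-and-remove step above can be sketched as a simple rule over two prediction heads. The threshold values, dictionary API, and min/max aggregation are illustrative assumptions; the paper's differentiated margin and prompt-tuned co-prediction are more elaborate:

```python
def correct_labels(labels, preds_a, preds_b, recall_thr=0.7, drop_thr=0.3):
    """Correct a noisy label set using two heads' co-predictions.

    labels: the (possibly noisy) annotated set of type labels.
    preds_a, preds_b: dicts mapping type label -> confidence from two
        prediction heads (hypothetical stand-ins for the co-prediction
        setup in the paper).
    Recall a missing label when both heads are confident it applies;
    drop an annotated label when neither head supports it. The gap
    between recall_thr and drop_thr is a simplified stand-in for the
    paper's differentiated margin.
    """
    corrected = set(labels)
    for t in set(preds_a) | set(preds_b):
        a, b = preds_a.get(t, 0.0), preds_b.get(t, 0.0)
        if t not in corrected and min(a, b) >= recall_thr:
            corrected.add(t)       # both heads agree: recall the label
        elif t in corrected and max(a, b) <= drop_thr:
            corrected.discard(t)   # neither head supports it: treat as noise

    return corrected

# A distantly supervised example with one wrong and one missing label.
noisy = {"/person", "/person/politician"}
head_a = {"/person": 0.90, "/person/artist": 0.80, "/person/politician": 0.20}
head_b = {"/person": 0.95, "/person/artist": 0.75, "/person/politician": 0.25}
cleaned = correct_labels(noisy, head_a, head_b)
```

Requiring agreement from both heads before changing a label is what makes the rule conservative: a single overconfident head can neither inject nor delete a label on its own.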