Interactive Contrastive Learning for Self-supervised Entity Alignment
Self-supervised entity alignment (EA) aims to link equivalent entities across
different knowledge graphs (KGs) without seed alignments. The current SOTA
self-supervised EA method draws inspiration from contrastive learning,
originally designed in computer vision based on instance discrimination and
contrastive loss, and suffers from two shortcomings. Firstly, it puts
unidirectional emphasis on pushing sampled negative entities far away rather
than pulling positively aligned pairs close, as is done in the well-established
supervised EA. Secondly, KGs contain rich side information (e.g., entity
descriptions), and how to leverage such information effectively has not been
adequately investigated in self-supervised EA. In this paper, we propose an
interactive contrastive learning model for self-supervised EA. The model not
only encodes the structure and semantics of entities (including entity name,
entity description, and entity neighborhood), but also conducts cross-KG
contrastive learning by building pseudo-aligned entity pairs. Experimental
results show that our approach outperforms previous best self-supervised
results by a large margin (over 9% average improvement) and performs on par
with previous SOTA supervised counterparts, demonstrating the effectiveness of
interactive contrastive learning for self-supervised EA.
Comment: Accepted by CIKM 202
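The bidirectional idea above (pulling pseudo-aligned pairs close while pushing batch negatives away, in both KG directions) can be sketched with a symmetric InfoNCE-style loss. This is a minimal illustrative sketch, not the paper's exact objective; the function name, temperature value, and loss form are assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Symmetric InfoNCE-style contrastive loss (illustrative sketch).

    Each anchor entity from KG1 is pulled toward its pseudo-aligned
    positive from KG2, while all other entities in the batch act as
    negatives -- and vice versa for the KG2 -> KG1 direction.
    """
    # L2-normalise embeddings so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature      # (batch, batch) similarity matrix
    idx = np.arange(len(a))             # i-th anchor matches i-th positive
    # log-softmax over rows (KG1 -> KG2) and columns (KG2 -> KG1)
    log_sm_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    # average the two directional cross-entropy terms on the diagonal
    return -(log_sm_rows[idx, idx].mean() + log_sm_cols[idx, idx].mean()) / 2
```

With correctly paired inputs the diagonal dominates and the loss is near zero; with mismatched pairs it rises toward log of the batch size, which is what drives the pseudo-aligned pairs together during training.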
Visual Pivoting for (Unsupervised) Entity Alignment
This work studies the use of visual semantic representations to align
entities in heterogeneous knowledge graphs (KGs). Images are natural components
of many existing KGs. By combining visual knowledge with other auxiliary
information, we show that the proposed new approach, EVA, creates a holistic
entity representation that provides strong signals for cross-graph entity
alignment. Moreover, previous entity alignment methods require human-labelled
seed alignments, limiting their applicability. EVA provides a completely
unsupervised solution by leveraging the visual similarity of entities to create
an initial seed dictionary (visual pivots). Experiments on benchmark data sets
DBP15k and DWY15k show that EVA offers state-of-the-art performance on both
monolingual and cross-lingual entity alignment tasks. Furthermore, we discover
that images are particularly useful to align long-tail KG entities, which
inherently lack the structural contexts necessary for capturing the
correspondences.
Comment: To appear at AAAI-202
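The visual-pivot idea above can be sketched as pairing entities whose image embeddings are mutual nearest neighbours across the two KGs, yielding a high-precision unsupervised seed dictionary. This is a sketch of the general technique, not EVA's exact procedure; the function name and mutual-nearest-neighbour criterion are assumptions.

```python
import numpy as np

def visual_pivots(img_emb_kg1, img_emb_kg2):
    """Build an unsupervised seed dictionary from visual similarity.

    Entities from the two KGs are paired when their image embeddings are
    mutual nearest neighbours under cosine similarity (illustrative
    sketch of the visual-pivot idea).
    """
    a = img_emb_kg1 / np.linalg.norm(img_emb_kg1, axis=1, keepdims=True)
    b = img_emb_kg2 / np.linalg.norm(img_emb_kg2, axis=1, keepdims=True)
    sim = a @ b.T                    # cosine similarity matrix
    best_for_1 = sim.argmax(axis=1)  # each KG1 entity's closest KG2 entity
    best_for_2 = sim.argmax(axis=0)  # each KG2 entity's closest KG1 entity
    # keep only mutually-nearest pairs as high-precision pseudo seeds
    return [(i, int(j)) for i, j in enumerate(best_for_1)
            if best_for_2[j] == i]
```

Requiring the match to hold in both directions trades recall for precision, which matters because these pseudo seeds replace the human-labelled alignments that supervised methods depend on.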