Visual Pivoting for (Unsupervised) Entity Alignment
This work studies the use of visual semantic representations to align
entities in heterogeneous knowledge graphs (KGs). Images are natural components
of many existing KGs. By combining visual knowledge with other auxiliary
information, we show that the proposed new approach, EVA, creates a holistic
entity representation that provides strong signals for cross-graph entity
alignment. Moreover, previous entity alignment methods require human-labelled
seed alignments, which restricts their applicability. EVA provides a completely
unsupervised solution by leveraging the visual similarity of entities to create
an initial seed dictionary (visual pivots). Experiments on benchmark data sets
DBP15k and DWY15k show that EVA offers state-of-the-art performance on both
monolingual and cross-lingual entity alignment tasks. Furthermore, we find
that images are particularly useful for aligning long-tail KG entities, which
inherently lack the structural context needed to capture correspondences.
Comment: To appear at AAAI-202
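
To make the visual-pivot step concrete, here is a minimal Python sketch (not the paper's code): entities from the two KGs are matched by cosine similarity of precomputed image embeddings, and mutual nearest neighbors are kept as the unsupervised seed dictionary. The function and array names, and the assumption that one image embedding per entity has already been extracted (e.g., from a pretrained CNN), are illustrative.

import numpy as np

def visual_pivots(img_emb_kg1: np.ndarray, img_emb_kg2: np.ndarray):
    """Seed pairs (i, j) that are mutual nearest neighbors under
    cosine similarity of their image embeddings."""
    # L2-normalize so dot products equal cosine similarities.
    a = img_emb_kg1 / np.linalg.norm(img_emb_kg1, axis=1, keepdims=True)
    b = img_emb_kg2 / np.linalg.norm(img_emb_kg2, axis=1, keepdims=True)
    sim = a @ b.T                    # (n1, n2) cross-KG similarity matrix
    best_for_1 = sim.argmax(axis=1)  # best KG2 candidate for each KG1 entity
    best_for_2 = sim.argmax(axis=0)  # best KG1 candidate for each KG2 entity
    # Keep only pairs that choose each other (mutual nearest neighbors).
    return [(i, j) for i, j in enumerate(best_for_1) if best_for_2[j] == i]

The mutual-nearest-neighbor filter is one simple way to keep such a seed dictionary high-precision, which matters because these pivots bootstrap the rest of the alignment.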
MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid
As an important variant of entity alignment (EA), multi-modal entity
alignment (MMEA) aims to discover identical entities across different knowledge
graphs (KGs) with relevant images attached. We observe that current MMEA
algorithms all globally adopt KG-level modality fusion strategies for
multi-modal entity representation but ignore the variation in modality
preferences across individual entities, which hurts robustness to potential
noise within modalities (e.g., blurry images and noisy relations). In this paper, we
present MEAformer, a multi-modal entity alignment transformer approach for meta
modality hybrid, which dynamically predicts the mutual correlation coefficients
among modalities for entity-level feature aggregation. A modal-aware hard
entity replay strategy is further proposed to address vague entity details.
Experimental results show that our model not only achieves SOTA performance in
multiple training scenarios, including supervised, unsupervised, iterative, and
low-resource settings, but also has a comparable number of parameters, promising speed,
and good interpretability. Our code and data are available at
https://github.com/zjukg/MEAformer.
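
As a rough illustration of entity-level (rather than KG-level) modality fusion, the sketch below lets a small self-attention layer look at one entity's modality features, predict per-entity fusion weights, and aggregate the features with those weights. This is a hedged approximation of the idea, not the authors' MEAformer code; the module names, dimensions, and the four-modality assumption (graph structure, relation, attribute, image) are illustrative.

import torch
import torch.nn as nn

class EntityLevelModalityFusion(nn.Module):
    def __init__(self, dim: int, num_heads: int = 2):
        super().__init__()
        # Self-attention across the modality axis of a single entity.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.score = nn.Linear(dim, 1)  # per-modality importance logit

    def forward(self, modal_feats: torch.Tensor) -> torch.Tensor:
        # modal_feats: (batch, num_modalities, dim), one row per modality.
        h, _ = self.attn(modal_feats, modal_feats, modal_feats)
        # Per-entity weights over modalities, so e.g. a blurry image can be
        # down-weighted for one entity without affecting the others.
        w = torch.softmax(self.score(h).squeeze(-1), dim=-1)   # (batch, M)
        return (w.unsqueeze(-1) * modal_feats).sum(dim=1)      # (batch, dim)

# Usage with four hypothetical modalities and 256-d features:
# fused = EntityLevelModalityFusion(256)(torch.randn(32, 4, 256))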
AutoAlign: Fully Automatic and Effective Knowledge Graph Alignment enabled by Large Language Models
The task of entity alignment between knowledge graphs (KGs) aims to identify
every pair of entities from two different KGs that represent the same entity.
Many machine learning-based methods have been proposed for this task. However,
to the best of our knowledge, existing methods all require manually crafted seed
alignments, which are expensive to obtain. In this paper, we propose the first
fully automatic alignment method named AutoAlign, which does not require any
manually crafted seed alignments. Specifically, for predicate embeddings,
AutoAlign constructs a predicate-proximity-graph with the help of large
language models to automatically capture the similarity between predicates
across two KGs. For entity embeddings, AutoAlign first computes the entity
embeddings of each KG independently using TransE, and then shifts the two KGs'
entity embeddings into the same vector space by computing the similarity
between entities based on their attributes. Thus, both predicate alignment and
entity alignment can be done without manually crafted seed alignments.
AutoAlign is not only fully automatic, but also highly effective. Experiments
using real-world KGs show that AutoAlign improves the performance of entity
alignment significantly compared to state-of-the-art methods.
Comment: 14 pages, 5 figures, 4 tables. arXiv admin note: substantial text overlap with arXiv:2210.0854
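
The abstract's embedding-shift step can be pictured with the following hedged Python sketch: entities whose attribute sets overlap almost completely serve as pseudo-anchors, and an orthogonal Procrustes map moves KG1's independently trained TransE embeddings into KG2's vector space. This is one plausible reading of "shifting by attribute similarity", not necessarily AutoAlign's exact mechanism, and every name and threshold here is illustrative.

import numpy as np

def attribute_jaccard(attrs1: set, attrs2: set) -> float:
    # Jaccard overlap between two entities' attribute sets.
    return len(attrs1 & attrs2) / max(len(attrs1 | attrs2), 1)

def shift_embeddings(emb1, emb2, attrs1, attrs2, threshold=0.8):
    # Pseudo-anchors: cross-KG entity pairs with near-identical attributes.
    anchors = [(i, j) for i, a in enumerate(attrs1)
                      for j, b in enumerate(attrs2)
                      if attribute_jaccard(a, b) >= threshold]
    x = np.stack([emb1[i] for i, _ in anchors])
    y = np.stack([emb2[j] for _, j in anchors])
    # Orthogonal Procrustes: W = argmin ||XW - Y||_F with W orthogonal,
    # solved in closed form from the SVD of X^T Y.
    u, _, vt = np.linalg.svd(x.T @ y)
    return emb1 @ (u @ vt)  # KG1 embeddings expressed in KG2's space

Because the map is orthogonal, it rotates KG1's embedding space without distorting distances, so the TransE geometry learned within each KG is preserved.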