On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods
In this work, we take a closer look at the evaluation of two families of
methods for enriching information from knowledge graphs: Link Prediction and
Entity Alignment. In the current experimental setting, multiple different
scores are employed to assess different aspects of model performance. We
analyze the informativeness of these evaluation measures and identify several
shortcomings. In particular, we demonstrate that none of the existing scores
can be meaningfully compared across different datasets. Moreover, we
demonstrate that for the Entity Alignment task, merely varying the size of
the test set changes the commonly used metrics for one and the same model.
We show that this leads to various problems in the interpretation of results,
which may support misleading conclusions. Therefore, we propose adjustments to
the evaluation and demonstrate empirically how this supports a fair,
comparable, and interpretable assessment of model performance. Our code is
available at https://github.com/mberr/rank-based-evaluation
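The comparability problem the abstract describes can be made concrete with a small sketch. The function below (an illustration, not the authors' released code; the name `rank_metrics` and the exact normalization are assumptions) computes the usual rank-based scores and additionally divides the mean rank by the expected rank of a random scorer, so that the result no longer depends trivially on the number of candidates:

```python
import numpy as np

def rank_metrics(ranks, num_candidates):
    """Compute common rank-based scores from a list of 1-based ranks.

    `ranks` holds, for each test case, the rank of the correct answer
    among `num_candidates` scored candidates.
    """
    ranks = np.asarray(ranks, dtype=float)
    mr = ranks.mean()                  # mean rank (lower is better)
    mrr = (1.0 / ranks).mean()         # mean reciprocal rank
    hits_at_10 = (ranks <= 10).mean()  # Hits@10
    # Size-adjusted mean rank: divide by the expected rank of a uniformly
    # random scorer, (num_candidates + 1) / 2. Unlike the raw mean rank,
    # this value is comparable across datasets of different size, and
    # values below 1 indicate better-than-random performance.
    amr = mr / ((num_candidates + 1) / 2.0)
    return {"MR": mr, "MRR": mrr, "Hits@10": hits_at_10, "AMR": amr}
```

For example, the same mean rank of 50 means near-random performance with 100 candidates (AMR close to 1) but strong performance with 100,000 candidates (AMR close to 0), which is exactly the kind of distinction the raw score hides.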
Coordinated Reasoning for Cross-Lingual Knowledge Graph Alignment
Existing entity alignment methods mainly differ in how they encode the
knowledge graph, but they typically share the same decoding method, which
independently chooses the locally optimal match for each source entity. This
decoding method not only causes the "many-to-one" problem but also neglects
the coordinated nature of the task: each alignment decision may be highly
correlated with the others. In this paper, we introduce two
coordinated reasoning methods: an Easy-to-Hard decoding strategy and a joint
entity alignment algorithm. Specifically, the Easy-to-Hard strategy first
retrieves the model-confident alignments from the predicted results and then
incorporates them as additional knowledge to resolve the remaining
model-uncertain alignments. To achieve this, we further propose an enhanced
alignment model that is built on the current state-of-the-art baseline. In
addition, to address the many-to-one problem, we propose to jointly predict
entity alignments so that the one-to-one constraint can be naturally
incorporated into the alignment prediction. Experimental results show that our
model achieves the state-of-the-art performance and our reasoning methods can
also significantly improve existing baselines. Comment: in AAAI 2020
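The contrast between independent per-entity decoding and coordinated decoding can be sketched as follows. This is not the authors' algorithm (the function name and the greedy strategy are assumptions); it only illustrates how committing the most confident ("easy") pairs first and enforcing a one-to-one constraint resolves the many-to-one problem:

```python
import numpy as np

def greedy_one_to_one(sim):
    """Joint alignment decoding under a one-to-one constraint.

    `sim` is a (num_source x num_target) similarity matrix. A per-row
    argmax can map many source entities to one target; instead, we
    repeatedly commit the globally most confident pair and remove both
    entities from further consideration, so the easy decisions constrain
    the harder ones.
    """
    sim = np.array(sim, dtype=float)
    alignment = {}
    used = np.full(sim.shape, False)
    for _ in range(min(sim.shape)):
        masked = np.where(used, -np.inf, sim)
        i, j = np.unravel_index(np.argmax(masked), sim.shape)
        alignment[int(i)] = int(j)
        used[i, :] = True  # source i is aligned, skip its row
        used[:, j] = True  # target j is taken, skip its column
    return alignment
```

On the matrix `[[0.9, 0.8], [0.85, 0.1]]`, independent argmax decoding maps both sources to target 0, while the greedy joint decoder assigns source 0 to target 0 and source 1 to target 1. An optimal (rather than greedy) one-to-one assignment could instead be found with the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`.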
Knowledge Graph Alignment Network with Gated Multi-hop Neighborhood Aggregation
Graph neural networks (GNNs) have emerged as a powerful paradigm for
embedding-based entity alignment due to their capability of identifying
isomorphic subgraphs. However, in real knowledge graphs (KGs), the counterpart
entities usually have non-isomorphic neighborhood structures, which easily
causes GNNs to yield different representations for them. To tackle this
problem, we propose a new KG alignment network, namely AliNet, aiming at
mitigating the non-isomorphism of neighborhood structures in an end-to-end
manner. As the direct neighbors of counterpart entities are usually dissimilar
due to the schema heterogeneity, AliNet introduces distant neighbors to expand
the overlap between their neighborhood structures. It employs an attention
mechanism to highlight helpful distant neighbors and reduce noise. Then, it
controls the aggregation of both direct and distant neighborhood information
using a gating mechanism. We further propose a relation loss to refine entity
representations. We perform thorough experiments with detailed ablation studies
and analyses on five entity alignment datasets, demonstrating the effectiveness
of AliNet. Comment: Accepted by the 34th AAAI Conference on Artificial
Intelligence (AAAI 2020)
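The gating step described above can be sketched in a few lines. This is an illustrative simplification, not the released AliNet implementation: the gate parameters `W_g` and `b_g` and the exact combination rule are assumptions, but the core idea is the same, a learned sigmoid gate that decides, per entity and dimension, how much distant-neighbor information to mix into the direct-neighbor representation:

```python
import numpy as np

def gated_aggregation(h_direct, h_distant, W_g, b_g):
    """Gated combination of one-hop and distant neighborhood views.

    h_direct:  (n, d) aggregation over direct neighbors
    h_distant: (n, d) attention-weighted aggregation over distant neighbors
    W_g, b_g:  learned gate parameters, shapes (d, d) and (d,)
    """
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Gate in (0, 1): close to 1 keeps the direct-neighbor view,
    # close to 0 lets distant-neighbor information dominate.
    g = sigmoid(h_direct @ W_g + b_g)
    return g * h_direct + (1.0 - g) * h_distant
```

When the schemas of two KGs diverge and the direct neighborhoods of counterpart entities overlap little, the gate can open toward the distant view, which is precisely the non-isomorphism mitigation the abstract describes.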