Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding
Entity alignment is the task of finding entities in two knowledge bases (KBs)
that represent the same real-world object. When facing KBs in different natural
languages, conventional cross-lingual entity alignment methods rely on machine
translation to eliminate the language barriers. These approaches often suffer
from the uneven quality of translations between languages. While recent
embedding-based techniques encode entities and relationships in KBs and do not
need machine translation for cross-lingual entity alignment, they leave a
significant number of attributes largely unexplored. In this paper, we propose a
joint attribute-preserving embedding model for cross-lingual entity alignment.
It jointly embeds the structures of two KBs into a unified vector space and
further refines it by leveraging attribute correlations in the KBs. Our
experimental results on real-world datasets show that this approach
significantly outperforms the state-of-the-art embedding approaches for
cross-lingual entity alignment and could be complemented with methods based on
machine translation.
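To make the intuition concrete, the sketch below combines a structure-based embedding similarity with an attribute-overlap signal for a candidate entity pair. All names, vectors, and the simple interpolation are invented for illustration; the paper's joint model learns these signals jointly rather than mixing them post hoc.

```python
import numpy as np

def cosine(u, v):
    # structural similarity of two entity embeddings in the unified space
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def jaccard(a, b):
    # attribute correlation proxy: overlap of the two entities' attribute sets
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# hypothetical embeddings of one entity from each KB (values invented)
emb_en = np.array([0.9, 0.1, 0.2])
emb_zh = np.array([0.8, 0.2, 0.1])

# attribute labels mapped to a shared vocabulary (invented)
attrs_en = {"birthDate", "birthPlace", "occupation"}
attrs_zh = {"birthDate", "birthPlace", "spouse"}

alpha = 0.5  # interpolation weight; a free hyperparameter in this sketch
similarity = alpha * cosine(emb_en, emb_zh) + (1 - alpha) * jaccard(attrs_en, attrs_zh)
```

A real system would rank all cross-KB candidate pairs by such a combined score and take the top matches.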
Is Aligning Embedding Spaces a Challenging Task? A Study on Heterogeneous Embedding Alignment Methods
Representation learning of words and knowledge graphs (KGs) into
low-dimensional vector spaces, along with its applications to many real-world
scenarios, has recently gained momentum. In order to make use of multiple KG
embeddings for knowledge-driven applications such as question answering, named
entity disambiguation, knowledge graph completion, etc., alignment of different
KG embedding spaces is necessary. In addition to multilinguality and
domain-specific information, different KGs pose the problem of structural
differences making the alignment of the KG embeddings more challenging. This
paper provides a theoretical analysis and comparison of the state-of-the-art
alignment methods between two embedding spaces representing entity-entity and
entity-word. This paper also aims at assessing the capabilities and
shortcomings of the existing alignment methods in the context of different
applications.
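For reference, one widely used baseline for aligning two embedding spaces (a standard technique, not necessarily one of the methods analysed in this paper) is the orthogonal Procrustes solution: given paired anchor embeddings, the optimal orthogonal map is obtained from a single SVD.

```python
import numpy as np

def procrustes_align(X, Y):
    """Return orthogonal W minimizing ||W @ X - Y||_F.
    X, Y: (dim, n_anchors) matrices of paired anchor embeddings."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt  # orthogonal map from source space to target space

# toy check: recover a known orthogonal map between two synthetic spaces
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))                  # source-space anchors
q, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # random orthogonal "ground truth"
Y = q @ X                                     # target-space anchors
W = procrustes_align(X, Y)
```

Structural differences between KGs, as the paper notes, are exactly what such a rigid linear map cannot absorb, which is why heterogeneous alignment is harder than this toy case suggests.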
Coordinated Reasoning for Cross-Lingual Knowledge Graph Alignment
Existing entity alignment methods mainly vary on the choices of encoding the
knowledge graph, but they typically use the same decoding method, which
independently chooses the local optimal match for each source entity. This
decoding method may not only cause the "many-to-one" problem but also neglect
the coordinated nature of this task, that is, each alignment decision may
highly correlate to the other decisions. In this paper, we introduce two
coordinated reasoning methods, i.e., the Easy-to-Hard decoding strategy and
joint entity alignment algorithm. Specifically, the Easy-to-Hard strategy first
retrieves the model-confident alignments from the predicted results and then
incorporates them as additional knowledge to resolve the remaining
model-uncertain alignments. To achieve this, we further propose an enhanced
alignment model that is built on the current state-of-the-art baseline. In
addition, to address the many-to-one problem, we propose to jointly predict
entity alignments so that the one-to-one constraint can be naturally
incorporated into the alignment prediction. Experimental results show that our
model achieves the state-of-the-art performance and our reasoning methods can
also significantly improve existing baselines. Comment: in AAAI 2020.
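The "many-to-one" problem described above can be seen in a three-entity toy example: independent greedy decoding lets two source entities claim the same target, while joint decoding under a one-to-one constraint does not. The scores below are invented, and the brute-force search over permutations stands in for the scalable joint algorithm the paper proposes.

```python
from itertools import permutations

# invented model confidences: score[i][j] = confidence that source i aligns to target j
score = [
    [0.90, 0.80, 0.10],
    [0.85, 0.70, 0.20],
    [0.10, 0.60, 0.30],
]

# independent greedy decoding: each source picks its own best target
greedy = [max(range(3), key=lambda j: score[i][j]) for i in range(3)]
# sources 0 and 1 both pick target 0: the "many-to-one" problem

# joint decoding with a one-to-one constraint (brute force for intuition only)
best = max(permutations(range(3)),
           key=lambda p: sum(score[i][p[i]] for i in range(3)))
```

In this instance the joint solution reassigns source 0 to target 1 so that source 1 can keep the higher-confidence target 0, which greedy decoding can never do.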
Wider Vision: Enriching Convolutional Neural Networks via Alignment to External Knowledge Bases
Deep learning models suffer from opaqueness. For Convolutional Neural
Networks (CNNs), current research strategies for explaining models focus on the
target classes within the associated training dataset. As a result, the
understanding of hidden feature map activations is limited by the
discriminative knowledge gleaned during training. The aim of our work is to
explain and expand CNN models via the mirroring or alignment of the CNN to an
external knowledge base. This will allow us to give a semantic context or label
for each visual feature. We can match CNN feature activations to nodes in our
external knowledge base. This supports knowledge-based interpretation of the
features associated with model decisions. To demonstrate our approach, we build
two separate graphs. We use an entity alignment method to align the feature
nodes in a CNN with the nodes in a ConceptNet based knowledge graph. We then
measure the proximity of CNN graph nodes to semantically meaningful knowledge
base nodes. Our results show that in the aligned embedding space, nodes from
the knowledge graph are close to the CNN feature nodes that have similar
meanings, indicating that nodes from an external knowledge base can act as
explanatory semantic references for features in the model. We analyse a variety
of graph building methods in order to improve the results from our embedding
space. We further demonstrate that by using hierarchical relationships from our
external knowledge base, we can locate new unseen classes outside the CNN
training set in our embeddings space, based on visual feature activations. This
suggests that we can adapt our approach to identify unseen classes based on CNN
feature activations. Our demonstrated approach of aligning a CNN with an
external knowledge base paves the way to reason about and beyond the trained
model, with future adaptations to explainable models and zero-shot learning.
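The core lookup this abstract describes, labeling a CNN feature node with the semantically closest node from the external knowledge base, can be sketched as a nearest-neighbor search in the aligned embedding space. All vectors and concept names below are invented placeholders, not the paper's actual ConceptNet embeddings.

```python
import numpy as np

# hypothetical aligned embedding space: ConceptNet-style concept vectors
concepts = {
    "dog":   np.array([0.90, 0.10, 0.00]),
    "wheel": np.array([0.00, 0.20, 0.95]),
    "fur":   np.array([0.60, 0.60, 0.20]),
}
feature_node = np.array([0.85, 0.20, 0.05])  # one CNN feature map's embedding

def nearest_concept(vec, concepts):
    """Label a visual feature with the closest knowledge-base node by cosine."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(concepts, key=lambda c: cos(vec, concepts[c]))

label = nearest_concept(feature_node, concepts)
```

The unseen-class idea in the abstract follows the same pattern: an activation pattern with no training-set label can still land near a knowledge-base node, and hierarchical relations around that node suggest what the unseen class might be.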