3,090 research outputs found
Composition-based Multi-Relational Graph Convolutional Networks
Graph Convolutional Networks (GCNs) have recently been shown to be quite
successful in modeling graph-structured data. However, the primary focus has
been on handling simple undirected graphs. Multi-relational graphs are a more
general and prevalent form of graphs where each edge has a label and direction
associated with it. Most of the existing approaches to handle such graphs
suffer from over-parameterization and are restricted to learning
representations of nodes only. In this paper, we propose CompGCN, a novel Graph
Convolutional framework which jointly embeds both nodes and relations in a
relational graph. CompGCN leverages a variety of entity-relation composition
operations from Knowledge Graph Embedding techniques and scales with the number
of relations. It also generalizes several of the existing multi-relational GCN
methods. We evaluate our proposed method on multiple tasks such as node
classification, link prediction, and graph classification, and achieve
demonstrably superior results. We make the source code of CompGCN available to
foster reproducible research.
Comment: In Proceedings of ICLR 202
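The entity-relation composition operations the abstract refers to can be illustrated with small stand-ins. A minimal NumPy sketch (function names are my own) of three compositions commonly borrowed from knowledge graph embedding methods: subtraction (TransE-style), element-wise multiplication (DistMult-style), and circular correlation (HolE-style):

```python
import numpy as np

def comp_sub(e, r):
    # Subtraction composition (TransE-style): phi(e, r) = e - r
    return e - r

def comp_mult(e, r):
    # Element-wise multiplication (DistMult-style): phi(e, r) = e * r
    return e * r

def comp_corr(e, r):
    # Circular correlation (HolE-style), computed via the FFT:
    # [e * r]_k = sum_i e_i r_{(i+k) mod d}
    return np.fft.ifft(np.conj(np.fft.fft(e)) * np.fft.fft(r)).real
```

Because each operation maps a (node, relation) pair to a single vector of the same dimension, the number of parameters stays independent of the number of relations, which is the scaling property the abstract highlights.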
Answering Visual-Relational Queries in Web-Extracted Knowledge Graphs
A visual-relational knowledge graph (KG) is a multi-relational graph whose
entities are associated with images. We explore novel machine learning
approaches for answering visual-relational queries in web-extracted knowledge
graphs. To this end, we have created ImageGraph, a KG with 1,330 relation
types, 14,870 entities, and 829,931 images crawled from the web. With
visual-relational KGs such as ImageGraph one can introduce novel probabilistic
query types in which images are treated as first-class citizens. Both the
prediction of relations between unseen images as well as multi-relational image
retrieval can be expressed with specific families of visual-relational queries.
We introduce novel combinations of convolutional networks and knowledge graph
embedding methods to answer such queries. We also explore a zero-shot learning
scenario where an image of an entirely new entity is linked with multiple
relations to entities of an existing KG. The resulting multi-relational
grounding of unseen entity images into a knowledge graph serves as a semantic
entity representation. We conduct experiments to demonstrate that the proposed
methods can answer these visual-relational queries efficiently and accurately.
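One generic way to combine a convolutional network with a knowledge graph embedding scorer, as the abstract describes, is to project an image feature into the entity embedding space and score it against a relation and candidate entity. The projection-then-score scheme below is a hypothetical illustration, not the paper's actual model:

```python
import numpy as np

def visual_triple_score(img_feat, W, r, t):
    # Hypothetical sketch: a CNN feature for the query image is projected
    # into the entity embedding space by a learned matrix W, then scored
    # TransE-style against relation r and candidate entity t
    # (lower score = more plausible triple).
    h = W @ img_feat
    return np.linalg.norm(h + r - t)
```

Ranking all candidate entities by this score answers a visual-relational query such as "which entities stand in relation r to the entity shown in this image?".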
Quaternion Knowledge Graph Embeddings
In this work, we move beyond the traditional complex-valued representations,
introducing more expressive hypercomplex representations to model entities and
relations for knowledge graph embeddings. More specifically, quaternion
embeddings, hypercomplex-valued embeddings with three imaginary components, are
utilized to represent entities. Relations are modelled as rotations in the
quaternion space. The advantages of the proposed approach are: (1) Latent
inter-dependencies (between all components) are aptly captured with the Hamilton
product, encouraging a more compact interaction between entities and relations;
(2) Quaternions enable expressive rotation in four-dimensional space and have
more degrees of freedom than rotation in the complex plane; (3) The proposed
framework is a generalization of ComplEx on hypercomplex space while offering
better geometrical interpretations, concurrently satisfying the key desiderata
of relational representation learning (i.e., modeling symmetry, anti-symmetry
and inversion). Experimental results demonstrate that our method achieves
state-of-the-art performance on four well-established knowledge graph
completion benchmarks.
Comment: Accepted by NeurIPS 201
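The Hamilton product that drives these rotations is fully determined by the quaternion algebra (i^2 = j^2 = k^2 = ijk = -1). A small NumPy sketch, with quaternions stored as (a, b, c, d) for a + bi + cj + dk:

```python
import numpy as np

def hamilton(q1, q2):
    # Hamilton product of two quaternions (a, b, c, d) = a + bi + cj + dk.
    # Note it is non-commutative: hamilton(q1, q2) != hamilton(q2, q1).
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])
```

For example, hamilton(i, j) gives k while hamilton(j, i) gives -k; this non-commutativity is what lets quaternion rotations model asymmetric relations.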
Relational inductive biases, deep learning, and graph networks
Artificial intelligence (AI) has undergone a renaissance recently, making
major progress in key domains such as vision, language, control, and
decision-making. This has been due, in part, to cheap data and cheap compute
resources, which have fit the natural strengths of deep learning. However, many
defining characteristics of human intelligence, which developed under much
different pressures, remain out of reach for current approaches. In particular,
generalizing beyond one's experiences--a hallmark of human intelligence from
infancy--remains a formidable challenge for modern AI.
The following is part position paper, part review, and part unification. We
argue that combinatorial generalization must be a top priority for AI to
achieve human-like abilities, and that structured representations and
computations are key to realizing this objective. Just as biology uses nature
and nurture cooperatively, we reject the false choice between
"hand-engineering" and "end-to-end" learning, and instead advocate for an
approach which benefits from their complementary strengths. We explore how
using relational inductive biases within deep learning architectures can
facilitate learning about entities, relations, and rules for composing them. We
present a new building block for the AI toolkit with a strong relational
inductive bias--the graph network--which generalizes and extends various
approaches for neural networks that operate on graphs, and provides a
straightforward interface for manipulating structured knowledge and producing
structured behaviors. We discuss how graph networks can support relational
reasoning and combinatorial generalization, laying the foundation for more
sophisticated, interpretable, and flexible patterns of reasoning. As a
companion to this paper, we have released an open-source software library for
building graph networks, with demonstrations of how to use them in practice.
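A graph network block of the kind described (edge update, per-node aggregation, node update, global update) can be sketched with plain sums standing in for the learned update functions; all names below are illustrative:

```python
import numpy as np

def gn_block(V, E, senders, receivers, u):
    # One simplified graph-network block. The update functions here are
    # plain sums and means purely for illustration; in practice they are
    # learned neural networks.
    # 1) Edge update: each edge sees its own feature, both endpoint
    #    features, and the global feature u.
    E_new = E + V[senders] + V[receivers] + u
    # 2) Aggregate incoming edge messages per receiver node.
    agg = np.zeros_like(V)
    np.add.at(agg, receivers, E_new)
    # 3) Node update from the aggregated messages and the global feature.
    V_new = V + agg + u
    # 4) Global update from aggregated edge and node features.
    u_new = u + E_new.mean(axis=0) + V_new.mean(axis=0)
    return V_new, E_new, u_new
```

Stacking such blocks propagates information along edges, which is what supports the relational reasoning the paper argues for.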
Language-Conditioned Graph Networks for Relational Reasoning
Solving grounded language tasks often requires reasoning about relationships
between objects in the context of a given task. For example, to answer the
question "What color is the mug on the plate?" we must check the color of the
specific mug that satisfies the "on" relationship with respect to the plate.
Recent work has proposed various methods capable of complex relational
reasoning. However, most of their power is in the inference structure, while
the scene is represented with simple local appearance features. In this paper,
we take an alternate approach and build contextualized representations for
objects in a visual scene to support relational reasoning. We propose a general
framework of Language-Conditioned Graph Networks (LCGN), where each node
represents an object, and is described by a context-aware representation from
related objects through iterative message passing conditioned on the textual
input. E.g., conditioning on the "on" relationship to the plate, the object
"mug" gathers messages from the object "plate" to update its representation to
"mug on the plate", which can be easily consumed by a simple classifier for
answer prediction. We experimentally show that our LCGN approach effectively
supports relational reasoning and improves performance across several tasks and
datasets. Our code is available at http://ronghanghu.com/lcgn
Context-Aware Visual Compatibility Prediction
How do we determine whether two or more clothing items are compatible or
visually appealing? Part of the answer lies in an understanding of visual
aesthetics, and is biased by personal preferences shaped by social attitudes,
time, and place. In this work we propose a method that predicts compatibility
between two items based on their visual features, as well as their context. We
define context as the products that are known to be compatible with each of
these items. Our model contrasts with other metric learning approaches that
rely on pairwise comparisons between item features alone. We address the
compatibility prediction problem using a graph neural network that learns to
generate product embeddings conditioned on their context. We present results
for two prediction tasks (fill in the blank and outfit compatibility) tested on
two fashion datasets Polyvore and Fashion-Gen, and on a subset of the Amazon
dataset; we achieve state-of-the-art results when using context information and
show how test performance improves as more context is used.
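The idea of scoring compatibility between context-conditioned embeddings can be sketched with a single hand-written message-passing step; this is an illustrative toy, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_embedding(x, neighbours):
    # Context-aware item embedding (toy sketch): mix an item's own visual
    # feature with the mean feature of items known to be compatible with
    # it, like one message-passing step of a graph neural network.
    if len(neighbours) == 0:
        return x
    return 0.5 * x + 0.5 * np.mean(neighbours, axis=0)

def compatibility(x1, n1, x2, n2):
    # Predicted compatibility: sigmoid of the inner product of the two
    # context-conditioned embeddings.
    return sigmoid(context_embedding(x1, n1) @ context_embedding(x2, n2))
```

With empty neighbour sets this degenerates to the plain pairwise comparison the paper argues against; the context terms are what distinguish the approach.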
Graph Neural Pre-training for Enhancing Recommendations using Side Information
Leveraging the side information associated with entities (i.e. users and
items) to enhance the performance of recommendation systems has been widely
recognized as an important modelling dimension. While many existing approaches
focus on the integration scheme to incorporate entity side information -- by
combining the recommendation loss function with an extra side information-aware
loss -- in this paper, we propose instead a novel pre-training scheme for
leveraging the side information. In particular, we first pre-train a
representation model using the side information of the entities, and then
fine-tune it using an existing general representation-based recommendation
model. Specifically, we propose two pre-training models, named GCN-P and COM-P,
by considering the entities and their relations constructed from side
information as two different types of graphs respectively, to pre-train entity
embeddings. For the GCN-P model, two single-relational graphs are constructed
from all the users' and items' side information respectively, to pre-train
entity representations by using the Graph Convolutional Networks. For the COM-P
model, two multi-relational graphs are constructed to pre-train the entity
representations by using the Composition-based Graph Convolutional Networks. An
extensive evaluation of our pre-training models fine-tuned under four general
representation-based recommender models, i.e. MF, NCF, NGCF and LightGCN, shows
that effectively pre-training embeddings with both the users' and items' side
information can significantly improve these original models in terms of both
effectiveness and stability.
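The pre-train-then-fine-tune idea can be caricatured with a toy smoothing step in place of a trained GCN; everything below is an illustrative stand-in, not GCN-P or COM-P themselves:

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrain_embeddings(adj, dim=4, steps=50):
    # Toy stand-in for graph-based pre-training: produce entity embeddings
    # in which entities connected in the side-information graph end up with
    # similar vectors, via repeated neighbour averaging (one smoothing
    # step per iteration). A real model would learn these with a GCN.
    n = adj.shape[0]
    Z = rng.normal(size=(n, dim))
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    for _ in range(steps):
        Z = (Z + adj @ Z) / deg
    return Z
```

The resulting embeddings would then initialize the entity embeddings of a downstream recommender (e.g. MF or LightGCN) before fine-tuning on the recommendation loss.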
A survey of embedding models of entities and relationships for knowledge graph completion
Knowledge graphs (KGs) of real-world facts about entities and their
relationships are useful resources for a variety of natural language processing
tasks. However, because knowledge graphs are typically incomplete, it is useful
to perform knowledge graph completion or link prediction, i.e. predict whether
a relationship not in the knowledge graph is likely to be true. This paper
serves as a comprehensive survey of embedding models of entities and
relationships for knowledge graph completion, summarizing up-to-date
experimental results on standard benchmark datasets and pointing out potential
future research directions.
Comment: 13 pages, 2 figures and 6 tables
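As a concrete example of the kind of embedding model such surveys cover, a TransE-style scorer and tail ranking fit in a few lines (illustrative only; names are my own):

```python
import numpy as np

def transe_score(h, r, t):
    # TransE models relations as translations: plausible triples have
    # h + r approximately equal to t, so a lower distance means a more
    # plausible triple.
    return np.linalg.norm(h + r - t, ord=1)

def rank_tails(h, r, T):
    # Rank all candidate tail entities by score
    # (ascending: most plausible first).
    scores = np.array([transe_score(h, r, t) for t in T])
    return np.argsort(scores)
```

Link prediction then amounts to scoring every candidate entity and reading off the top of the ranking, which is also how the benchmark metrics the survey reports are computed.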
Explainable Link Prediction for Emerging Entities in Knowledge Graphs
Despite their large-scale coverage, cross-domain knowledge graphs invariably
suffer from inherent incompleteness and sparsity. Link prediction can alleviate
this by inferring a target entity, given a source entity and a query relation.
Recent embedding-based approaches operate in an uninterpretable latent semantic
vector space of entities and relations, while path-based approaches operate in
the symbolic space, making the inference process explainable. However, these
approaches typically consider static snapshots of the knowledge graphs,
severely restricting their applicability for evolving knowledge graphs with
newly emerging entities. To overcome this issue, we propose an inductive
representation learning framework that is able to learn representations of
previously unseen entities. Our method finds reasoning paths between source and
target entities, thereby making the link prediction for unseen entities
interpretable and providing support evidence for the inferred link.
Comment: To appear in the proceedings of the International Semantic Web
Conference, 2020 (ISWC 2020).
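The symbolic, path-based side of the comparison can be illustrated with a breadth-first search over triples; this toy sketch is not the proposed inductive framework itself:

```python
from collections import deque

def find_path(triples, source, target, max_len=3):
    # Toy path-based reasoning sketch: breadth-first search over KG triples
    # returns a relation path from source to target, which can serve as
    # human-readable support evidence for a predicted link.
    frontier = deque([(source, [])])
    seen = {source}
    while frontier:
        node, path = frontier.popleft()
        if node == target:
            return path
        if len(path) == max_len:
            continue
        for h, r, t in triples:
            if h == node and t not in seen:
                seen.add(t)
                frontier.append((t, path + [(h, r, t)]))
    return None  # no path within max_len hops
```

Because the returned path is a sequence of concrete triples, it doubles as an explanation, which is exactly what latent-vector scoring alone cannot provide.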
Knowledge Graph Embeddings and Explainable AI
Knowledge graph embeddings are now a widely adopted approach to knowledge
representation in which entities and relationships are embedded in vector
spaces. In this chapter, we introduce the reader to the concept of knowledge
graph embeddings by explaining what they are, how they can be generated and how
they can be evaluated. We summarize the state-of-the-art in this field by
describing the approaches that have been introduced to represent knowledge in
the vector space. In relation to knowledge representation, we consider the
problem of explainability, and discuss models and methods for explaining
predictions obtained via knowledge graph embeddings.
Comment: Federico Bianchi, Gaetano Rossiello, Luca Costabello, Matteo
Palmonari, Pasquale Minervini, Knowledge Graph Embeddings and Explainable AI.
In: Ilaria Tiddi, Freddy Lecue, Pascal Hitzler (eds.), Knowledge Graphs for
eXplainable AI -- Foundations, Applications and Challenges. Studies on the
Semantic Web, IOS Press, Amsterdam, 202
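The evaluation protocols such chapters describe usually reduce to ranking metrics over held-out triples; a minimal sketch of mean reciprocal rank and Hits@k:

```python
import numpy as np

def mrr_and_hits(ranks, k=10):
    # Standard KG-completion metrics from the ranks of the true entity
    # among all candidates (rank 1 = best): mean reciprocal rank (MRR)
    # and the fraction of test triples ranked within the top k (Hits@k).
    ranks = np.asarray(ranks, dtype=float)
    mrr = (1.0 / ranks).mean()
    hits = (ranks <= k).mean()
    return mrr, hits
```

These are the numbers reported on the standard completion benchmarks referenced throughout this listing.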