5 research outputs found
MMKG: Multi-Modal Knowledge Graphs
We present MMKG, a collection of three knowledge graphs that contain both
numerical features and (links to) images for all entities as well as entity
alignments between pairs of KGs. Therefore, the multi-relational link prediction
and entity matching communities can benefit from this resource. We believe this
data set has the potential to facilitate the development of novel multi-modal
learning approaches for knowledge graphs. We validate the utility of MMKG in the
sameAs link prediction task with an extensive set of experiments. These
experiments show that the task at hand benefits from learning multiple
feature types.
Comment: ESWC 2019
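The abstract does not spell out how the different feature types are combined for sameAs prediction, so the toy Python sketch below only illustrates the general idea: score candidate alignments between two KGs by averaging per-modality similarities. The random feature matrices, dimensions, and the averaging rule are placeholder assumptions, not the models evaluated in the paper.

```python
import numpy as np

# Hypothetical toy features: in MMKG each entity carries structural, numerical,
# and visual information; random vectors stand in for each modality here.
rng = np.random.default_rng(0)
n_src, n_tgt, d = 5, 5, 8
src = {m: rng.normal(size=(n_src, d)) for m in ("struct", "numeric", "visual")}
tgt = {m: rng.normal(size=(n_tgt, d)) for m in ("struct", "numeric", "visual")}

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Score sameAs candidates by averaging per-modality similarities, so every
# feature type contributes to the alignment decision.
scores = np.mean([cosine(src[m], tgt[m]) for m in src], axis=0)

# For each source entity, predict the most similar target entity as its match.
predicted_alignment = scores.argmax(axis=1)
print(predicted_alignment)
```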
Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications
Representing entities and relations in an embedding space is a well-studied
approach for machine learning on relational data. Existing approaches, however,
primarily focus on improving accuracy and overlook other aspects such as
robustness and interpretability. In this paper, we propose adversarial
modifications for link prediction models: identifying the fact to add into or
remove from the knowledge graph that changes the prediction for a target fact
after the model is retrained. Using these single modifications of the graph, we
identify the most influential fact for a predicted link and evaluate the
sensitivity of the model to the addition of fake facts. We introduce an
efficient approach to estimate the effect of such modifications by
approximating the change in the embeddings when the knowledge graph changes. To
avoid the combinatorial search over all possible facts, we train a network to
decode embeddings to their corresponding graph components, allowing the use of
gradient-based optimization to identify the adversarial modification. We use
these techniques to evaluate the robustness of link prediction models (by
measuring sensitivity to additional facts), study interpretability through the
facts most responsible for predictions (by identifying the most influential
neighbors), and detect incorrect facts in the knowledge base.
Comment: Published at NAACL 2019
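As a rough illustration of the idea of approximating the effect of a graph modification without full retraining, the sketch below ranks candidate facts by taking a single gradient step on each candidate and measuring how much a DistMult-style target score moves. The scoring function, the one-step update, and all sizes are illustrative assumptions, not the paper's actual estimator.

```python
import torch

torch.manual_seed(0)
n_ent, n_rel, d = 6, 3, 8
E = torch.nn.Parameter(torch.randn(n_ent, d) * 0.1)   # entity embeddings
R = torch.nn.Parameter(torch.randn(n_rel, d) * 0.1)   # relation embeddings

def score(s, r, o):
    # DistMult-style triple score; the models studied in the paper may differ.
    return (E[s] * R[r] * E[o]).sum()

target = (0, 1, 2)                                # fact whose prediction we probe
candidates = [(0, 0, 3), (2, 2, 4), (0, 1, 5)]    # hypothetical facts to add
lr = 0.1

# Approximate "retrain after adding a fact" by a single gradient step that
# increases the candidate's score, then measure how much the target score moves
# (a crude one-step proxy for the embedding-change approximation described above).
influences = {}
for cand in candidates:
    grads = torch.autograd.grad(-score(*cand), (E, R))
    with torch.no_grad():
        E_new, R_new = E - lr * grads[0], R - lr * grads[1]
        s, r, o = target
        new_score = (E_new[s] * R_new[r] * E_new[o]).sum()
        influences[cand] = (new_score - score(*target)).item()

# The candidate whose addition changes the target prediction the most.
print(max(influences, key=lambda c: abs(influences[c])))
```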
Graph Neural Processes: Towards Bayesian Graph Neural Networks
We introduce Graph Neural Processes (GNP), inspired by the recent work in
conditional and latent neural processes. A Graph Neural Process is defined as a
Conditional Neural Process that operates on arbitrary graph data. It takes
features of sparsely observed context points as input, and outputs a
distribution over target points. We demonstrate graph neural processes in edge
imputation and discuss benefits and drawbacks of the method for other
application areas. One major benefit of GNPs is the ability to quantify
uncertainty in deep learning on graph structures. An additional benefit of this
method is the ability to extend graph neural networks to inputs of dynamically
sized graphs.
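A minimal conditional-neural-process-style sketch of edge imputation with predictive uncertainty is given below, assuming PyTorch; the encoder/decoder shapes, the mean aggregation over context edges, and the Gaussian output head are illustrative choices rather than the GNP architecture itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Encode observed (context) edges, aggregate them into one representation, and
# decode each target edge into a predictive Normal distribution, which is where
# the uncertainty quantification comes from.
d_feat, d_rep = 4, 16
encoder = nn.Sequential(nn.Linear(d_feat + 1, d_rep), nn.ReLU(), nn.Linear(d_rep, d_rep))
decoder = nn.Sequential(nn.Linear(d_rep + d_feat, d_rep), nn.ReLU(), nn.Linear(d_rep, 2))

def predict(context_x, context_y, target_x):
    # Permutation-invariant aggregation of the sparsely observed context edges.
    rep = encoder(torch.cat([context_x, context_y], dim=-1)).mean(dim=0)
    rep = rep.expand(target_x.shape[0], -1)
    out = decoder(torch.cat([rep, target_x], dim=-1))
    mean, log_std = out[:, :1], out[:, 1:]
    return torch.distributions.Normal(mean, log_std.exp())

# Toy data: 10 observed edges (features + scalar value), 3 edges to impute.
ctx_x, ctx_y = torch.randn(10, d_feat), torch.randn(10, 1)
dist = predict(ctx_x, ctx_y, torch.randn(3, d_feat))
print(dist.mean.squeeze(), dist.stddev.squeeze())
```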
Relational inductive biases, deep learning, and graph networks
Artificial intelligence (AI) has undergone a renaissance recently, making
major progress in key domains such as vision, language, control, and
decision-making. This has been due, in part, to cheap data and cheap compute
resources, which have fit the natural strengths of deep learning. However, many
defining characteristics of human intelligence, which developed under much
different pressures, remain out of reach for current approaches. In particular,
generalizing beyond one's experiences--a hallmark of human intelligence from
infancy--remains a formidable challenge for modern AI.
The following is part position paper, part review, and part unification. We
argue that combinatorial generalization must be a top priority for AI to
achieve human-like abilities, and that structured representations and
computations are key to realizing this objective. Just as biology uses nature
and nurture cooperatively, we reject the false choice between
"hand-engineering" and "end-to-end" learning, and instead advocate for an
approach which benefits from their complementary strengths. We explore how
using relational inductive biases within deep learning architectures can
facilitate learning about entities, relations, and rules for composing them. We
present a new building block for the AI toolkit with a strong relational
inductive bias--the graph network--which generalizes and extends various
approaches for neural networks that operate on graphs, and provides a
straightforward interface for manipulating structured knowledge and producing
structured behaviors. We discuss how graph networks can support relational
reasoning and combinatorial generalization, laying the foundation for more
sophisticated, interpretable, and flexible patterns of reasoning. As a
companion to this paper, we have released an open-source software library for
building graph networks, with demonstrations of how to use them in practice.
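The released library is not reproduced here; instead, the sketch below walks through one pass of a graph-network-style block in the spirit of the paper's formulation: edge update, per-node edge aggregation, node update, and global update. The shapes, the sum aggregation, the tanh nonlinearity, and the random linear maps standing in for learned functions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: node attributes V, edge attributes E, global attribute u.
n_nodes, d = 4, 3
senders   = np.array([0, 1, 2, 3])
receivers = np.array([1, 2, 3, 0])
V = rng.normal(size=(n_nodes, d))
E = rng.normal(size=(len(senders), d))
u = rng.normal(size=(d,))

W_e = rng.normal(size=(4 * d, d))   # edge update:   [e, v_s, v_r, u]      -> e'
W_v = rng.normal(size=(3 * d, d))   # node update:   [agg_e, v, u]         -> v'
W_u = rng.normal(size=(3 * d, d))   # global update: [agg_e, agg_v, u]     -> u'

# 1. Update every edge from its own attribute, its endpoints, and the global.
E_in = np.concatenate([E, V[senders], V[receivers], np.tile(u, (len(E), 1))], axis=1)
E2 = np.tanh(E_in @ W_e)

# 2. Aggregate incoming edges per receiver node, then update node attributes.
agg_e = np.zeros((n_nodes, d))
np.add.at(agg_e, receivers, E2)
V2 = np.tanh(np.concatenate([agg_e, V, np.tile(u, (n_nodes, 1))], axis=1) @ W_v)

# 3. Aggregate all edges and nodes, then update the global attribute.
u2 = np.tanh(np.concatenate([E2.sum(0), V2.sum(0), u]) @ W_u)
print(E2.shape, V2.shape, u2.shape)
```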
Computing Graph Neural Networks: A Survey from Algorithms to Accelerators
Graph Neural Networks (GNNs) have exploded onto the machine learning scene in
recent years owing to their capability to model and learn from graph-structured
data. Such an ability has strong implications in a wide variety of fields whose
data is inherently relational, for which conventional neural networks do not
perform well. Indeed, as recent reviews can attest, research in the area of
GNNs has grown rapidly and has led to the development of a variety of GNN
algorithm variants as well as to the exploration of groundbreaking applications
in chemistry, neurology, electronics, and communication networks, among others.
At the current stage of research, however, the efficient processing of GNNs is
still an open challenge for several reasons. Besides their novelty, GNNs are
hard to compute due to their dependence on the input graph, their combination
of dense and very sparse operations, and the need to scale to huge graphs in
some applications. In this context, this paper aims to make two main
contributions. On the one hand, a review of the field of GNNs is presented from
the perspective of computing. This includes a brief tutorial on the GNN
fundamentals, an overview of the evolution of the field in the last decade, and
a summary of operations carried out in the multiple phases of different GNN
algorithm variants. On the other hand, an in-depth analysis of current software
and hardware acceleration schemes is provided, from which a hardware-software,
graph-aware, and communication-centric vision for GNN accelerators is
distilled.
Comment: 35 pages, 9 figures, 8 tables, 188 references
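To make the "dense and very sparse operations" point concrete, the sketch below writes one GCN-style layer as a sparse neighbor aggregation (SpMM over the adjacency) followed by a dense feature transform (GEMM). The graph, the degree normalization, and all sizes are placeholders chosen only to expose the compute pattern the survey analyzes.

```python
import numpy as np
from scipy.sparse import coo_matrix

rng = np.random.default_rng(0)

# Tiny undirected ring graph with 6 nodes.
n, d_in, d_out = 6, 8, 4
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 0]])
A = coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])), shape=(n, n)).tocsr()
A = A + A.T                                  # symmetrize the adjacency

deg = np.asarray(A.sum(axis=1)).ravel()
D_inv = 1.0 / np.maximum(deg, 1)             # simple mean-neighbor normalization

X = rng.normal(size=(n, d_in))               # node features
W = rng.normal(size=(d_in, d_out))           # learned weights (random here)

H = (A @ X) * D_inv[:, None]                 # very sparse, irregular memory access
H = np.maximum(H @ W, 0)                     # dense, regular GEMM + ReLU
print(H.shape)
```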