Gravity-Inspired Graph Autoencoders for Directed Link Prediction
Graph autoencoders (AE) and variational autoencoders (VAE) recently emerged
as powerful node embedding methods. In particular, graph AE and VAE were
successfully leveraged to tackle the challenging link prediction problem,
aiming at figuring out whether some pairs of nodes from a graph are connected
by unobserved edges. However, these models focus on undirected graphs and
therefore ignore the potential direction of the link, which is limiting for
numerous real-life applications. In this paper, we extend the graph AE and VAE
frameworks to address link prediction in directed graphs. We present a new
gravity-inspired decoder scheme that can effectively reconstruct directed
graphs from a node embedding. We empirically evaluate our method on three
different directed link prediction tasks, for which standard graph AE and VAE
perform poorly. We achieve competitive results on three real-world graphs,
outperforming several popular baselines.
Comment: ACM International Conference on Information and Knowledge Management (CIKM 2019)
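The gravity analogy in the abstract can be made concrete: the decoder scores a directed edge i → j using a learned "mass" for the target node j and the squared distance between the two node embeddings, which makes the score asymmetric. The sketch below is a minimal NumPy illustration of that idea; the variable names and the exact form sigmoid(m_j − λ · log ‖z_i − z_j‖²) are our reading of the abstract, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gravity_decoder(z, m, lam=1.0):
    """Score all directed edges i -> j from node embeddings.

    z   : (n, d) array of node position embeddings
    m   : (n,) array of per-node "mass" parameters
    lam : weight on the distance term (illustrative default)

    The score depends on the mass of the *target* node j, not the
    source i, so scores[i, j] != scores[j, i] in general -- this
    asymmetry is what lets the decoder reconstruct edge directions.
    """
    # pairwise squared Euclidean distances via broadcasting
    diff = z[:, None, :] - z[None, :, :]
    dist2 = np.sum(diff ** 2, axis=-1)
    # clamp the diagonal to avoid log(0)
    dist2 = np.maximum(dist2, 1e-12)
    return sigmoid(m[None, :] - lam * np.log(dist2))

# toy usage: node 1 has the larger mass, so the edge toward it
# scores higher than the reverse edge
z = np.array([[0.0, 0.0], [1.0, 0.0]])
m = np.array([0.0, 2.0])
scores = gravity_decoder(z, m)
```

Here `scores[0, 1]` exceeds `scores[1, 0]`, illustrating how the per-node mass breaks the symmetry that standard inner-product decoders impose.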
Foundations and modelling of dynamic networks using Dynamic Graph Neural Networks: A survey
Dynamic networks are used in a wide range of fields, including social network
analysis, recommender systems, and epidemiology. Representing complex networks
as structures changing over time allows network models to leverage not only
structural but also temporal patterns. However, as dynamic network literature
stems from diverse fields and makes use of inconsistent terminology, it is
challenging to navigate. Meanwhile, graph neural networks (GNNs) have gained a
lot of attention in recent years for their ability to perform well on a range
of network science tasks, such as link prediction and node classification.
Despite the popularity of graph neural networks and the proven benefits of
dynamic network models, there has been little focus on graph neural networks
for dynamic networks. To address the challenges resulting from the fact that
this research crosses diverse fields as well as to survey dynamic graph neural
networks, this work is split into two main parts. First, to address the
ambiguity of the dynamic network terminology we establish a foundation of
dynamic networks with consistent, detailed terminology and notation. Second, we
present a comprehensive survey of dynamic graph neural network models using the
proposed terminology.
Comment: 28 pages, 9 figures, 8 tables
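A common way the surveyed literature represents a discrete-time dynamic network is as a sequence of graph snapshots, from which temporal patterns such as edge additions and removals can be read off directly. The sketch below illustrates that representation with toy data; the node names and helper functions are illustrative, not taken from the survey.

```python
# A discrete-time dynamic network as a list of edge-set snapshots,
# one per time step (toy example).
snapshots = [
    {("a", "b"), ("b", "c")},              # t = 0
    {("a", "b"), ("b", "c"), ("a", "c")},  # t = 1
    {("b", "c"), ("a", "c")},              # t = 2
]

def edges_added(prev, curr):
    """Edges present at time t but absent at t-1."""
    return curr - prev

def edges_removed(prev, curr):
    """Edges present at time t-1 but absent at t."""
    return prev - curr

added = edges_added(snapshots[0], snapshots[1])      # {("a", "c")}
removed = edges_removed(snapshots[1], snapshots[2])  # {("a", "b")}
```

Models that consume only a single snapshot see the structural patterns; models that consume the full sequence can additionally exploit the temporal patterns, which is the benefit the abstract refers to.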
Resource Constrained Structured Prediction
We study the problem of structured prediction under test-time budget
constraints. We propose a novel approach applicable to a wide range of
structured prediction problems in computer vision and natural language
processing. Our approach seeks to adaptively generate computationally costly
features during test-time in order to reduce the computational cost of
prediction while maintaining prediction performance. We show that training the
adaptive feature generation system can be reduced to a series of structured
learning problems, resulting in efficient training using existing structured
learning algorithms. This framework provides theoretical justification for
several existing heuristic approaches found in literature. We evaluate our
proposed adaptive system on two structured prediction tasks, optical character
recognition (OCR) and dependency parsing, and show strong performance in
reducing feature costs without degrading accuracy.
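The core idea of adaptive test-time feature generation can be sketched as a confidence-gated pipeline: always compute cheap features, and pay for a costly feature only when the cheap prediction is uncertain. This is a hedged illustration of the general pattern, not the paper's actual system; the models, threshold, and cost accounting are all stand-ins.

```python
def predict_with_budget(x, cheap_model, rich_model, costly_feature,
                        confidence_threshold=0.9):
    """Adaptively decide whether to generate an expensive feature.

    Returns (score, feature_cost). The costly feature is generated
    only when the cheap-feature prediction is not confident enough,
    trading a small accuracy risk for a large cost saving.
    """
    cheap_score = cheap_model(x)               # probability in [0, 1]
    confidence = max(cheap_score, 1 - cheap_score)
    if confidence >= confidence_threshold:
        return cheap_score, 0.0                # skip the costly feature
    feat = costly_feature(x)                   # pay the feature cost
    return rich_model(x, feat), 1.0

# toy usage with stand-in models
cheap = lambda x: 0.95 if x > 0 else 0.4
rich = lambda x, f: 0.9 if f > 0 else 0.1
feat = lambda x: x

score_easy, cost_easy = predict_with_budget(5, cheap, rich, feat)
score_hard, cost_hard = predict_with_budget(-2, cheap, rich, feat)
```

In the toy run, the confident input incurs zero feature cost while the uncertain one triggers the expensive feature; the paper's contribution is learning such acquisition policies jointly via structured learning rather than hand-tuning thresholds like this one.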