Embedding Graphs under Centrality Constraints for Network Visualization
Visual rendering of graphs is a key task in the mapping of complex network
data. Although most graph drawing algorithms emphasize aesthetic appeal,
certain applications such as travel-time maps place more importance on
visualization of structural network properties. The present paper advocates two
graph embedding approaches with centrality considerations to comply with node
hierarchy. The problem is formulated first as one of constrained
multi-dimensional scaling (MDS), and it is solved via block coordinate descent
iterations with successive approximations and guaranteed convergence to a KKT
point. In addition, a regularization term enforcing graph smoothness is
incorporated with the goal of reducing edge crossings. A second approach
leverages the locally-linear embedding (LLE) algorithm which assumes that the
graph encodes data sampled from a low-dimensional manifold. Closed-form
solutions to the resulting centrality-constrained optimization problems are
determined yielding meaningful embeddings. Experimental results demonstrate the
efficacy of both approaches, especially for visualizing large networks on the
order of thousands of nodes.
Comment: Submitted to IEEE Transactions on Visualization and Computer Graphics
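The centrality-constrained MDS idea above can be sketched in a few lines. The following is a minimal toy illustration, not the paper's block-coordinate-descent algorithm: it runs classical MDS on a shortest-path distance matrix and then rescales each node's radius so that more central nodes sit nearer the origin. The function name and the inverse-centrality radius rule are assumptions made for illustration.

```python
import numpy as np

def centrality_constrained_layout(D, centrality):
    """Toy 2-D embedding: classical MDS on distance matrix D, then each
    node's radius is rescaled so more central nodes lie closer to the
    origin (a simplification of the paper's constrained-MDS idea)."""
    n = D.shape[0]
    # Classical MDS: double-center the squared distances.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)          # eigenvalues in ascending order
    X = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))
    # Centrality constraint (illustrative): radius ~ 1 / centrality.
    angles = np.arctan2(X[:, 1], X[:, 0])
    radii = 1.0 / (centrality + 1e-9)
    radii = radii / radii.max()
    return np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
```

On a star graph, for example, the high-degree hub lands strictly closer to the origin than the leaves, which is the hierarchy the constraint is meant to encode.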
Estimating Node Importance in Knowledge Graphs Using Graph Neural Networks
How can we estimate the importance of nodes in a knowledge graph (KG)? A KG
is a multi-relational graph that has proven valuable for many tasks including
question answering and semantic search. In this paper, we present GENI, a
method for tackling the problem of estimating node importance in KGs, which
enables several downstream applications such as item recommendation and
resource allocation. While a number of approaches have been developed to
address this problem for general graphs, they do not fully utilize information
available in KGs, or lack the flexibility needed to model the complex
relationship between entities and their importance. To address these
limitations, we explore
supervised machine learning algorithms. In particular, building upon recent
advances in graph neural networks (GNNs), we develop GENI, a GNN-based
method designed to deal with distinctive challenges involved with predicting
node importance in KGs. Rather than aggregating node embeddings, our method
aggregates importance scores via a predicate-aware attention mechanism and a
flexible centrality adjustment. In our evaluation of GENI and existing
methods on predicting node importance in real-world KGs with different
characteristics, GENI achieves 5-17% higher NDCG@100 than the state of the art.Comment: KDD 2019 Research Track. 11 pages. Changelog: Type 3 font removed,
and minor updates made in the Appendix (v2
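The score-aggregation idea above can be illustrated with a stripped-down sketch. This is not GENI itself: it drops predicate-awareness and the centrality adjustment, and simply computes each node's new importance score as an attention-weighted average over itself and its neighbors; the function names and the logit dictionary are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_scores(scores, neighbors, attn_logits):
    """One simplified GENI-style step: a node's new importance score is an
    attention-weighted average of its own and its neighbors' scores.
    Real GENI makes the attention predicate-aware and adds a centrality
    adjustment; here attn_logits is just a dict keyed by node pairs."""
    new = np.empty_like(scores)
    for v, nbrs in neighbors.items():
        idx = [v] + list(nbrs)
        w = softmax(np.array([attn_logits[(v, u)] for u in idx]))
        new[v] = float(w @ scores[idx])
    return new
```

With uniform attention this reduces to plain neighborhood averaging of scores; the learned logits are what let the model weight some relations more than others.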
Node Embedding over Temporal Graphs
In this work, we present a method for node embedding in temporal graphs. We
propose an algorithm that learns the evolution of a temporal graph's nodes and
edges over time and incorporates these dynamics into a temporal node embedding
framework for different graph prediction tasks. We present a joint loss
function that creates a temporal embedding of a node by learning to combine its
historical temporal embeddings, such that it is optimized for a given task (e.g.,
link prediction). The algorithm is initialized using static node embeddings,
which are then aligned over the representations of a node at different time
points, and eventually adapted for the given task in a joint optimization. We
evaluate the effectiveness of our approach over a variety of temporal graphs
for the two fundamental tasks of temporal link prediction and multi-label node
classification, comparing to competitive baselines and algorithmic
alternatives. Our algorithm shows performance improvements across many of the
datasets and baselines and is found particularly effective for graphs that are
less cohesive, with a lower clustering coefficient.
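Two of the building blocks described above, aligning static snapshot embeddings and combining a node's history into one vector, can be sketched as follows. This is an illustrative simplification, not the paper's joint optimization: alignment is shown as orthogonal Procrustes between consecutive snapshots, and the combination weights are given rather than learned per task.

```python
import numpy as np

def align(X, Y):
    """Orthogonal Procrustes: the rotation R minimizing ||X R - Y||_F,
    one common way to align independently trained snapshot embeddings
    (a stand-in for the paper's alignment step)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def combine_history(hist, weights):
    """Combine a node's embeddings at T time steps (hist: T x d) into a
    single temporal embedding. In the paper the combination is learned
    jointly with the task loss; here the weights are simply given."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(hist, dtype=float)
```

A decaying weight vector, for instance, makes recent snapshots dominate the combined embedding, which is a natural choice for link prediction on evolving graphs.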
Learning Edge Representations via Low-Rank Asymmetric Projections
We propose a new method for embedding graphs while preserving directed edge
information. Learning such continuous-space vector representations (or
embeddings) of nodes in a graph is an important first step for using network
information (from social networks, user-item graphs, knowledge bases, etc.) in
many machine learning tasks.
Unlike previous work, we (1) explicitly model an edge as a function of node
embeddings, and we (2) propose a novel objective, the "graph likelihood", which
contrasts information from sampled random walks with non-existent edges.
Individually, both of these contributions improve the learned representations,
especially when there are memory constraints on the total size of the
embeddings. When combined, our contributions enable us to significantly improve
the state-of-the-art by learning more concise representations that better
preserve the graph structure.
We evaluate our method on a variety of link-prediction tasks including social
networks, collaboration networks, and protein interactions, showing that our
proposed method learns representations with error reductions of up to 76% and
55% on directed and undirected graphs, respectively. In addition, we show that
the representations learned by our method are quite space-efficient, producing
embeddings which have higher structure-preserving accuracy but are 10 times
smaller.
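The core asymmetry trick, scoring an edge through a low-rank projection, can be shown in a few lines. This is a minimal sketch of the idea rather than the paper's full model (which also learns the node embeddings and trains against the graph likelihood); the function name and shapes are assumptions.

```python
import numpy as np

def edge_score(y_u, y_v, L, R):
    """Directed edge score through a low-rank asymmetric projection
    M = L @ R (rank = L.shape[1]): score(u, v) = y_u^T M y_v.
    Because M need not be symmetric, score(u, v) != score(v, u) in
    general, which preserves edge direction."""
    return float(y_u @ L @ R @ y_v)
```

For example, with a rank-1 projection whose product M has a nonzero entry only above the diagonal, an edge scores 1.0 in one direction and 0.0 in the other, something a symmetric dot-product decoder cannot express.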
Gravity-Inspired Graph Autoencoders for Directed Link Prediction
Graph autoencoders (AE) and variational autoencoders (VAE) recently emerged
as powerful node embedding methods. In particular, graph AE and VAE were
successfully leveraged to tackle the challenging link prediction problem,
which aims to determine whether pairs of nodes in a graph are connected
by unobserved edges. However, these models focus on undirected graphs and
therefore ignore the potential direction of the link, which is limiting for
numerous real-life applications. In this paper, we extend the graph AE and VAE
frameworks to address link prediction in directed graphs. We present a new
gravity-inspired decoder scheme that can effectively reconstruct directed
graphs from a node embedding. We empirically evaluate our method on three
different directed link prediction tasks, for which standard graph AE and VAE
perform poorly. We achieve competitive results on three real-world graphs,
outperforming several popular baselines.
Comment: ACM International Conference on Information and Knowledge Management (CIKM 2019)