Graph Force Learning
Feature representation is powerful in network analysis tasks. However, most features are discrete, which poses substantial challenges to their effective use. Recently, increasing attention has been paid to network feature learning, which maps discrete features into a continuous space. Unfortunately, current studies fail to fully preserve structural information in the feature space, due to the random negative sampling strategy used during training. To tackle this problem, we study the feature learning problem and propose a novel force-based graph learning model named GForce, inspired by the spring-electrical model. GForce assumes that nodes exert attractive and repulsive forces on one another, leading to representations that preserve the original structural information. Comprehensive experiments on three benchmark datasets demonstrate the effectiveness of the proposed framework. Furthermore, GForce opens up opportunities to use physics models to capture node interactions for graph learning. © 2020 IEEE
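The spring-electrical idea described above can be illustrated with a toy force-directed embedding: connected nodes attract like springs, all pairs repel like electric charges, and positions settle into a layout reflecting the graph structure. This is a minimal sketch of the general technique only, not the authors' implementation; the function name, force laws, and step sizes are illustrative assumptions.

```python
import numpy as np

def force_embed(adj, dim=2, iters=300, lr=0.05, seed=0):
    """Toy spring-electrical embedding: springs pull neighbors together,
    inverse-square repulsion pushes all node pairs apart."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    pos = rng.normal(size=(n, dim))
    self_mask = np.eye(n, dtype=bool)
    for _ in range(iters):
        diff = pos[:, None, :] - pos[None, :, :]   # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1)
        dist[self_mask] = 1.0                      # avoid division by zero
        attract = -(adj[..., None] * diff)         # spring pull along edges
        repulse = diff / dist[..., None] ** 3      # inverse-square push
        repulse[self_mask] = 0.0
        force = (attract + repulse).sum(axis=1)
        pos += lr * np.clip(force, -1.0, 1.0)      # clipped step for stability
    return pos
```

On a small cycle graph, for instance, adjacent nodes settle closer together than non-adjacent ones, which is the structure-preservation property the abstract refers to.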
Contrastive Learning for Non-Local Graphs with Multi-Resolution Structural Views
Learning node-level representations of heterophilic graphs is crucial for
various applications, including fraudster detection and protein function
prediction. In such graphs, nodes share structural similarity identified by the
equivalence of their connectivity which is implicitly encoded in the form of
higher-order hierarchical information. Contrastive methods are a popular
choice for learning node representations in a graph.
However, existing contrastive methods struggle to capture higher-order graph
structures. To address this limitation, we propose a novel multiview
contrastive learning approach that integrates diffusion filters on graphs. By
incorporating multiple graph views as augmentations, our method captures the
structural equivalence in heterophilic graphs, enabling the discovery of hidden
relationships and similarities not apparent in traditional node
representations. Our approach outperforms baselines on synthetic and real
structural datasets, surpassing the best baseline on Cornell, Texas, and
Wisconsin. Additionally, it consistently achieves superior performance on
proximal tasks, demonstrating its effectiveness in uncovering structural
information and improving downstream applications.
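One common way to build multi-resolution structural views of a graph, as the abstract describes, is graph diffusion. The sketch below uses closed-form Personalized PageRank diffusion as an assumed example of such a filter (the paper's exact diffusion filters may differ): a large teleport probability yields a local view, a small one yields a global view, and the two can serve as contrasting augmentations.

```python
import numpy as np

def ppr_diffusion(adj, alpha=0.15):
    """Personalized PageRank diffusion: S = alpha * (I - (1-alpha) T)^-1,
    where T is the row-normalized adjacency matrix.
    Larger alpha concentrates mass near each node (local view);
    smaller alpha spreads mass over the graph (global view)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    T = adj / deg                                   # row-stochastic transitions
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * T)
```

Two calls with different `alpha` give two structural views of the same graph, each row a diffusion distribution over nodes, usable as paired augmentations in a contrastive objective.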
Simple and Effective Graph Autoencoders with One-Hop Linear Models
Over the last few years, graph autoencoders (AE) and variational autoencoders
(VAE) have emerged as powerful node embedding methods, with promising
performance on challenging tasks such as link prediction and node clustering.
Graph AE, VAE
and most of their extensions rely on multi-layer graph convolutional networks
(GCN) encoders to learn vector space representations of nodes. In this paper,
we show that GCN encoders are actually unnecessarily complex for many
applications. We propose to replace them by significantly simpler and more
interpretable linear models w.r.t. the direct neighborhood (one-hop) adjacency
matrix of the graph, involving fewer operations, fewer parameters and no
activation function. For the two aforementioned tasks, we show that this
simpler approach consistently reaches competitive performances w.r.t. GCN-based
graph AE and VAE for numerous real-world graphs, including all benchmark
datasets commonly used to evaluate graph AE and VAE. Based on these results, we
also question the relevance of repeatedly using these datasets to compare
complex graph AE and VAE.
Comment: Accepted at ECML-PKDD 2020. A preliminary version of this work has
previously been presented at the NeurIPS 2019 workshop on Graph
Representation Learning: arXiv:1910.0094
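The one-hop linear encoder the abstract argues for is easy to state concretely: Z = A_norm X W, with no activation and a single hop, decoded as sigmoid(Z Zᵀ) for link prediction. The sketch below illustrates that shape under stated assumptions; the weight matrix is random here rather than trained, and the function name is made up for illustration.

```python
import numpy as np

def linear_gae_scores(adj, features, dim=16, seed=0):
    """One-hop linear graph AE sketch: Z = A_norm @ X @ W, no activation,
    decoded to edge probabilities via sigmoid(Z @ Z.T).
    W is random for illustration; in practice it is trained to
    minimize a reconstruction loss over the adjacency matrix."""
    n = adj.shape[0]
    a_tilde = adj + np.eye(n)                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    a_norm = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(features.shape[1], dim))
    Z = a_norm @ features @ W                        # single linear hop
    return 1 / (1 + np.exp(-(Z @ Z.T)))              # symmetric edge scores
```

Note how few moving parts there are compared with a multi-layer GCN encoder: one normalized adjacency multiplication, one weight matrix, no nonlinearity.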
Effective Decoding in Graph Auto-Encoder Using Triadic Closure
The (variational) graph auto-encoder and its variants have been widely used for representation learning on graph-structured data. While the encoder is often a powerful graph convolutional network, the decoder reconstructs the graph structure by considering only two nodes at a time, thus ignoring possible interactions among edges. On the other hand, structured prediction, which considers the whole graph simultaneously, is computationally expensive. In this paper, we utilize the well-known triadic closure property exhibited by many real-world networks. We propose the triad decoder, which considers and predicts the three edges involved in a local triad together. The triad decoder can be readily used in any graph-based auto-encoder; in particular, we incorporate it into the (variational) graph auto-encoder. Experiments on link prediction, node clustering and graph generation show that the use of triads leads to more accurate prediction, better clustering, and better preservation of graph characteristics.
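The key contrast with a pairwise decoder is that a triad decoder scores all three edges of a triad from a shared context, so each edge prediction can see the third node. The sketch below is one plausible parameterization of that idea, not the paper's actual architecture; the weight matrix and the way the triad context enters each edge score are illustrative assumptions.

```python
import numpy as np

def triad_decode(z_i, z_j, z_k, W):
    """Sketch of a triad-style decoder: given embeddings of three nodes,
    predict their three edges jointly. A shared triad context h lets each
    pairwise score depend on the whole triad, unlike a pure dot-product
    decoder. W is an illustrative projection matrix."""
    ctx = np.concatenate([z_i, z_j, z_k])            # shared triad context
    h = np.tanh(W @ ctx)
    def edge(a, b):
        # pairwise similarity plus a triad-level correction term
        return 1 / (1 + np.exp(-(a @ b + h.mean())))
    return edge(z_i, z_j), edge(z_j, z_k), edge(z_i, z_k)
```

A standard graph AE decoder would compute sigmoid(z_i @ z_j) in isolation; here the triad term nudges all three edge probabilities together, which is how triadic closure ("a friend of a friend is likely a friend") can be expressed in the decoder.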