3,401 research outputs found
PF-GNN: Differentiable particle filtering based approximation of universal graph representations
Message passing Graph Neural Networks (GNNs) are known to be limited in
expressive power by the 1-WL color-refinement test for graph isomorphism.
Other, more expressive models are either computationally expensive or require
preprocessing to extract structural features from the graph. In this work, we
propose to make GNNs universal by guiding the learning process with exact
isomorphism solver techniques which operate on the paradigm of
Individualization and Refinement (IR), a method to artificially introduce
asymmetry and further refine the coloring when 1-WL stops. Isomorphism solvers
generate a search tree of colorings whose leaves uniquely identify the graph.
However, the tree grows exponentially large and needs hand-crafted pruning
techniques which are not desirable from a learning perspective. We take a
probabilistic view and approximate the search tree of colorings (i.e.
embeddings) by sampling multiple paths from root to leaves of the search tree.
To learn more discriminative representations, we guide the sampling process
with particle filter updates, a principled approach for sequential state
estimation. Our algorithm is end-to-end differentiable, can be applied with any
GNN as backbone and learns richer graph representations with only linear
increase in runtime. Experimental evaluation shows that our approach
consistently outperforms leading GNN models on both synthetic benchmarks for
isomorphism detection and real-world datasets.
Comment: Published as a conference paper at ICLR 202
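The 1-WL color refinement that bounds message-passing GNNs, and that the IR paradigm above extends, can be sketched in a few lines. This is an illustrative implementation of plain 1-WL only (the paper's PF-GNN particle-filter machinery is not reproduced); the function names are ours:

```python
from collections import Counter

def wl_refine(adj, max_iters=10):
    """1-WL color refinement: repeatedly relabel each node by its own
    color together with the multiset of its neighbours' colors, until
    the coloring stabilizes."""
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(max_iters):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # compress signatures back to small integer color ids
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:  # stable coloring reached
            break
        colors = new_colors
    return colors

def wl_histogram(adj):
    """Color histogram; equal histograms mean 1-WL cannot separate."""
    return Counter(wl_refine(adj).values())
```

Note that a 6-cycle and two disjoint triangles, both 2-regular, receive identical histograms under this procedure; that is precisely the kind of limitation the IR-guided approach above targets.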
Weisfeiler and Leman go Hyperbolic: Learning Distance Preserving Node Representations
In recent years, graph neural networks (GNNs) have emerged as a promising
tool for solving machine learning problems on graphs. Most GNNs are members of
the family of message passing neural networks (MPNNs). There is a close
connection between these models and the Weisfeiler-Leman (WL) test of
isomorphism, an algorithm that can successfully test isomorphism for a broad
class of graphs. Recently, much research has focused on measuring the
expressive power of GNNs. For instance, it has been shown that standard MPNNs
are at most as powerful as WL in terms of distinguishing non-isomorphic graphs.
However, these studies have largely ignored the distances between the
representations of nodes/graphs, which are of paramount importance for
learning tasks. In this paper, we define a distance function between nodes
based on the hierarchy produced by the WL algorithm, and propose a model that
learns representations which preserve those distances between nodes. Since the
emerging hierarchy corresponds to a tree, to learn these representations, we
capitalize on recent advances in the field of hyperbolic neural networks. We
empirically evaluate the proposed model on standard node and graph
classification datasets where it achieves competitive performance with
state-of-the-art models.
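Hyperbolic space is used above because tree (hierarchy) distances embed in it with low distortion: points near the boundary of the Poincaré ball are exponentially far apart. A minimal sketch of the standard Poincaré-ball geodesic distance (not the paper's full model; the function name is ours):

```python
import math

def poincare_distance(x, y):
    """Geodesic distance between two points inside the unit Poincare ball:
    d(x, y) = acosh(1 + 2 * |x - y|^2 / ((1 - |x|^2) * (1 - |y|^2))).
    Distances blow up as points approach the boundary, which is what lets
    tree metrics embed with low distortion."""
    sq = lambda v: sum(c * c for c in v)          # squared Euclidean norm
    diff = sq([a - b for a, b in zip(x, y)])      # |x - y|^2
    denom = (1.0 - sq(x)) * (1.0 - sq(y))
    return math.acosh(1.0 + 2.0 * diff / denom)
```

Along a ray from the origin this reduces to d(0, r) = ln((1 + r) / (1 - r)), so doubling r from 0.45 to 0.9 far more than doubles the hyperbolic distance.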
Projected Power Iteration for Network Alignment
The network alignment problem asks for the best correspondence between two
given graphs, so that the largest possible number of edges are matched. This
problem appears in many scientific problems (like the study of protein-protein
interactions) and it is very closely related to the quadratic assignment
problem which has graph isomorphism, traveling salesman and minimum bisection
problems as particular cases. The graph matching problem is NP-hard in general.
However, under some restrictive models for the graphs, algorithms can
approximate the alignment efficiently. In that spirit, the recent work by Feizi
and collaborators introduces EigenAlign, a fast spectral method with convergence
guarantees for Erd\H{o}s-R\'enyi graphs. In this work we propose the algorithm
Projected Power Alignment, which is a projected power iteration version of
EigenAlign. We numerically show it improves the recovery rates of EigenAlign
and we describe the theory that may be used to provide performance guarantees
for Projected Power Alignment.
Comment: 8 page
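A toy sketch of the projected-power-iteration idea: alternate a power step with the two adjacency matrices (which rewards matches whose neighbours also match) and a projection back onto permutation matrices. This is an illustration under simplifying assumptions, using greedy rounding instead of the Hungarian algorithm, and is not the authors' implementation:

```python
def greedy_projection(X):
    """Project a score matrix onto permutation matrices by greedily
    taking the largest remaining entry (cheap stand-in for Hungarian)."""
    n = len(X)
    perm, used_r, used_c = {}, set(), set()
    entries = sorted(((X[i][j], i, j) for i in range(n) for j in range(n)),
                     reverse=True)
    for _, i, j in entries:
        if i not in used_r and j not in used_c:
            perm[i] = j
            used_r.add(i)
            used_c.add(j)
    return perm

def projected_power_alignment(A, B, iters=20):
    """Projected power iteration sketch: power step X <- A X B, then
    snap X back to the nearest permutation matrix."""
    n = len(A)
    X = [[1.0] * n for _ in range(n)]  # uniform start
    for _ in range(iters):
        # power step: neighbours of matched pairs reinforce each other
        AX = [[sum(A[i][k] * X[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        X = [[sum(AX[i][k] * B[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        # projection step
        perm = greedy_projection(X)
        X = [[1.0 if perm[i] == j else 0.0 for j in range(n)] for i in range(n)]
    return greedy_projection(X)
```

On a small path graph and a relabeled copy, the iteration recovers a correspondence that matches every edge.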
How Expressive are Graph Neural Networks in Recommendation?
Graph Neural Networks (GNNs) have demonstrated superior performance on
various graph learning tasks, including recommendation, where they leverage
user-item collaborative filtering signals in graphs. However, theoretical
formulations of their capability are scarce, despite their empirical
effectiveness in state-of-the-art recommender models. Recently, research has
explored the expressiveness of GNNs in general, demonstrating that message
passing GNNs are at most as powerful as the Weisfeiler-Lehman test, and that
GNNs combined with random node initialization are universal. Nevertheless, the
concept of "expressiveness" for GNNs remains vaguely defined. Most existing
works adopt the graph isomorphism test as the metric of expressiveness, but
this graph-level task may not effectively assess a model's ability in
recommendation, where the objective is to distinguish nodes of different
closeness. In this paper, we provide a comprehensive theoretical analysis of
the expressiveness of GNNs in recommendation, considering three levels of
expressiveness metrics: graph isomorphism (graph-level), node automorphism
(node-level), and topological closeness (link-level). We propose the
topological closeness metric to evaluate GNNs' ability to capture the
structural distance between nodes, which aligns closely with the objective of
recommendation. To validate the effectiveness of this new metric in evaluating
recommendation performance, we introduce a learning-less GNN algorithm that is
optimal on the new metric and can be optimal on the node-level metric with
suitable modification. We conduct extensive experiments comparing the proposed
algorithm against various types of state-of-the-art GNN models to explore the
explainability of the new metric in the recommendation task. For
reproducibility, implementation codes are available at
https://github.com/HKUDS/GTE.
Comment: 32nd ACM International Conference on Information and Knowledge
Management (CIKM) 202
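The notion of topological closeness can be illustrated with plain hop distances in the user-item graph. The sketch below is a generic BFS-based stand-in, not the GTE algorithm from the linked repository; the function names are ours:

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distance from `source` to every reachable node: a simple
    notion of structural closeness in a user-item interaction graph."""
    dist = {source: 0}
    q = deque([source])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def rank_items_by_closeness(adj, user, items):
    """Learning-free ranking: items topologically closer to the user
    come first; unreachable items go last."""
    d = bfs_distances(adj, user)
    return sorted(items, key=lambda i: d.get(i, float("inf")))
```

In a bipartite interaction graph, items the user interacted with sit at distance 1, items co-consumed by similar users at distance 3, and so on, which is the link-level signal the metric above formalizes.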
Graph- versus Vector-Based Analysis of a Consensus Protocol
The Paxos distributed consensus algorithm is a challenging case-study for
standard, vector-based model checking techniques. Due to asynchronous
communication, exhaustive analysis may generate very large state spaces even
for small model instances. In this paper, we show the advantages of graph
transformation as an alternative modelling technique. We model Paxos in a rich
declarative transformation language, featuring (among other things) nested
quantifiers, and we validate our model using the GROOVE model checker, a
graph-based tool that exploits isomorphism as a natural way to prune the state
space via symmetry reductions. We compare the results with those obtained by
the standard model checker Spin on the basis of a vector-based encoding of the
algorithm.
Comment: In Proceedings GRAPHITE 2014, arXiv:1407.767
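The isomorphism-based pruning that GROOVE exploits can be illustrated on toy graph states: keep one representative per isomorphism class by computing a canonical form. A brute-force sketch, exponential in the number of nodes and only meant to show the idea, not GROOVE's engineered checks:

```python
from itertools import permutations

def canonical_form(n, edges):
    """Canonical representative of a small undirected graph state: the
    lexicographically smallest sorted edge list over all relabelings.
    Brute force over all n! permutations, so toy-sized only."""
    best = None
    for perm in permutations(range(n)):
        relabeled = {frozenset((perm[a], perm[b])) for a, b in edges}
        key = tuple(sorted(tuple(sorted(e)) for e in relabeled))
        if best is None or key < best:
            best = key
    return best

def prune_symmetric_states(states):
    """Symmetry reduction: keep one state per isomorphism class."""
    seen, kept = set(), []
    for n, edges in states:
        c = canonical_form(n, edges)
        if c not in seen:
            seen.add(c)
            kept.append((n, edges))
    return kept
```

Two states that differ only by a relabeling of nodes collapse to the same canonical form, so the explored state space shrinks by the size of each symmetry class.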
Efficient mining of discriminative molecular fragments
Frequent pattern discovery in structured data is receiving increasing
attention in many scientific application areas. However, the computational
complexity and the large amount of data to be explored often make sequential
algorithms unsuitable. In this context, high-performance distributed computing
becomes a very interesting and promising approach. In this paper we present a
parallel formulation of the frequent subgraph mining problem to discover
interesting patterns in molecular compounds. The application is characterized
by a highly irregular tree-structured computation. No estimate is available
for task workloads, which follow a power-law distribution over a wide range.
The proposed approach allows dynamic resource aggregation and provides fault
and latency tolerance. These features make the distributed application
suitable for multi-domain heterogeneous environments, such as computational
Grids. The distributed application has been evaluated on the well-known
National Cancer Institute's HIV-screening dataset.
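The mining step can be illustrated in miniature: count fragments across molecules and keep those whose support, i.e. the number of molecules containing them, meets a threshold. The sketch below handles only single-bond fragments and is not the paper's parallel miner; the function name and the data encoding are ours:

```python
from collections import Counter

def frequent_fragments(molecules, min_support):
    """Toy frequent-fragment mining: each molecule is a list of bonds
    (atom1, atom2, bond_type). A fragment is counted once per molecule;
    real miners (gSpan-style) then extend frequent fragments into
    larger subgraphs, yielding the irregular search tree described above."""
    support = Counter()
    for bonds in molecules:
        # normalize atom order so C-O and O-C are the same fragment,
        # and deduplicate within a molecule (support counts molecules)
        seen = {tuple(sorted((a, b))) + (t,) for a, b, t in bonds}
        support.update(seen)
    return {frag for frag, count in support.items() if count >= min_support}
```

Discriminative mining, as in the title above, would additionally require the fragment to be frequent in one class (e.g. active compounds) and rare in the other; that refinement is omitted here.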