Distances and Isomorphism between Networks and the Stability of Network Invariants
We develop the theoretical foundations of a network distance that has
recently been applied to various subfields of topological data analysis, namely
persistent homology and hierarchical clustering. While this network distance
has previously appeared in the context of finite networks, we extend the
setting to that of compact networks. The main challenge in this new setting is
the lack of an easy notion of sampling from compact networks; we solve this
problem in the process of obtaining our results. The generality of our setting
means that we automatically establish results for exotic objects such as
directed metric spaces and Finsler manifolds. We identify readily computable
network invariants and establish their quantitative stability under this
network distance. We also discuss the computational complexity involved in
precisely computing this distance, and develop easily-computable lower bounds
by using the identified invariants. By constructing a wide range of explicit
examples, we show that these lower bounds are effective in distinguishing
between networks. Finally, we provide a simple algorithm that computes a lower
bound on the distance between two networks in polynomial time and illustrate
our metric and invariant constructions on a database of random networks and a
database of simulated hippocampal networks.
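The invariant-based lower bounds described above can be illustrated with a deliberately simple sketch (the paper's own invariants are richer than this): for networks given as weight matrices, half the difference of their "diameters" (maximal weights) mirrors the classical metric-space bound d_GH(X, Y) >= |diam X - diam Y| / 2 and is computable in polynomial time. The function names and toy matrices below are illustrative assumptions, not the paper's API.

```python
import numpy as np

def network_diameter(W):
    """Largest weight value of a network given as a square weight matrix."""
    return float(np.max(W))

def diameter_lower_bound(WX, WY):
    """Crude polynomial-time lower bound on a Gromov-Hausdorff-style network
    distance: half the difference of 'diameters' (maximal weights)."""
    return 0.5 * abs(network_diameter(WX) - network_diameter(WY))

# Two toy networks; asymmetric weights are allowed, so these are directed.
WX = np.array([[0.0, 1.0, 2.0],
               [1.5, 0.0, 1.0],
               [2.0, 0.5, 0.0]])
WY = np.array([[0.0, 4.0],
               [3.0, 0.0]])

print(diameter_lower_bound(WX, WY))  # 0.5 * |2.0 - 4.0| = 1.0
```

Because the bound only inspects one scalar invariant per network, it runs in linear time in the number of edges; richer invariants trade computation for discriminative power.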
PlanE: Representation Learning over Planar Graphs
Graph neural networks are prominent models for representation learning over
graphs, where the idea is to iteratively compute representations of nodes of an
input graph through a series of transformations in such a way that the learned
graph function is isomorphism invariant on graphs, which makes the learned
representations graph invariants. On the other hand, it is well-known that
graph invariants learned by this class of models are incomplete: there are
pairs of non-isomorphic graphs which cannot be distinguished by standard graph
neural networks. This is unsurprising given the computational difficulty of
graph isomorphism testing on general graphs, but the situation is different
for special graph classes, for which efficient graph isomorphism testing
algorithms are known, such as planar graphs. The goal of this work is to design
architectures for efficiently learning complete invariants of planar graphs.
Inspired by the classical planar graph isomorphism algorithm of Hopcroft and
Tarjan, we propose PlanE as a framework for planar representation learning.
PlanE includes architectures which can learn complete invariants over planar
graphs while remaining practically scalable. We empirically validate the strong
performance of the resulting model architectures on well-known planar graph
benchmarks, achieving multiple state-of-the-art results.
Comment: Proceedings of the Thirty-Seventh Annual Conference on Advances in Neural Information Processing Systems (NeurIPS 2023). Code and data available at: https://github.com/ZZYSonny/Plan
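The property that makes learned representations graph invariants can be seen in a minimal message-passing sketch (this is a generic GNN layer, not the PlanE architecture; all weights and shapes below are illustrative): summing over neighbours and sum-pooling over nodes makes the readout independent of node ordering.

```python
import numpy as np

def gnn_layer(A, X, W_self, W_neigh):
    """One permutation-equivariant message-passing layer:
    h_v = relu(W_self x_v + W_neigh * sum of neighbour features).
    Summing over neighbours makes the update order-independent."""
    M = A @ X                          # aggregate neighbour features
    return np.maximum(X @ W_self.T + M @ W_neigh.T, 0.0)

def graph_readout(A, X, W_self, W_neigh, layers=2):
    H = X
    for _ in range(layers):
        H = gnn_layer(A, H, W_self, W_neigh)
    return H.sum(axis=0)               # sum pooling: invariant to node order

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = rng.normal(size=(3, 4))

# Relabelling the nodes via a permutation matrix leaves the readout unchanged.
P = np.eye(3)[[2, 0, 1]]
out1 = graph_readout(A, X, W1, W2)
out2 = graph_readout(P @ A @ P.T, P @ X, W1, W2)
print(np.allclose(out1, out2))  # True
```

Invariance is exactly what limits such models: two non-isomorphic graphs that this aggregation cannot separate receive identical readouts, which is the incompleteness the abstract refers to.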
Coupling of quantum angular momenta: an insight into analogic/discrete and local/global models of computation
In the past few years there has been a tumultuous activity aimed at
introducing novel conceptual schemes for quantum computing. The approach
proposed in (Marzuoli A and Rasetti M 2002, 2005a) relies on the (re)coupling
theory of SU(2) angular momenta and can be viewed as a generalization to
arbitrary values of the spin variables of the usual quantum-circuit model based
on `qubits' and Boolean gates. Computational states belong to
finite-dimensional Hilbert spaces labelled by both discrete and continuous
parameters, and unitary gates may depend on quantum numbers ranging over finite
sets of values as well as continuous (angular) variables. Such a framework is
an ideal playground to discuss discrete (digital) and analogic computational
processes, together with their relationships occurring when a consistent
semiclassical limit takes place on discrete quantum gates. When working with
purely discrete unitary gates, the simulator is naturally modelled as families
of quantum finite-state machines, which in turn represent discrete versions of
topological quantum computation models. We argue that our model embodies a sort
of unifying paradigm for computing inspired by Nature and, even more
ambitiously, a universal setting in which suitably encoded quantum symbolic
manipulations of combinatorial, topological and algebraic problems might find
their `natural' computational reference model.
Comment: 17 pages, 1 figure; Workshop `Natural processes and models of computation', Bologna (Italy), June 16-18 2005; to appear in Natural Computing
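The (re)coupling bookkeeping underlying such models can be illustrated with the Clebsch-Gordan series: coupling SU(2) spins j1 and j2 yields total angular momenta j from |j1 - j2| to j1 + j2, with matching Hilbert-space dimensions. The sketch below is a plain dimension check, not the paper's computational model.

```python
from fractions import Fraction

def coupled_spins(j1, j2):
    """Total angular momenta j arising when coupling SU(2) spins j1 and j2:
    j runs from |j1 - j2| to j1 + j2 in integer steps (Clebsch-Gordan series)."""
    j1, j2 = Fraction(j1), Fraction(j2)
    j, out = abs(j1 - j2), []
    while j <= j1 + j2:
        out.append(j)
        j += 1
    return out

def dim(j):
    """Dimension 2j + 1 of the spin-j representation."""
    return int(2 * Fraction(j) + 1)

# Coupling two spin-1/2 'qubits' yields a singlet (j=0) and a triplet (j=1).
js = coupled_spins(Fraction(1, 2), Fraction(1, 2))
# Dimensions are consistent: 2 x 2 = 1 + 3.
print(dim(Fraction(1, 2)) * dim(Fraction(1, 2)) == sum(dim(j) for j in js))  # True
```

For arbitrary spins the same identity (2j1+1)(2j2+1) = sum over j of (2j+1) holds, which is what lets computational states range over finite sets of quantum numbers of varying size.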
Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting
While Graph Neural Networks (GNNs) have achieved remarkable results in a
variety of applications, recent studies exposed important shortcomings in their
ability to capture the structure of the underlying graph. It has been shown
that the expressive power of standard GNNs is bounded by the Weisfeiler-Leman
(WL) graph isomorphism test, from which they inherit proven limitations such as
the inability to detect and count graph substructures. On the other hand, there
is significant empirical evidence, e.g. in network science and bioinformatics,
that substructures are often intimately related to downstream tasks. To this
end, we propose "Graph Substructure Networks" (GSN), a topologically-aware
message passing scheme based on substructure encoding. We theoretically analyse
the expressive power of our architecture, showing that it is strictly more
expressive than the WL test, and provide sufficient conditions for
universality. Importantly, we do not attempt to adhere to the WL hierarchy;
this allows us to retain multiple attractive properties of standard GNNs such
as locality and linear network complexity, while being able to disambiguate
even hard instances of graph isomorphism. We perform an extensive experimental
evaluation on graph classification and regression tasks and obtain
state-of-the-art results in diverse real-world settings including molecular
graphs and social networks. The code is publicly available at
https://github.com/gbouritsas/graph-substructure-networks
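The idea of substructure encoding can be sketched with the simplest case, per-node triangle counts (GSN itself supports general substructures; the helper below is an illustrative stand-in, not the repository's API). In a GSN-style scheme these counts would be appended to the initial node features before message passing.

```python
import numpy as np

def triangle_counts(A):
    """Per-node triangle counts via diag(A^3) / 2 for an undirected simple
    graph: each triangle through v contributes two closed 3-walks at v."""
    A = np.asarray(A, dtype=float)
    return np.diag(A @ A @ A) / 2.0

# Sanity check on the complete graph K4: every node lies in C(3, 2) = 3
# triangles, a count the 1-WL colouring alone cannot produce.
K4 = np.ones((4, 4)) - np.eye(4)
print(triangle_counts(K4))  # [3. 3. 3. 3.]
```

Because the counts are computed once as a preprocessing step, the message-passing scheme itself keeps the locality and linear complexity of standard GNNs while gaining expressive power beyond the WL test.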
Spectral Reconstruction and Isomorphism of graphs using variable neighbourhood search
The Euclidean distance between the eigenvalue sequences of graphs G and H, on the same number of vertices, is called the spectral distance between G and H. This notion is the basis of a heuristic algorithm for reconstructing a graph with prescribed spectrum. By using a graph Γ constructed from cospectral graphs G and H, we can ensure that G and H are isomorphic if and only if the spectral distance between Γ and G + K2 is zero. This construction is exploited to design a heuristic algorithm for testing graph isomorphism. We present preliminary experimental results obtained by implementing these algorithms in conjunction with a meta-heuristic known as a variable neighbourhood search.
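The spectral distance itself is straightforward to compute; the sketch below (illustrative code, not the paper's implementation) also shows why a zero spectral distance alone does not certify isomorphism, which is what motivates the Γ construction: the star K_{1,4} and the disjoint union C4 + K1 are cospectral but not isomorphic.

```python
import numpy as np

def spectral_distance(A_G, A_H):
    """Euclidean distance between the sorted adjacency spectra of two
    graphs on the same number of vertices."""
    s_G = np.sort(np.linalg.eigvalsh(np.asarray(A_G, dtype=float)))
    s_H = np.sort(np.linalg.eigvalsh(np.asarray(A_H, dtype=float)))
    return float(np.linalg.norm(s_G - s_H))

# Classic cospectral non-isomorphic pair on 5 vertices.
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1          # K_{1,4}: spectrum {2, 0, 0, 0, -2}
c4_k1 = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    c4_k1[u, v] = c4_k1[v, u] = 1      # C4 + isolated vertex: same spectrum

print(round(spectral_distance(star, c4_k1), 10))  # 0.0, yet not isomorphic
```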
Machine learning and invariant theory
Inspired by constraints from physical law, equivariant machine learning
restricts the learning to a hypothesis class where all the functions are
equivariant with respect to some group action. Irreducible representations or
invariant theory are typically used to parameterize the space of such
functions. In this article, we introduce the topic and explain a couple of
methods to explicitly parameterize equivariant functions that are being used in
machine learning applications. In particular, we explicate a general procedure,
attributed to Malgrange, to express all polynomial maps between linear spaces
that are equivariant under the action of a group G, given a characterization
of the invariant polynomials on a bigger space. The method also parametrizes
smooth equivariant maps in the case that G is a compact Lie group.
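A small worked instance of parameterizing equivariant maps via invariants (a classical example, not the Malgrange procedure itself): for rotations of the plane, |x|^2 is an invariant polynomial, so any map of the form f(x) = g(|x|^2) x is equivariant. The coefficients below are arbitrary illustrative choices.

```python
import numpy as np

def equivariant_map(x, c0=0.7, c1=-0.3):
    """A polynomial map R^2 -> R^2 equivariant under rotations:
    f(x) = (c0 + c1 * |x|^2) * x.  Since |x|^2 is rotation-invariant,
    f(R x) = R f(x) for every rotation matrix R."""
    return (c0 + c1 * np.dot(x, x)) * x

theta = 0.9
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([1.2, -0.4])

print(np.allclose(equivariant_map(R @ x), R @ equivariant_map(x)))  # True
```

The general procedure described in the article works in the same spirit: invariant polynomials on a larger space supply the scalar "coefficients" multiplying a basis of equivariant building blocks.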