Generalized Graphon Process: Convergence of Graph Frequencies in Stretched Cut Distance
Graphons have traditionally served as limit objects for dense graph
sequences, with the cut distance serving as the metric for convergence.
However, sparse graph sequences converge to the trivial graphon under the
conventional definition of cut distance, which makes this framework inadequate
for many practical applications. In this paper, we utilize the concepts of
generalized graphons and stretched cut distance to describe the convergence of
sparse graph sequences. Specifically, we consider a random graph process
generated from a generalized graphon. This random graph process converges to
the generalized graphon in stretched cut distance. We use this random graph
process to model a growing sparse graph and prove the convergence of the
adjacency matrices' eigenvalues. We supplement our findings with experimental
validation. Our results indicate the possibility of transfer learning between
sparse graphs.
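To make the eigenvalue claim concrete, here is a minimal numerical sketch, not the paper's construction: sample sparse random graphs of growing size from a fixed kernel with vanishing edge density, and watch the rescaled top adjacency eigenvalues stabilize. The kernel W(x, y) = xy and the density rho_n = n^{-1/2} are illustrative assumptions, not values taken from the paper.

import numpy as np

def sample_graph(n, rho, rng):
    """Sample an n-node graph with edge probability rho * W(x_i, x_j)."""
    x = rng.uniform(size=n)                    # latent vertex positions
    probs = rho * np.outer(x, x)               # rho * W(x_i, x_j) with W(x, y) = xy
    upper = rng.uniform(size=(n, n)) < probs   # Bernoulli edge draws
    adj = np.triu(upper, k=1)                  # keep upper triangle, no self-loops
    return (adj | adj.T).astype(float)         # symmetrize

rng = np.random.default_rng(0)
for n in [200, 800, 3200]:
    rho = n ** -0.5                            # sparsity: rho_n -> 0, n * rho_n -> infinity
    A = sample_graph(n, rho, rng)
    # Rescale by n * rho so the spectrum has a nontrivial limit;
    # for W(x, y) = xy the leading value should approach 1/3.
    top = np.linalg.eigvalsh(A)[-3:][::-1] / (n * rho)
    print(n, np.round(top, 3))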
On the Stability of Graph Convolutional Neural Networks under Edge Rewiring
Graph neural networks are experiencing a surge of popularity within the
machine learning community due to their ability to adapt to non-Euclidean
domains and instil inductive biases. Despite this, their stability, i.e., their
robustness to small perturbations in the input, is not yet well understood.
Although there exist some results showing the stability of graph neural
networks, most take the form of an upper bound on the magnitude of change due
to a perturbation in the graph topology. However, the change in the graph
topology captured by existing bounds tends not to be expressed in terms of
structural properties, limiting our understanding of the models' robustness
properties. In this work, we develop an interpretable upper bound showing
that graph neural networks are stable to rewiring between high-degree nodes.
This bound, and further research into bounds of this type, will deepen our
understanding of the stability properties of graph neural networks.

Comment: To appear at the 46th International Conference on Acoustics, Speech
and Signal Processing (ICASSP 2021).
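The qualitative claim can be sketched with a small experiment; this is not the paper's bound, and the graph model, features, and one-layer filter below are all illustrative assumptions. On a synthetic graph with heavy-tailed degrees, toggling an edge between the two highest-degree nodes should perturb a degree-normalized graph filter less than toggling one between the two lowest-degree nodes.

import numpy as np

def normalized_adj(A):
    """GCN-style symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

rng = np.random.default_rng(1)
n = 300
w = rng.pareto(2.0, size=n) + 1.0                  # heavy-tailed expected degrees
P = np.minimum(1.0, 0.05 * np.outer(w, w) / w.mean() ** 2)
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops

X = rng.normal(size=(n, 8))                        # random node features
base = normalized_adj(A) @ X                       # one linear filter pass

def rewire_effect(i, j):
    """Toggle edge (i, j) and return the change in the filter output."""
    B = A.copy()
    B[i, j] = B[j, i] = 1.0 - B[i, j]
    return np.linalg.norm(normalized_adj(B) @ X - base)

deg = A.sum(axis=1)
hi, lo = np.argsort(deg)[-2:], np.argsort(deg)[:2]
print("toggle edge between high-degree pair:", rewire_effect(*hi))
print("toggle edge between low-degree pair: ", rewire_effect(*lo))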
Mathematical Foundations of Machine Learning (hybrid meeting)
Machine learning has achieved remarkable successes in various applications, but there is wide agreement that a mathematical theory of deep learning is still missing. Recently, first mathematical results have been derived in areas such as mathematical statistics and statistical learning. Any mathematical theory of machine learning will have to combine tools from different fields, such as nonparametric statistics, high-dimensional statistics, empirical process theory, and approximation theory. The main objective of the workshop was to bring together leading researchers contributing to the mathematics of machine learning.
A focus of the workshop was theory for deep neural networks. Mathematically speaking, neural networks define function classes with a rich mathematical structure that are extremely difficult to analyze because of the non-linearity in their parameters. Until very recently, most existing theoretical results could not cope with many of the distinctive characteristics of deep networks, such as multiple hidden layers or the ReLU activation function. Other topics of the workshop were procedures for quantifying the uncertainty of machine learning methods and the mathematics of data privacy.
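As a minimal illustration of the function classes mentioned above (the architecture and sizes are assumptions for exposition, not taken from the workshop): a one-hidden-layer ReLU network f(x) = W2 relu(W1 x + b1) + b2 is linear in (W2, b2) but non-linear in (W1, b1), which is the source of the analytical difficulty the report refers to.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def two_layer_net(x, W1, b1, W2, b2):
    """Evaluate a one-hidden-layer ReLU network at input x."""
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(2)
d, m = 4, 16                                   # input dimension, hidden width
W1, b1 = rng.normal(size=(m, d)), rng.normal(size=m)
W2, b2 = rng.normal(size=(1, m)), rng.normal(size=1)
print(two_layer_net(rng.normal(size=d), W1, b1, W2, b2))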