We study the convergence of message passing graph neural networks on random
graph models to their continuous counterpart as the number of nodes tends to
infinity. Until now, this convergence was only known for architectures with
aggregation functions in the form of degree-normalized means. We extend such
results to a large class of aggregation functions that encompasses all
classically used message passing graph neural networks, such as attention-based
message passing or max convolutional message passing on top of
(degree-normalized) convolutional message passing. Under mild assumptions, we
give non-asymptotic, high-probability bounds that quantify this convergence.
Our main result is based on McDiarmid's inequality. Interestingly, we treat
the case where the aggregation is a coordinate-wise maximum separately, as it
necessitates a very different proof technique and yields a qualitatively
different convergence rate.
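
For illustration only, and not taken from the paper itself, the following
NumPy sketch shows the three aggregation functions named above on a dense
adjacency matrix A and node-feature matrix H: the degree-normalized mean, an
attention-style softmax over neighbors, and the coordinate-wise maximum. All
function and variable names here are hypothetical.

    import numpy as np

    def degree_normalized_mean(A, H):
        # Each node averages the features of its neighbors.
        deg = A.sum(axis=1, keepdims=True)
        return (A @ H) / np.maximum(deg, 1.0)

    def attention_aggregation(A, H, w):
        # Softmax-weighted neighbor average; w is a hypothetical score vector.
        scores = H @ w
        expd = np.exp(scores - scores.max())       # stabilized exponentials
        weights = A * expd[None, :]                 # zero outside the edge set
        norm = np.maximum(weights.sum(axis=1, keepdims=True), 1e-12)
        return (weights / norm) @ H

    def coordinate_wise_max(A, H):
        # The coordinate-wise maximum, the case treated separately above.
        n, d = H.shape
        out = np.zeros((n, d))
        for i in range(n):
            nbrs = A[i] > 0
            if nbrs.any():
                out[i] = H[nbrs].max(axis=0)
        return out

    # Small usage example on a random undirected graph.
    rng = np.random.default_rng(0)
    A = (rng.random((6, 6)) < 0.4).astype(float)
    np.fill_diagonal(A, 0.0)
    A = np.maximum(A, A.T)                          # symmetric adjacency
    H = rng.standard_normal((6, 3))
    w = rng.standard_normal(3)
    print(coordinate_wise_max(A, H).shape)          # (6, 3)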