Recently, many works have studied the expressive power of graph neural networks
(GNNs) by linking it to the 1-dimensional Weisfeiler--Leman algorithm
(1-WL). Here, the 1-WL is a well-studied
heuristic for the graph isomorphism problem, which iteratively colors or
partitions a graph's vertex set. While this connection has led to significant
advances in understanding and enhancing GNNs' expressive power, it does not
provide insights into their generalization performance, i.e., their ability to
make meaningful predictions beyond the training set. In this paper, we study
GNNs' generalization ability through the lens of Vapnik--Chervonenkis (VC)
dimension theory in two settings, focusing on graph-level predictions. First,
when no upper bound on the graphs' order is known, we show that the bitlength
of GNNs' weights tightly bounds their VC dimension. Further, we derive an upper
bound for GNNs' VC dimension using the number of colors produced by the
1-WL. Second, when an upper bound on the graphs' order is
known, we show a tight connection between the number of graphs distinguishable
by the 1-WL and GNNs' VC dimension. Our empirical study
confirms the validity of our theoretical findings.
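
As context for the color-refinement procedure the abstract refers to, the
following minimal Python sketch (our illustration, not code from the paper;
the names wl_1 and wl_distinguishable are ours) runs 1-WL by repeatedly
rehashing each vertex's color together with the sorted multiset of its
neighbors' colors, and calls two graphs distinguishable when their stable
color histograms differ. Refinement is run on the disjoint union so that
color ids remain comparable across the two graphs.

    from collections import Counter


    def wl_1(adjacency):
        """1-WL color refinement on a graph given as {vertex: list_of_neighbors}."""
        colors = {v: 0 for v in adjacency}  # start from a uniform coloring
        for _ in range(len(adjacency)):  # stabilizes after at most |V| rounds
            # Signature = own color plus the sorted multiset of neighbor colors.
            sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adjacency[v])))
                    for v in adjacency}
            # Injectively relabel signatures with small integer color ids.
            relabel = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
            new_colors = {v: relabel[sigs[v]] for v in adjacency}
            # Refinement only splits classes; an unchanged class count means
            # the partition is stable.
            stable = len(set(new_colors.values())) == len(set(colors.values()))
            colors = new_colors
            if stable:
                break
        return colors


    def wl_distinguishable(g1, g2):
        """True if 1-WL tells g1 and g2 apart via their stable color histograms."""
        # Refine on the disjoint union so color ids are comparable across graphs.
        union = {("a", v): [("a", u) for u in nbrs] for v, nbrs in g1.items()}
        union.update({("b", v): [("b", u) for u in nbrs] for v, nbrs in g2.items()})
        colors = wl_1(union)
        hist_a = Counter(c for (tag, _), c in colors.items() if tag == "a")
        hist_b = Counter(c for (tag, _), c in colors.items() if tag == "b")
        return hist_a != hist_b


    triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    path = {0: [1], 1: [0, 2], 2: [1]}
    print(wl_distinguishable(triangle, path))  # True: degree multisets differ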