Graph neural networks (GNNs) update the hidden representations of vertices
(called Vertex-GNNs) or the hidden representations of edges (called Edge-GNNs)
by processing and pooling the information of neighboring vertices and edges and
combining it to incorporate graph topology. When learning resource allocation
policies, GNNs cannot perform well if their expressive power is weak, i.e., if
they cannot differentiate all input features such as channel matrices. In this
paper, we analyze the expressive power of the Vertex-GNNs and Edge-GNNs for
learning three representative wireless policies: link scheduling, power
control, and precoding policies. We find that the expressive power of the GNNs
depends on the linearity and output dimensions of the processing and combination
functions. When linear processors are used, the Vertex-GNNs cannot
differentiate all channel matrices due to the loss of channel information,
while the Edge-GNNs can. When learning the precoding policy, even the
Vertex-GNNs with non-linear processors may lack strong expressive
power due to the dimension compression. We proceed to provide necessary
conditions for the GNNs to learn the precoding policy well. Simulation results
validate the analyses and show that the Edge-GNNs can achieve the same
performance as the Vertex-GNNs with much lower training and inference time.
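The information-loss argument for linear processors can be illustrated with a minimal numeric sketch (the matrices, function names, and sum-pooling scheme below are hypothetical illustrations, not the paper's exact construction): two distinct channel matrices with identical row and column sums are mapped to identical vertex features by a single linear-processing, sum-pooling Vertex-GNN layer, whereas an Edge-GNN, which keeps one hidden feature per channel coefficient, still sees different inputs.

```python
import numpy as np

# Two distinct 2x2 channel matrices with identical row sums (3, 7)
# and identical column sums (4, 6).
H1 = np.array([[1., 2.],
               [3., 4.]])
H2 = np.array([[0., 3.],
               [4., 3.]])

def vertex_gnn_aggregate(H):
    """One linear-processing, sum-pooling Vertex-GNN layer (toy sketch):
    each transmitter vertex sums its outgoing channel gains and each
    receiver vertex sums its incoming channel gains."""
    tx_feat = H.sum(axis=1)  # per-transmitter aggregate
    rx_feat = H.sum(axis=0)  # per-receiver aggregate
    return tx_feat, rx_feat

# The layer maps both (different) channel matrices to the same vertex
# features, illustrating the loss of channel information.
print(vertex_gnn_aggregate(H1))
print(vertex_gnn_aggregate(H2))

# An Edge-GNN retains a hidden feature per edge, so its inputs
# (the channel coefficients themselves) remain distinguishable.
print(np.array_equal(H1, H2))  # False
```

In this toy, the indistinguishability holds for one layer with sum pooling; the paper's analysis addresses the general multi-layer case.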