Current graph neural networks (GNNs) for node classification tend to focus
only on nodewise scores and are evaluated solely by nodewise metrics. This
limits uncertainty estimation on graphs, since nodewise marginals do not fully
characterize the joint distribution given the graph
structure. In this work, we propose novel edgewise metrics, namely the edgewise
expected calibration error (ECE) and the agree/disagree ECEs, which provide
criteria for uncertainty estimation on graphs beyond the nodewise setting. Our
experiments demonstrate that the proposed edgewise metrics can complement the
nodewise results and yield additional insights. Moreover, we show that GNN
models which consider the structured prediction problem on graphs tend to have
better uncertainty estimations, which illustrates the benefit of going beyond
the nodewise setting.

Comment: Presented at the NeurIPS 2022 New Frontiers in Graph Learning Workshop
(NeurIPS GLFrontiers 2022).
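The abstract does not spell out the metric definitions, but the standard expected calibration error (ECE) that the edgewise variants build on can be sketched as follows: predictions are binned by confidence, and ECE is the bin-size-weighted average gap between mean confidence and accuracy within each bin. In the edgewise setting, one would presumably apply the same binning to per-edge quantities (e.g., confidence that the two endpoint predictions agree); the exact edgewise construction here is an assumption, not the paper's definition.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence, then average the
    |accuracy - confidence| gap per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Bin index in [0, n_bins - 1]; confidence 1.0 falls in the top bin.
    idx = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece
```

A perfectly calibrated batch (e.g., predictions at confidence 0.8 that are correct exactly 80% of the time) yields an ECE of zero; miscalibration shows up as a positive gap.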