In contrastive learning, the choice of ``view'' controls the information that
the representation captures and influences the performance of the model.
However, leading graph contrastive learning methods generally produce views via
random corruption or learning, which may discard essential information and even
alter the semantics of the input. An anchor view that preserves the essential
information of input graphs for contrastive learning has hardly been
investigated. In this paper, based on the theory of graph
information bottleneck, we derive the definition of this anchor view; put
differently, \textit{the anchor view that retains the essential information of the input graph
should have minimal structural uncertainty}. Furthermore, guided by
structural entropy, we implement the anchor view, termed \textbf{SEGA}, for
graph contrastive learning. We extensively validate the proposed anchor view on
various graph classification benchmarks under unsupervised, semi-supervised,
and transfer learning settings, achieving significant performance gains over
state-of-the-art methods.
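For reference, structural entropy makes the above notion of structural uncertainty concrete. As a minimal sketch (using the standard one-dimensional definition rather than the paper's full formulation), for a graph $G=(V,E)$ with $m=|E|$ edges and node degrees $d_v$,
\[
  \mathcal{H}^{1}(G) \;=\; -\sum_{v \in V} \frac{d_v}{2m} \log_2 \frac{d_v}{2m},
\]
and higher-dimensional variants defined over encoding trees refine this measure; the anchor view is then the view with minimal such structural uncertainty.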