Graph contrastive learning (GCL) has recently emerged as a promising approach
for graph representation learning. Some existing methods adopt a 1-vs-K scheme
that constructs one positive and K negative samples for each graph, but an
appropriate K is difficult to choose. Methods that forgo negative samples
instead require additional strategies to avoid model collapse, and such
strategies only alleviate the problem to some extent. Both drawbacks harm the
generalizability and efficiency of the model. In this paper, to address these
issues, we propose a novel graph
self-contrast framework GraphSC, which only uses one positive and one negative
sample, and adopts the triplet loss as its objective. Specifically, self-contrast
has two implications. First, GraphSC generates both the positive and the
negative view of a graph sample from the graph itself, via graph augmentation
functions of different intensities, and uses them for self-contrast.
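For intuition, here is a minimal sketch of this view construction in Python,
assuming edge dropping as the augmentation; the two drop rates (weak for the
positive view, strong for the negative view) and the helper drop_edges are
hypothetical, not values or names prescribed by GraphSC.

    import random

    def drop_edges(edges, drop_rate, seed=0):
        # Keep each edge independently with probability (1 - drop_rate).
        rng = random.Random(seed)
        return [e for e in edges if rng.random() >= drop_rate]

    # Toy graph given as an edge list.
    graph = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

    # Weak augmentation -> positive view; strong augmentation -> negative view.
    # The rates 0.1 and 0.7 are hypothetical intensities, not values from GraphSC.
    positive_view = drop_edges(graph, drop_rate=0.1, seed=1)
    negative_view = drop_edges(graph, drop_rate=0.7, seed=2)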
Second, GraphSC uses the Hilbert-Schmidt Independence Criterion (HSIC) to
factorize the representations into multiple factors, and proposes a masked
self-contrast mechanism to better separate positive and negative samples.
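For reference, the biased empirical HSIC estimator on n paired representations
is tr(KHLH)/(n-1)^2; the sketch below uses linear kernels for simplicity, which
is our assumption rather than a detail fixed by the framework.

    import numpy as np

    def hsic(X, Y):
        # Biased empirical HSIC estimate tr(K H L H) / (n - 1)^2, where K and L
        # are kernel matrices over the two representations and H centers them.
        n = X.shape[0]
        K = X @ X.T                          # linear kernel on X (an assumption)
        L = Y @ Y.T                          # linear kernel on Y (an assumption)
        H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    # Two hypothetical groups of factors for a batch of 8 graph representations.
    X = np.random.randn(8, 4)
    Y = np.random.randn(8, 4)
    independence_penalty = hsic(X, Y)  # driven toward zero during training

An HSIC value near zero indicates approximate independence between the two
groups of factors, which is what the factorization encourages.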
Further, since the triplet loss only optimizes the relative distance between
the anchor and its positive/negative samples, it cannot guarantee a small
absolute distance between the anchor and the positive sample. Therefore, we
explicitly reduce the absolute distance between the anchor and the positive
sample to accelerate convergence.
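A minimal sketch of such an objective, combining the standard triplet loss with
an explicit pull term on the anchor-positive distance; the weight lam, the
margin, and the function name triplet_with_pull are hypothetical, illustrative
choices rather than the paper's exact formulation.

    import numpy as np

    def triplet_with_pull(anchor, pos, neg, margin=1.0, lam=0.5):
        # d_ap, d_an: Euclidean distances from the anchor to the positive
        # and negative samples.
        d_ap = np.linalg.norm(anchor - pos)
        d_an = np.linalg.norm(anchor - neg)
        # The triplet term constrains only the relative distance d_ap - d_an;
        # the extra lam * d_ap term explicitly shrinks the absolute
        # anchor-positive distance to speed up convergence.
        return max(0.0, d_ap - d_an + margin) + lam * d_ap

    anchor = np.array([0.0, 0.0])
    pos = np.array([0.2, 0.0])
    neg = np.array([1.0, 1.0])
    loss = triplet_with_pull(anchor, pos, neg)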
Finally, we conduct extensive experiments to evaluate the performance of
GraphSC against 19 other state-of-the-art methods in both unsupervised and
transfer learning settings.