In recent years, contrastive learning has emerged as a dominant
self-supervised paradigm, attracting considerable research interest in
graph learning. Graph contrastive learning (GCL) aims to embed augmented
anchor samples close to each other while pushing the embeddings of other
samples (negative samples) apart. However, existing GCL methods require a
large and diverse set of negative samples to ensure embedding quality, and
recent studies typically treat all samples other than the anchor and its
positive sample as negatives, potentially introducing false negatives
(samples that share the anchor's class). This practice also incurs a heavy
computational burden, with a time complexity of O(N²), which is particularly
unaffordable for large graphs. To address these
deficiencies, we leverage rank learning and propose a simple yet effective
model, GraphRank. Specifically, we first generate two graph views through
corruption. Then, we compute the similarity of each node pair (the anchor
node and its positive counterpart) across the two views; an arbitrary node
in the second view is selected as a negative node, and its similarity with
the anchor node is computed as well. On top of this, we introduce rank-based
learning over the similarity scores, which relieves the false-negative
problem and reduces the time complexity from O(N²) to O(N). Moreover, we conduct extensive
experiments across multiple graph tasks, demonstrating that GraphRank
performs favorably against other cutting-edge GCL methods.
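As a minimal illustration of the ranking idea described above (not the paper's actual implementation), the sketch below assumes cosine similarity, a margin-based hinge objective, and a single randomly permuted negative per anchor; the choice of similarity, margin value, and negative-sampling scheme are all illustrative assumptions. Drawing one negative per anchor is what brings the number of similarity computations down to O(N).

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity between two (N, d) embedding matrices.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def rank_loss(z1, z2, margin=0.5, rng=None):
    """Hypothetical rank-based loss for two corrupted graph views.

    z1, z2 : (N, d) node embeddings from the two views; row i in each
    view corresponds to the same node (anchor and positive sample).
    """
    # Positive similarity: node i in view 1 vs. node i in view 2.
    s_pos = cosine_sim(z1, z2)
    # One randomly chosen node from view 2 serves as the negative for
    # each anchor, so only N negative similarities are computed (O(N)).
    rng = np.random.default_rng(0) if rng is None else rng
    idx = rng.permutation(len(z2))
    s_neg = cosine_sim(z1, z2[idx])
    # Ranking objective: the positive score should exceed the negative
    # score by at least `margin`; violations are penalized linearly.
    return float(np.mean(np.maximum(0.0, margin - s_pos + s_neg)))
```

Because only a permutation of the second view is needed, no N×N similarity matrix is ever materialized, in contrast to InfoNCE-style objectives.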