Graph neural networks (GNNs) have recently emerged as effective
collaborative filtering (CF) approaches for recommender systems. The key idea
of GNN-based recommender systems is to recursively perform message passing
along user-item interaction edges to refine encoded embeddings, relying on
sufficient and high-quality training data. However, user behavior data in
practical recommendation scenarios is often noisy and exhibits a skewed
distribution. To address these issues, some recommendation approaches, such as
SGL, leverage self-supervised learning to improve user representations. These
approaches conduct self-supervised learning by creating contrastive views,
but they depend on the tedious trial-and-error selection of augmentation
methods. In this paper, we propose a novel Adaptive Graph Contrastive Learning
(AdaGCL) framework that conducts data augmentation with two adaptive
contrastive view generators to better empower the CF paradigm. Specifically, we
use two trainable view generators, a graph generative model and a graph
denoising model, to create adaptive contrastive views. With these two adaptive
views, AdaGCL introduces additional high-quality training signals
into the CF paradigm, helping to alleviate data sparsity and noise issues.
Extensive experiments on three real-world datasets demonstrate the superiority
of our model over various state-of-the-art recommendation methods. Our model
implementation code is available at https://github.com/HKUDS/AdaGCL.
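To make the idea of contrasting two views concrete, the sketch below shows a generic InfoNCE-style objective between node embeddings produced by two views of the same interaction graph. This is only an illustrative example under common graph contrastive learning conventions, not the exact loss or generator architecture used in AdaGCL; the function and variable names (info_nce_loss, view1, view2, temperature) are hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """Generic InfoNCE-style contrastive loss between two view embeddings.

    z1, z2: [num_nodes, dim] embeddings of the same nodes under two
    contrastive views (e.g., one produced by a generative view and one by
    a denoised view). The same node across views is the positive pair;
    all other nodes in the batch serve as negatives.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature              # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # matching indices are positives
    return F.cross_entropy(logits, labels)

# Toy usage: embeddings of 4 users under two (hypothetical) adaptive views.
view1 = torch.randn(4, 64)
view2 = torch.randn(4, 64)
loss = info_nce_loss(view1, view2)
```

In practice, such a contrastive term would be added to the main recommendation loss (e.g., BPR), so the two adaptive views supply auxiliary self-supervised signals alongside the observed user-item interactions.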