On the Adversarial Robustness of Graph Contrastive Learning Methods
Contrastive learning (CL) has emerged as a powerful framework for learning
representations of images and text in a self-supervised manner while enhancing
model robustness against adversarial attacks. More recently, researchers have
extended the principles of contrastive learning to graph-structured data,
giving birth to the field of graph contrastive learning (GCL). However, whether
GCL methods can deliver the same advantages in adversarial robustness as their
counterparts in the image and text domains remains an open question. In this
paper, we introduce a comprehensive evaluation protocol tailored to assess
the adversarial robustness of GCL models. We subject these models to adaptive
adversarial attacks targeting the graph structure, specifically in the evasion
scenario. We evaluate node and graph classification tasks using diverse
real-world datasets and attack strategies. With our work, we aim to offer
insights into the robustness of GCL methods and hope to open avenues for
potential future research directions.

Comment: Accepted at NeurIPS 2023 New Frontiers in Graph Learning Workshop (NeurIPS GLFrontiers 2023).
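To make the evasion scenario concrete, below is a minimal, self-contained sketch of this kind of evaluation: a frozen model is attacked at test time by flipping edges in the graph's adjacency matrix, and accuracy is compared before and after the perturbation. Everything here is an illustrative assumption, not the paper's actual protocol: the stand-in `predict` model (one mean-aggregation step plus a random linear head) and the crude `greedy_evasion_attack` are hypothetical placeholders for a pretrained GCL encoder and the adaptive attacks used in the paper.

```python
import numpy as np

# Illustrative sketch of an evasion-scenario robustness check: the model is
# frozen, and the attacker perturbs the graph structure at test time.
# All components are assumptions for demonstration, not the paper's method.

rng = np.random.default_rng(0)
n, d, c = 30, 8, 3                          # nodes, feature dim, classes

X = rng.normal(size=(n, d))                 # node features
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric adjacency, no self-loops
y = rng.integers(0, c, size=n)              # ground-truth node labels

W = rng.normal(size=(d, c))                 # frozen "pretrained" classifier head

def predict(A):
    """One mean-aggregation step plus a linear head -- a stand-in for a
    frozen GCL encoder with a downstream node classifier."""
    A_hat = A + np.eye(n)                   # add self-loops
    deg = A_hat.sum(1, keepdims=True)
    H = (A_hat / deg) @ X                   # average over neighbourhoods
    return (H @ W).argmax(1)

def accuracy(A):
    return (predict(A) == y).mean()

def greedy_evasion_attack(A, budget):
    """Greedily flip the single edge that lowers accuracy most, up to
    `budget` flips -- a crude adaptive structure attack for illustration."""
    A = A.copy()
    for _ in range(budget):
        best, best_acc = None, accuracy(A)
        for i in range(n):
            for j in range(i + 1, n):
                A[i, j] = A[j, i] = 1 - A[i, j]    # try flipping edge (i, j)
                acc = accuracy(A)
                if acc < best_acc:
                    best, best_acc = (i, j), acc
                A[i, j] = A[j, i] = 1 - A[i, j]    # undo the trial flip
        if best is None:                           # no single flip helps
            break
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]            # commit the best flip

    return A

print(f"clean accuracy:    {accuracy(A):.3f}")
A_adv = greedy_evasion_attack(A, budget=10)
print(f"attacked accuracy: {accuracy(A_adv):.3f}")
```

The key property of the evasion setting is visible in the sketch: model parameters (`W`) never change; only the test-time input graph does, and robustness is read off as the gap between clean and attacked accuracy.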