Anomaly detection via graph contrastive learning

Abstract

Graph contrastive learning (GCL) techniques have shown strong performance in many settings, such as social networks and recommendation systems, which makes them promising candidates for more accurate anomaly detection. Existing non-contrastive learning-based approaches do not fully account for dynamic agents that camouflage themselves: these agents either establish frequent associations with regular objects or deliberately avoid forming relationships with the remaining objects, giving rise to head and tail anomalies, respectively. In terms of graph topology, such behavior makes the graph more imbalanced. To handle both types of anomalies, we propose GCAD (Graph Contrasted Anomaly Detection), a novel ensemble of two graph contrastive learning approaches: (1) we learn representations and embeddings with a Siamese architecture that minimizes/maximizes similarity between graph pairs at different scales while capturing their hierarchical structure; and (2) we integrate a self-supervised learning framework that combines graph augmentations (such as node and edge dropout) with contrastive learning to learn robust graph embeddings. Here, the main idea is to generate multiple views of a graph through augmentation and then maximize the agreement between these views with a contrastive loss. We show that our approach outperforms competing approaches in detecting both head and tail anomalies across six datasets from the citation and finance domains. Ablation studies further demonstrate the importance of GCAD's components as well as its robustness.
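The self-supervised component described in the abstract follows the standard view-based GCL recipe: perturb the graph with node/edge dropout to obtain two views, encode both, and pull matching node embeddings together while pushing others apart. The sketch below illustrates this generic recipe only; it is not the authors' GCAD implementation, and the function names, dropout rates, and the NT-Xent loss used here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def drop_edges(edge_index, drop_prob=0.2):
    # Randomly remove a fraction of edges (2 x E index tensor) to form one view.
    keep = torch.rand(edge_index.size(1)) >= drop_prob
    return edge_index[:, keep]

def drop_node_features(x, drop_prob=0.1):
    # Randomly zero out whole node feature vectors (node dropout).
    keep = (torch.rand(x.size(0), 1) >= drop_prob).float()
    return x * keep

def nt_xent_loss(z1, z2, temperature=0.5):
    # NT-Xent contrastive loss: the two embeddings of the same node
    # (one per augmented view) are positives; all other pairs are negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                  # (2n, d)
    sim = torch.mm(z, z.t()) / temperature          # (2n, 2n) similarity logits
    sim.fill_diagonal_(float('-inf'))               # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n),    # positive of row i is i + n
                         torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Hypothetical usage with some GNN encoder `encoder(x, edge_index) -> embeddings`:
#   z1 = encoder(drop_node_features(x), drop_edges(edge_index))
#   z2 = encoder(drop_node_features(x), drop_edges(edge_index))
#   loss = nt_xent_loss(z1, z2)
```

Nodes whose agreement across views remains low after training are natural anomaly candidates under this recipe; how GCAD combines this signal with its Siamese, multi-scale component is detailed in the full paper.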

This paper was published in eResearch@Ozyegin.

Having an issue?

Is data on this page outdated, violates copyrights or anything else? Report the problem now and we will take corresponding actions after reviewing your request.