RecAD: Towards A Unified Library for Recommender Attack and Defense
In recent years, recommender systems have become a ubiquitous part of our
daily lives, yet they face a high risk of being attacked due to their growing
commercial and social value. Despite significant research progress in
recommender attack and defense, the field lacks a widely recognized
benchmarking standard, leading to unfair performance comparisons and
experiments of limited credibility. To address this, we propose RecAD, a
unified library that aims to establish an open benchmark for recommender
attack and defense. RecAD takes an initial step toward a unified benchmarking
pipeline for reproducible research by integrating diverse datasets, standard
source code, hyper-parameter settings, running logs, attack knowledge, attack
budget, and evaluation results. The benchmark is designed to be comprehensive
and sustainable, covering attack, defense, and evaluation tasks and enabling
more researchers to easily follow and contribute to this promising field. RecAD
will drive more solid and reproducible research on recommender system attack
and defense, reduce redundant effort among researchers, and ultimately
increase the credibility and practical value of recommender attack and defense.
The project is released at https://github.com/gusye1234/recad.
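To make the attack side of such a benchmark concrete, here is a minimal sketch of a classic random profile-injection ("shilling") attack, the kind of adversary a library like RecAD standardizes. The function names and settings below are illustrative assumptions, not RecAD's actual API.

```python
# Sketch of a random profile-injection ("shilling") attack on a rating matrix.
# All names here are hypothetical illustrations, not RecAD's API.
import numpy as np

def inject_random_profiles(ratings, target_item, n_fake_users=50,
                           n_filler=20, max_rating=5.0, rng=None):
    """Append fake user rows that promote `target_item` while rating
    random filler items to mimic genuine behaviour."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n_items = ratings.shape[1]
    fake = np.zeros((n_fake_users, n_items))
    for row in fake:
        fillers = rng.choice(n_items, size=n_filler, replace=False)
        row[fillers] = rng.integers(1, 6, size=n_filler)  # random filler ratings
        row[target_item] = max_rating                     # push the target item
    return np.vstack([ratings, fake])

# Usage: 1,000 genuine users, 500 items, promote item 42 with 50 fake profiles.
R = np.random.default_rng(1).integers(0, 6, size=(1000, 500)).astype(float)
R_poisoned = inject_random_profiles(R, target_item=42)
```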
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability
Graph Neural Networks (GNNs) have developed rapidly in recent years. Due to
their great ability to model graph-structured data, GNNs are widely used in
various applications, including high-stakes scenarios such as financial
analysis, traffic prediction, and drug discovery. Despite their great potential
to benefit humans in the real world, recent studies show that GNNs can leak
private information, are vulnerable to adversarial attacks, can inherit and
magnify societal bias from training data, and lack interpretability, all of
which risk causing unintentional harm to users and society. For example,
existing works demonstrate that attackers can fool GNNs into producing the
outcomes they desire through unnoticeable perturbations of the training graph,
and that GNNs trained on social networks may embed discrimination in their
decision process, strengthening undesirable societal biases. Consequently,
research on trustworthy GNNs is emerging to prevent harm from GNN models and
to increase users' trust in GNNs. In this paper, we give a comprehensive
survey of GNNs in the computational aspects of privacy, robustness, fairness,
and explainability. For each aspect, we give a taxonomy of the related
methods and formulate general frameworks for the multiple categories of
trustworthy GNNs. We also discuss the future research directions of each aspect
and the connections between these aspects that help achieve trustworthiness.
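To make the robustness concern above concrete, the sketch below probes how a trained GCN's accuracy degrades under random structural perturbation. The dense normalized-adjacency formulation and the harness names are assumptions for illustration, not code from any surveyed work.

```python
# A minimal robustness probe: flip a few edges and re-measure test accuracy.
import torch

def normalize_adj(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

def flip_random_edges(A, n_flips):
    """Toggle n_flips random node pairs: a crude structural perturbation."""
    A = A.clone()
    pairs = torch.randint(0, A.size(0), (n_flips, 2))
    for i, j in pairs:
        A[i, j] = A[j, i] = 1.0 - A[i, j]  # add the edge if absent, else remove
    return A

@torch.no_grad()
def accuracy(model, A, X, y, mask):
    """Test accuracy of a trained GCN `model(A_norm, X) -> logits` (assumed)."""
    logits = model(normalize_adj(A), X)
    return (logits[mask].argmax(1) == y[mask]).float().mean().item()

# Usage with a trained model: compare clean vs. perturbed test accuracy.
# acc_clean    = accuracy(model, A, X, y, test_mask)
# acc_attacked = accuracy(model, flip_random_edges(A, 50), X, y, test_mask)
```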
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks
Graph Neural Networks (GNNs), a generalization of neural networks to
graph-structured data, are often implemented via message passing between the
entities of a graph. While GNNs are effective for node classification, link
prediction and graph classification, they are vulnerable to adversarial
attacks, i.e., a small perturbation to the structure can lead to a non-trivial
performance degradation. In this work, we propose Uncertainty Matching GNN
(UM-GNN), which aims to improve the robustness of GNN models, particularly
against poisoning attacks on the graph structure, by leveraging epistemic
uncertainties from the message passing framework. More specifically, we propose
to build a surrogate predictor that does not directly access the graph
structure, but systematically extracts reliable knowledge from a standard GNN
through a novel uncertainty-matching strategy. Interestingly, this decoupling
makes UM-GNN immune to evasion attacks by design and yields significantly
improved robustness against poisoning attacks. In empirical studies with
standard benchmarks and a suite of global and targeted attacks, we demonstrate
the effectiveness of UM-GNN compared to existing baselines, including the
state-of-the-art robust GCN.
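The following is a minimal sketch of the uncertainty-matching idea, under the assumption that epistemic uncertainty is estimated with Monte Carlo dropout: a feature-only surrogate is distilled from a standard GNN, with nodes down-weighted where the GNN is unsure. It illustrates the general strategy, not the authors' exact objective.

```python
# Sketch: distill a graph-free surrogate from a GNN, weighted by uncertainty.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_stats(gnn, A, X, n_samples=20):
    """Mean prediction and per-node predictive entropy under MC dropout."""
    gnn.train()  # keep dropout layers active at inference time
    probs = torch.stack([F.softmax(gnn(A, X), dim=1) for _ in range(n_samples)])
    mean = probs.mean(0)
    entropy = -(mean * mean.clamp_min(1e-9).log()).sum(1)  # uncertainty proxy
    return mean, entropy

def uncertainty_matching_loss(surrogate_logits, gnn_probs, entropy):
    """Per-node KL(GNN || surrogate), down-weighted where the GNN is unsure."""
    log_q = F.log_softmax(surrogate_logits, dim=1)
    kl = F.kl_div(log_q, gnn_probs, reduction="none").sum(1)
    weights = torch.exp(-entropy)  # confident nodes teach the surrogate more
    return (weights * kl).mean()

# Training idea: the surrogate consumes only node features X, never A.
# loss = uncertainty_matching_loss(surrogate(X), *mc_dropout_stats(gnn, A, X))
```

Because the surrogate never consumes the adjacency, perturbing the structure at test time cannot change its predictions, which is the sense in which the decoupling confers immunity to evasion attacks by design.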
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network
Federated Graph Neural Network (FedGNN) has recently emerged as a rapidly
growing research topic, as it integrates the strengths of graph neural networks
and federated learning to enable advanced machine learning applications without
direct access to sensitive data. Despite its advantages, the distributed nature
of FedGNN introduces additional vulnerabilities, particularly backdoor attacks
stemming from malicious participants. Although graph backdoor attacks have been
explored, the compounded complexity introduced by the combination of GNNs and
federated learning has hindered a comprehensive understanding of these attacks,
as existing research lacks extensive benchmark coverage and in-depth analysis
of critical factors. To address these limitations, we propose Bkd-FedGNN, a
benchmark for backdoor attacks on FedGNN. Specifically, Bkd-FedGNN decomposes
the graph backdoor attack into trigger generation and injection steps and
extends the attack to the node-level federated setting, resulting in a
unified framework that covers both node-level and graph-level classification
tasks. Moreover, we thoroughly investigate the impact of multiple critical
factors in backdoor attacks on FedGNN. These factors are categorized into
global-level and local-level factors, including data distribution, the number
of malicious attackers, attack time, overlapping rate, trigger size, trigger
type, trigger position, and poisoning rate. Finally, we conduct comprehensive
evaluations on 13 benchmark datasets and 13 critical factors, comprising 1,725
experimental configurations for node-level and graph-level tasks from six
domains. These experiments encompass over 8,000 individual tests, allowing us
to provide a thorough evaluation and insightful observations that advance our
understanding of backdoor attacks on FedGNN. The Bkd-FedGNN benchmark is
publicly available at https://github.com/usail-hkust/BkdFedGCN.
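A minimal sketch of the two steps the benchmark decomposes, under simplifying assumptions: trigger generation (here a fixed clique, one of the simple trigger types such studies vary) and trigger injection with the label flipped to the attacker's target class. Names are illustrative, not the benchmark's API.

```python
# Sketch of a graph-level backdoor: generate a trigger, inject it, flip labels.
import numpy as np

def make_clique_trigger(k=4):
    """Trigger generation: adjacency of a k-node complete subgraph."""
    return np.ones((k, k)) - np.eye(k)

def inject_trigger(A, trigger, attach_node=0):
    """Trigger injection: append the trigger nodes and wire one of them
    to `attach_node` of the victim graph."""
    n, k = A.shape[0], trigger.shape[0]
    P = np.zeros((n + k, n + k))
    P[:n, :n] = A
    P[n:, n:] = trigger
    P[attach_node, n] = P[n, attach_node] = 1.0  # bridge edge into the trigger
    return P

def poison_dataset(graphs, labels, poisoning_rate, target_class, trigger, seed=0):
    """Backdoor a fraction `poisoning_rate` of training graphs, flipping
    their labels to the attacker's target class."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(graphs), int(poisoning_rate * len(graphs)), replace=False)
    for i in idx:
        graphs[i] = inject_trigger(graphs[i], trigger)
        labels[i] = target_class
    return graphs, labels
```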
Deceptive Fairness Attacks on Graphs via Meta Learning
We study deceptive fairness attacks on graphs to answer the following
question: how can we launch poisoning attacks on a graph learning model to
exacerbate bias deceptively? We formulate this question as a bi-level
optimization problem and propose a meta learning-based framework named FATE.
FATE is broadly applicable with respect to various fairness definitions and
graph learning models, as well as arbitrary choices of manipulation operations.
We further instantiate FATE to attack statistical parity and individual
fairness on graph neural networks. We conduct extensive experimental
evaluations on real-world datasets in the task of semi-supervised node
classification. The experimental results demonstrate that FATE could amplify
the bias of graph neural networks with or without fairness considerations
while maintaining utility on the downstream task. We hope this paper provides
insights into the adversarial robustness of fair graph learning and can shed
light on designing robust and fair graph learning in future studies.
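A toy sketch of the bi-level mechanism, under stated simplifications: differentiate a bias measure (the statistical parity gap) through a short differentiable inner training loop of a linear two-class GCN, then greedily flip the adjacency entry with the largest meta-gradient. This is a Metattack-style illustration of the idea, not FATE itself.

```python
# Sketch: meta-gradient poisoning that amplifies statistical parity gap.
import torch
import torch.nn.functional as F

def normalize_adj(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(1).pow(-0.5)
    return d[:, None] * A_hat * d[None, :]

def parity_gap(probs, sensitive):
    """Statistical parity gap: the bias measure the attacker amplifies."""
    return (probs[sensitive == 0, 1].mean() - probs[sensitive == 1, 1].mean()).abs()

def meta_attack_step(A, X, y, sensitive, train_mask, inner_steps=10, lr=0.1):
    """One greedy poisoning step: flip the node pair with the largest
    meta-gradient of the bias objective (toy linear two-class GCN inside)."""
    A = A.clone().requires_grad_(True)
    W = torch.zeros(X.size(1), 2, requires_grad=True)
    for _ in range(inner_steps):  # inner problem: differentiable training
        logits = normalize_adj(A) @ X @ W
        loss = F.cross_entropy(logits[train_mask], y[train_mask])
        W = W - lr * torch.autograd.grad(loss, W, create_graph=True)[0]
    probs = F.softmax(normalize_adj(A) @ X @ W, dim=1)
    grad = torch.autograd.grad(parity_gap(probs, sensitive), A)[0]  # meta-gradient
    i, j = divmod(grad.abs().argmax().item(), A.size(0))
    with torch.no_grad():
        A[i, j] = A[j, i] = 1.0 - A[i, j]  # flip the most bias-amplifying pair
    return A.detach()
```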
Robust Graph Neural Networks using Weighted Graph Laplacian
Graph neural networks (GNNs) achieve remarkable performance in a variety of
application domains. However, GNNs are vulnerable to noise and adversarial
attacks in the input data, and making them robust against such attacks is an
important problem. Existing defense methods for GNNs are computationally
demanding and not scalable. In this paper, we propose a generic framework for
robustifying GNNs, known as Weighted Laplacian GNN (RWL-GNN), which combines
weighted graph Laplacian learning with the GNN
implementation. The proposed method benefits from the positive
semi-definiteness of the Laplacian matrix, feature smoothness, and latent
features by formulating a unified optimization framework, which ensures that
adversarial/noisy edges are discarded and the remaining connections in the
graph are appropriately weighted. For demonstration, the experiments are
conducted with the graph convolutional neural network (GCNN) architecture;
however, the proposed framework is easily amenable to any existing GNN
architecture. Simulation results on benchmark datasets establish the efficacy
of the proposed method in both accuracy and computational efficiency. Code can
be accessed at https://github.com/Bharat-Runwal/RWL-GNN.
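A minimal sketch of the core construction, under illustrative assumptions: parameterize the graph by nonnegative edge weights so that the Laplacian L = D - W is positive semi-definite by design, then trade feature smoothness tr(X^T L X) against fidelity to the observed (possibly poisoned) adjacency. The objective is a simplification, not the paper's exact formulation.

```python
# Sketch: learn nonnegative edge weights with a PSD-by-construction Laplacian.
import torch

def laplacian_from_weights(w, n):
    """Build L = D - W from upper-triangular edge weights; L is positive
    semi-definite whenever w >= 0."""
    iu = torch.triu_indices(n, n, offset=1)
    W = torch.zeros(n, n)
    W[iu[0], iu[1]] = w
    W = W + W.T
    return torch.diag(W.sum(1)) - W

def denoise_graph(A_obs, X, alpha=1.0, steps=200, lr=0.05):
    """Learn edge weights trading feature smoothness tr(X^T L X) against
    fidelity to the observed adjacency; noisy edges shrink toward zero."""
    n = A_obs.size(0)
    iu = torch.triu_indices(n, n, offset=1)
    w_obs = A_obs[iu[0], iu[1]]
    w = w_obs.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        L = laplacian_from_weights(w.clamp_min(0.0), n)
        loss = torch.trace(X.T @ L @ X) + alpha * ((w - w_obs) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    W = torch.zeros(n, n)
    W[iu[0], iu[1]] = w.detach().clamp_min(0.0)
    return W + W.T  # cleaned weighted adjacency to feed any GNN
```

Because every feasible L here is positive semi-definite, the smoothness term is bounded below, and weights on edges joining dissimilar nodes are driven toward zero rather than merely penalized.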