Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning
Graph Neural Networks (GNNs) have drawn significant attention over the years
and have been broadly applied to essential applications that demand strong
robustness or rigorous security standards, such as product recommendation and
user behavior modeling. In these scenarios, exploiting a GNN's vulnerabilities
to degrade its performance becomes highly attractive to adversaries.
Previous attacks mainly focus on structural perturbations or node injections
to existing graphs, guided by gradients from surrogate models. Although they
deliver promising results, several limitations remain. Structural perturbation
attacks require the adversary to manipulate the existing graph topology, which
is impractical in most circumstances. Node injection attacks, though more
practical, currently require training surrogate models to simulate a white-box
setting, which leads to significant performance degradation when the surrogate
architecture diverges from the actual victim model. To bridge these gaps, in
this paper we study the problem of black-box node injection attacks without
training a potentially misleading surrogate model. Specifically, we model the
node injection attack as a Markov decision process and propose
Gradient-free Graph Advantage Actor Critic (G2A2C), a reinforcement learning
framework in the style of advantage actor-critic. By directly querying the
victim model, G2A2C learns to inject highly malicious nodes under extremely
limited attack budgets while keeping the injected node features close to the
original feature distribution. Through comprehensive experiments on eight
widely used benchmark datasets with different characteristics, we demonstrate
the superior performance of G2A2C over existing state-of-the-art attackers.
Source code is publicly available at: https://github.com/jumxglhf/G2A2C (AAAI 2023)
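To make the setup above concrete, here is a minimal sketch of one gradient-free, query-based node injection episode in an advantage actor-critic spirit. This is not the authors' implementation: `query_victim` is a dummy one-layer mean-aggregation model standing in for a real black-box API, and the actor is reduced to a Gaussian policy around a hypothetical learned mean (`actor_mean`); the critic and the policy updates are omitted.

```python
# Hedged sketch of a gradient-free, query-based node injection episode.
# All names (query_victim, attack_episode, actor_mean) are hypothetical;
# the real G2A2C actor/critic networks and their updates are not shown.
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, N_CLASSES = 8, 3
W = rng.standard_normal((FEAT_DIM, N_CLASSES))  # weights of the dummy victim

def query_victim(features, adj, target):
    """Black-box oracle: class probabilities for `target`. Here a dummy
    one-layer mean-aggregation GNN; in practice this is an API query."""
    nbrs = np.flatnonzero(adj[target])
    h = features[[target, *nbrs]].mean(axis=0)   # average self + neighbors
    logits = h @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def attack_episode(features, adj, target, actor_mean, budget=1):
    """Inject `budget` nodes wired to `target`; the reward that would train
    the actor/critic is the drop in the victim's confidence on `target`."""
    p_before = query_victim(features, adj, target).max()
    for _ in range(budget):
        # Gaussian policy around actor_mean keeps injected features close
        # to the original distribution (the unnoticeability constraint).
        x_new = actor_mean + 0.1 * rng.standard_normal(features.shape[1])
        features = np.vstack([features, x_new])
        adj = np.pad(adj, ((0, 1), (0, 1)))
        adj[-1, target] = adj[target, -1] = 1    # connect new node to victim
    p_after = query_victim(features, adj, target).max()
    return p_before - p_after                    # RL reward signal

X = rng.standard_normal((10, FEAT_DIM))
A = np.zeros((10, 10))
A[0, 3] = A[3, 0] = 1                            # give the target a neighbor
print(attack_episode(X, A, target=3, actor_mean=X.mean(axis=0)))
```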
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability
Graph Neural Networks (GNNs) have developed rapidly in recent years. Owing to
their strong ability to model graph-structured data, GNNs are widely used in
various applications, including high-stakes scenarios such as financial
analysis, traffic prediction, and drug discovery. Despite their great
potential to benefit humans in the real world, recent studies show that GNNs
can leak private information, are vulnerable to adversarial attacks, can
inherit and magnify societal bias from training data, and lack
interpretability, all of which risk causing unintentional harm to users and society. For
example, existing works demonstrate that attackers can fool GNNs into
producing the outcomes they desire through unnoticeable perturbations to the
training graph. GNNs trained on social networks may embed discrimination in
their decision process, strengthening undesirable societal bias. Consequently,
research on trustworthy GNNs is emerging across these aspects to prevent harm
from GNN models and increase users' trust in GNNs. In this paper, we give a comprehensive
survey of GNNs in the computational aspects of privacy, robustness, fairness,
and explainability. For each aspect, we give the taxonomy of the related
methods and formulate the general frameworks for the multiple categories of
trustworthy GNNs. We also discuss future research directions for each aspect
and the connections between these aspects that help achieve trustworthiness.
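As a toy illustration of the robustness point above (not taken from the survey), a single flipped edge can noticeably shift what a GNN layer computes for a target node, even though the change is hard to spot in a large graph. The sketch below shows one round of mean-neighbor aggregation before and after the flip:

```python
# Toy illustration (not from the survey) of why one edge flip can change a
# GNN's output: one round of mean-neighbor aggregation for a target node.
import numpy as np

X = np.array([[1.0, 0.0],   # node 0 (target)
              [1.0, 0.1],   # node 1: similar to the target
              [0.0, 1.0]])  # node 2: very different class profile
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)

def aggregate(A, X, v):
    """Mean of v's neighbors' features (a simplified GNN layer)."""
    nbrs = np.flatnonzero(A[v])
    return X[nbrs].mean(axis=0) if nbrs.size else X[v]

print(aggregate(A, X, 0))  # [1.0, 0.1]  -- stays class-consistent
A[0, 2] = A[2, 0] = 1      # adversary flips a single edge (adds 0-2)
print(aggregate(A, X, 0))  # [0.5, 0.55] -- pulled toward the other class
```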
Cluster Attack: Query-based Adversarial Attacks on Graphs with Graph-Dependent Priors
While deep neural networks have achieved great success in graph analysis,
recent work has shown that they are vulnerable to adversarial attacks. Compared
with adversarial attacks on image classification, performing adversarial
attacks on graphs is more challenging because of the discrete and
non-differentiable nature of a graph's adjacency matrix. In this work, we
propose Cluster Attack -- a Graph Injection Attack (GIA) on node
classification, which injects fake nodes into the original graph to degrade
the performance of graph neural networks (GNNs) on certain victim nodes while
affecting the other nodes as little as possible. We demonstrate that a GIA
problem can be equivalently formulated as a graph clustering problem; thus, the
discrete optimization problem of the adjacency matrix can be solved in the
context of graph clustering. In particular, we propose to measure the
similarity between victim nodes by a metric of Adversarial Vulnerability, which
is related to how the victim nodes will be affected by the injected fake node,
and to cluster the victim nodes accordingly. Our attack is performed in a
practical and unnoticeable query-based black-box manner, with access to only a
few nodes on the graph. Theoretical analysis and extensive
experiments demonstrate the effectiveness of our method by fooling the node
classifiers with only a small number of queries. (IJCAI 2022, Long Presentation)
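The following is a hedged sketch of the clustering step, under stated assumptions: `query(features, adj, v)` is the black-box victim returning class probabilities, and `vulnerability` is a hypothetical proxy (the response of v's prediction to a probe injection) rather than the paper's exact Adversarial Vulnerability metric. Victims with similar responses are grouped so each injected node can serve a whole cluster.

```python
# Hedged sketch of the Cluster Attack idea: group victim nodes by a
# vulnerability-style similarity, then spend one injected node per cluster.
import numpy as np
from sklearn.cluster import KMeans

def vulnerability(query, features, adj, v, probe):
    """Proxy metric: how much attaching a trial node `probe` shifts v's
    black-box prediction (stands in for the paper's exact metric)."""
    p0 = query(features, adj, v)
    n = features.shape[0]
    adj2 = np.pad(adj, ((0, 1), (0, 1)))
    adj2[n, v] = adj2[v, n] = 1
    p1 = query(np.vstack([features, probe]), adj2, v)
    return p1 - p0  # response vector; similar victims respond similarly

def cluster_attack(query, features, adj, victims, n_inject, probe):
    """Cluster victims by their responses; each cluster then shares one
    injected node (the per-node query-based optimization is omitted)."""
    responses = np.stack([vulnerability(query, features, adj, v, probe)
                          for v in victims])
    labels = KMeans(n_clusters=n_inject, n_init=10).fit_predict(responses)
    return {c: [v for v, l in zip(victims, labels) if l == c]
            for c in range(n_inject)}

# Toy usage with a dummy oracle (mean aggregation + fixed linear head).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))

def toy_query(features, adj, v):
    nbrs = np.flatnonzero(adj[v])
    h = features[[v, *nbrs]].mean(axis=0)
    e = np.exp(h @ W - (h @ W).max())
    return e / e.sum()

X, A = rng.standard_normal((12, 4)), np.zeros((12, 12))
print(cluster_attack(toy_query, X, A, victims=[0, 1, 2, 3], n_inject=2,
                     probe=rng.standard_normal(4)))
```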