Transferring Robustness for Graph Neural Network Against Poisoning Attacks
Graph neural networks (GNNs) are widely used in many applications. However,
their robustness against adversarial attacks is criticized. Prior studies show
that using unnoticeable modifications on graph topology or nodal features can
significantly reduce the performance of GNNs. Designing graph neural networks
that are robust to poisoning attacks is very challenging, and several efforts
have been made. Existing work aims to reduce the negative impact of adversarial
edges using only the poisoned graph, which is sub-optimal because it cannot
discriminate adversarial edges from normal ones. On the other hand, clean
graphs from similar domains as the target poisoned graph are usually available
in the real world. By perturbing these clean graphs, we create supervised
knowledge to train the ability to detect adversarial edges so that the
robustness of GNNs is elevated. However, such potential for clean graphs is
neglected by existing work. To this end, we investigate a novel problem of
improving the robustness of GNNs against poisoning attacks by exploring clean
graphs. Specifically, we propose PA-GNN, which relies on a penalized
aggregation mechanism that directly restricts the negative impact of adversarial
edges by assigning them lower attention coefficients. To optimize PA-GNN for a
poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to
penalize perturbations using clean graphs and their adversarial counterparts,
and transfers such ability to improve the robustness of PA-GNN on the poisoned
graph. Experimental results on four real-world datasets demonstrate the
robustness of PA-GNN against poisoning attacks on graphs. Code and data are
available here: https://github.com/tangxianfeng/PA-GNN.
Comment: Accepted by WSDM 2020.
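The penalized aggregation idea can be sketched as follows. This is a minimal illustration, not the paper's method: the dot-product attention, the `beta` penalty weight, and the per-edge `penalty_scores` are assumptions standing in for the learned attention and the penalty that PA-GNN obtains through meta-optimization on perturbed clean graphs.

```python
import numpy as np

def penalized_attention_aggregate(h, edges, penalty_scores, beta=1.0):
    """Aggregate neighbor features with attention coefficients that are
    penalized for edges suspected to be adversarial.

    h              : (n, d) node feature matrix
    edges          : list of directed edges (src, dst)
    penalty_scores : per-edge scores in [0, 1]; higher = more suspicious
    beta           : penalty strength (hypothetical knob)
    """
    n, _ = h.shape
    out = np.zeros_like(h)
    for dst in range(n):
        nbrs = [(s, p) for (s, t), p in zip(edges, penalty_scores) if t == dst]
        if not nbrs:
            out[dst] = h[dst]  # no incoming edges: keep own features
            continue
        # raw attention logit (dot product) minus the adversarial penalty
        logits = np.array([h[s] @ h[dst] - beta * p for s, p in nbrs])
        att = np.exp(logits - logits.max())
        att /= att.sum()  # softmax over the neighborhood
        out[dst] = sum(a * h[s] for a, (s, _) in zip(att, nbrs))
    return out
```

A heavily penalized edge receives a near-zero attention coefficient after the softmax, so its endpoint contributes little to the aggregated representation.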
Stealing Links from Graph Neural Networks
Graph data, such as chemical networks and social networks, may be deemed
confidential/private because the data owner often spends lots of resources
collecting the data or the data contains sensitive information, e.g., social
relationships. Recently, neural networks were extended to graph data, which are
known as graph neural networks (GNNs). Due to their superior performance, GNNs
have many applications, such as healthcare analytics, recommender systems, and
fraud detection. In this work, we propose the first attacks to steal a graph
from the outputs of a GNN model that is trained on the graph. Specifically,
given a black-box access to a GNN model, our attacks can infer whether there
exists a link between any pair of nodes in the graph used to train the model.
We call our attacks link stealing attacks. We propose a threat model to
systematically characterize an adversary's background knowledge along three
dimensions which in total leads to a comprehensive taxonomy of 8 different link
stealing attacks. We propose multiple novel methods to realize these 8 attacks.
Extensive experiments on 8 real-world datasets show that our attacks are
effective at stealing links, e.g., AUC (area under the ROC curve) is above 0.95
in multiple cases. Our results indicate that the outputs of a GNN model reveal
rich information about the structure of the graph used to train the model.
Comment: To appear in the 30th USENIX Security Symposium, August 2021, Vancouver, B.C., Canada.
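The core intuition behind link stealing can be sketched in a few lines. Because a GNN aggregates over neighbors, connected nodes tend to receive similar output posteriors. The snippet below scores node pairs by cosine similarity of their black-box posteriors; the choice of cosine similarity and a fixed threshold is a simplification of the paper's attacks, which combine several distance metrics and learned classifiers across 8 threat-model variants.

```python
import numpy as np

def link_stealing_scores(posteriors, pairs):
    """Score node pairs for link existence from black-box outputs only.

    posteriors : (n, k) matrix of per-node class posteriors queried
                 from the target GNN
    pairs      : list of (u, v) node-index pairs to score
    Returns a score per pair; the adversary predicts "link" when the
    score exceeds a threshold of their choosing.
    """
    scores = []
    for u, v in pairs:
        pu, pv = posteriors[u], posteriors[v]
        # cosine similarity of the two posterior vectors
        cos = pu @ pv / (np.linalg.norm(pu) * np.linalg.norm(pv))
        scores.append(cos)
    return np.array(scores)
```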
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks
Graph Neural Networks (GNNs), a generalization of neural networks to
graph-structured data, are often implemented using message passing between
entities of a graph. While GNNs are effective for node classification, link
prediction and graph classification, they are vulnerable to adversarial
attacks, i.e., a small perturbation to the structure can lead to a non-trivial
performance degradation. In this work, we propose Uncertainty Matching GNN
(UM-GNN), which aims to improve the robustness of GNN models, particularly
against poisoning attacks to the graph structure, by leveraging epistemic
uncertainties from the message passing framework. More specifically, we propose
to build a surrogate predictor that does not directly access the graph
structure, but systematically extracts reliable knowledge from a standard GNN
through a novel uncertainty-matching strategy. Interestingly, this uncoupling
makes UM-GNN immune to evasion attacks by design, and achieves significantly
improved robustness against poisoning attacks. Using empirical studies with
standard benchmarks and a suite of global and target attacks, we demonstrate
the effectiveness of UM-GNN, when compared to existing baselines including the
state-of-the-art robust GCN.
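The uncertainty-matching objective can be sketched as a confidence-weighted distillation loss. Note the assumptions: using the normalized entropy of the GNN posterior as the uncertainty proxy is a simplification of the paper's epistemic-uncertainty estimate, and the weighting scheme below is illustrative.

```python
import numpy as np

def uncertainty_matched_loss(surrogate_probs, gnn_probs, eps=1e-12):
    """Distillation loss that trusts the GNN only where it is certain.

    surrogate_probs : (n, k) posteriors of the graph-free surrogate
    gnn_probs       : (n, k) posteriors of the standard GNN teacher
    A confident teacher prediction (low entropy) gets weight near 1;
    an uncertain one is down-weighted, so noise injected by a poisoned
    structure transfers less to the surrogate.
    """
    k = gnn_probs.shape[1]
    entropy = -(gnn_probs * np.log(gnn_probs + eps)).sum(axis=1)
    weight = 1.0 - entropy / np.log(k)          # in [0, 1]; 1 = certain
    # cross-entropy between teacher and surrogate posteriors
    ce = -(gnn_probs * np.log(surrogate_probs + eps)).sum(axis=1)
    return (weight * ce).mean()
```

Since the surrogate never reads the graph structure, structure-only evasion attacks cannot reach it, which mirrors the "immune by design" observation in the abstract.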
Camouflaged Poisoning Attack on Graph Neural Networks
Graph neural networks (GNNs) have enabled the automation of many web applications that entail node classification on graphs, such as scam detection in social media and event prediction in service networks. Nevertheless, recent studies revealed that GNNs are vulnerable to adversarial attacks, where feeding GNNs poisoned data at training time can cause a catastrophic drop in test accuracy. This finding has intensified research on attacks and defenses for GNNs. However, prior studies mainly assume that adversaries have unrestricted access to manipulate the original graph, while obtaining such access could be too costly in practice. To fill this gap, we propose a novel attacking paradigm, named Generative Adversarial Fake Node Camouflaging (GAFNC), with its crux lying in crafting a set of fake nodes in a generative-adversarial regime. These nodes carry camouflaged malicious features and can poison the victim GNN by passing their malicious messages to the original graph via learned topological structures, such that they 1) maximize the degradation of classification accuracy (i.e., global attack) or 2) force the victim GNN to misclassify a targeted node set into prescribed classes (i.e., target attack). We benchmark our experiments on four real-world graph datasets, and the results substantiate the viability, effectiveness, and stealthiness of our proposed poisoning attack approach. Code is released at github.com/chao92/GAFNC.
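The injection step of a fake-node attack can be sketched as graph augmentation. This is only the plumbing: in GAFNC the fake features and the edges to real nodes are produced by a trained generator in an adversarial game, whereas here they are passed in as given, hypothetical inputs.

```python
import numpy as np

def inject_fake_nodes(adj, feats, fake_feats, fake_edges):
    """Augment a graph with attacker-controlled fake nodes.

    adj        : (n, n) adjacency matrix of the original graph (untouched)
    feats      : (n, d) original node features
    fake_feats : (m, d) camouflaged malicious features (generator output
                 in GAFNC; supplied directly in this sketch)
    fake_edges : list of (fake_idx, real_idx) links the attacker chooses
    Returns the poisoned adjacency and stacked feature matrix; messages
    then flow from fake to real nodes through normal GNN propagation.
    """
    n, m = adj.shape[0], fake_feats.shape[0]
    big = np.zeros((n + m, n + m))
    big[:n, :n] = adj                      # original edges preserved
    for f, r in fake_edges:
        big[n + f, r] = big[r, n + f] = 1.0  # undirected poisoned link
    return big, np.vstack([feats, fake_feats])
```

The key property, matching the abstract, is that the original edges and features are never modified; all malicious influence enters through the appended rows and columns.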
Are Defenses for Graph Neural Networks Robust?
A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw: virtually all of the defenses are evaluated against non-adaptive attacks, leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., aimed at improving the graph, the architecture, or the training. The results are sobering: most defenses show no or only marginal improvement over an undefended baseline. We advocate using custom adaptive attacks as a gold standard, and we outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.