Membership Inference Attack on Graph Neural Networks
Graph Neural Networks (GNNs), which generalize traditional deep neural
networks on graph data, have achieved state-of-the-art performance on several
graph analytical tasks. We focus on how trained GNN models could leak
information about the "member" nodes that they were trained on. We introduce
two realistic settings for performing a membership inference (MI) attack on
GNNs. Using the simplest possible attack model, which utilizes only the
posteriors of the trained model (black-box access), we thoroughly analyze the
properties of GNNs and of the datasets that dictate the differences in their
robustness to MI attacks. Whereas in traditional machine learning models
overfitting is considered the main cause of such leakage, we show that in GNNs
the additional structural information is the major contributing factor. We
support our findings by extensive experiments on four representative GNN
models. To prevent MI attacks on GNNs, we propose two effective defenses that
reduce the attacker's inference accuracy by up to 60% without degrading the
target model's performance. Our code is available at
https://github.com/iyempissy/rebMIGraph.
Comment: Best student paper award, IEEE TPS 2021
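
A minimal sketch of the kind of black-box posterior attack described above,
assuming a PyTorch Geometric-style model(x, edge_index) forward pass; the
function name and threshold are illustrative assumptions, and the paper's
actual attack trains an attack model on the posteriors rather than simply
thresholding them:

import torch

@torch.no_grad()
def mi_threshold_attack(target_model, x, edge_index, candidate_nodes, tau=0.9):
    # Illustrative sketch only: target_model, tau, and this signature are
    # assumptions, not the paper's exact attack model.
    # Black-box access: the attacker only sees the model's posteriors.
    posteriors = torch.softmax(target_model(x, edge_index), dim=-1)
    # High-confidence posteriors tend to correspond to training ("member")
    # nodes; thresholding the max posterior is the simplest MI baseline.
    confidence = posteriors[candidate_nodes].max(dim=-1).values
    return confidence > tau  # True -> node inferred to be a member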
Locally Private Graph Neural Networks
Graph Neural Networks (GNNs) have demonstrated superior performance in
learning node representations for various graph inference tasks. However,
learning over graph data can raise privacy concerns when nodes represent people
or human-related variables that involve sensitive or personal information.
While numerous techniques have been proposed for privacy-preserving deep
learning over non-relational data, there is less work addressing the privacy
issues that arise when applying deep learning algorithms to graphs. In this
paper, we study the problem of node data privacy, where graph nodes have
potentially sensitive data that is kept private but could be beneficial to a
central server for training a GNN over the graph. To address this problem, we
develop a
privacy-preserving, architecture-agnostic GNN learning algorithm with formal
privacy guarantees based on Local Differential Privacy (LDP). Specifically, we
propose an LDP encoder and an unbiased rectifier, by which the server can
communicate with the graph nodes to privately collect their data and
approximate the GNN's first layer. To further reduce the effect of the injected
noise, we propose to prepend a simple graph convolution layer, called KProp,
which is based on multi-hop aggregation of the nodes' features and acts as a
denoising mechanism. Finally, we propose a robust training framework, in which
we benefit from KProp's denoising capability to increase the accuracy of
inference in the presence of noisy labels. Extensive experiments conducted over
real-world datasets demonstrate that our method can maintain a satisfactory
level of accuracy with low privacy loss.
Comment: Accepted at ACM CCS 2021
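
To make the encoder/rectifier and KProp ideas concrete, here is a minimal
sketch for a single bounded feature (a one-bit special case; the paper's
multi-bit encoder generalizes this across feature dimensions, and the variable
names alpha, beta, eps, and adj_norm are illustrative assumptions):

import numpy as np

def ldp_encode(x, alpha, beta, eps):
    # Illustrative sketch only: a one-bit, eps-LDP release of a feature
    # x in [alpha, beta], where Pr[t = +1] grows linearly in x.
    c = (np.exp(eps) - 1) / (np.exp(eps) + 1)
    p = 1.0 / (np.exp(eps) + 1) + (x - alpha) / (beta - alpha) * c
    return 1 if np.random.rand() < p else -1

def ldp_rectify(t, alpha, beta, eps):
    # Server side: rescale the bit so E[rectified value] = x (unbiased).
    scale = (beta - alpha) / 2 * (np.exp(eps) + 1) / (np.exp(eps) - 1)
    return scale * t + (alpha + beta) / 2

def kprop(x_hat, adj_norm, k):
    # KProp-style denoising: aggregate the noisy rectified features over
    # k-hop neighborhoods via repeated normalized-adjacency multiplication,
    # which averages out much of the injected LDP noise.
    for _ in range(k):
        x_hat = adj_norm @ x_hat
    return x_hat

Averaging the rectified values over a node's multi-hop neighborhood
concentrates the estimate around the true feature mean, which is what lets the
first GNN layer be approximated despite the per-node noise.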