1,661 research outputs found

    Releasing Graph Neural Networks with Differential Privacy Guarantees

    Full text link
    With the increasing popularity of graph neural networks (GNNs) in sensitive applications such as healthcare and medicine, concerns have been raised over the privacy aspects of trained GNNs. Notably, GNNs are vulnerable to privacy attacks, such as membership inference attacks, even if only black-box access to the trained model is granted. We propose PrivGNN, a privacy-preserving framework for releasing GNN models in a centralized setting. Assuming access to a public unlabeled graph, PrivGNN provides a framework to release GNN models trained explicitly on public data, along with knowledge obtained from the private data, in a privacy-preserving manner. PrivGNN combines the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to ensure rigorous privacy guarantees. We theoretically analyze our approach in the Rényi differential privacy framework. In addition, we show strong experimental performance of our method compared to several baselines adapted for graph-structured data. Our code is available at https://github.com/iyempissy/privGnn. Comment: Published in TMLR 202
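    The abstract summarizes, rather than implements, the noisy-labeling step. As a minimal sketch of that idea (not the authors' code), the snippet below perturbs hypothetical teacher posteriors with Laplace noise before assigning pseudo-labels to public nodes; the class count, noise scale, and array names are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_label(teacher_posterior, noise_scale):
        """Return a pseudo-label from a posterior vector perturbed with Laplace noise.

        teacher_posterior: shape (num_classes,), e.g. the output of a teacher GNN
        trained on a random subsample of the private graph (the subsampling itself
        further tightens the privacy accounting in an RDP-style analysis).
        """
        noisy = teacher_posterior + rng.laplace(scale=noise_scale, size=teacher_posterior.shape)
        return int(np.argmax(noisy))

    # Hypothetical teacher outputs for 5 public query nodes over 3 classes.
    posteriors = rng.dirichlet(alpha=np.ones(3), size=5)
    pseudo_labels = [noisy_label(p, noise_scale=0.5) for p in posteriors]
    print(pseudo_labels)  # a student GNN would then be trained on these noisy labels
    ```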

    Statistical Properties of Height of Japanese Schoolchildren

    Full text link
    We study the height distributions of Japanese schoolchildren based on statistical data obtained from the school health survey conducted by the Ministry of Education, Culture, Sports, Science and Technology, Japan. Our analysis clarifies that the height distribution changes from a lognormal distribution to a normal distribution during puberty. Comment: 2 pages, 2 figures, submitted to J. Phys. Soc. Jpn.; resubmitted to J. Phys. Soc. Jpn. after some revision
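    As a hedged illustration of this kind of comparison (not the authors' analysis, and with a synthetic sample in place of the survey data), the snippet below fits both a normal and a lognormal distribution to simulated heights and compares their log-likelihoods.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Synthetic stand-in for a height sample in cm; the real data come from the school health survey.
    heights = rng.lognormal(mean=np.log(150.0), sigma=0.05, size=5000)

    # Fit both candidate distributions.
    mu, sigma = stats.norm.fit(heights)
    shape, loc, scale = stats.lognorm.fit(heights, floc=0)

    # Compare total log-likelihoods; the larger value indicates the better fit.
    ll_norm = stats.norm.logpdf(heights, mu, sigma).sum()
    ll_lognorm = stats.lognorm.logpdf(heights, shape, loc, scale).sum()
    print(f"normal: {ll_norm:.1f}, lognormal: {ll_lognorm:.1f}")
    ```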

    Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk?

    Full text link
    Graph neural networks (GNNs) have shown promising results on real-life datasets and applications, including healthcare, finance, and education. However, recent studies have shown that GNNs are highly vulnerable to attacks such as membership inference and link reconstruction attacks. Surprisingly, attribute inference attacks have received little attention. In this paper, we initiate the first investigation into attribute inference attacks, where an attacker aims to infer a user's sensitive attributes from her public or non-sensitive attributes. We ask whether black-box attribute inference attacks constitute a significant privacy risk for graph-structured data and the corresponding GNN models. We take a systematic approach to launching the attacks by varying the adversarial knowledge and assumptions. Our findings reveal that when an attacker has black-box access to the target model, GNNs generally do not reveal significantly more information than missing-value estimation techniques. Code is available
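    To make the comparison concrete, here is a minimal sketch (under assumed interfaces, not the paper's code) of a black-box attribute inference attack that tries candidate values for a hidden attribute and keeps the one the target model is most confident about, next to a missing-value imputation baseline that never queries the model.

    ```python
    import numpy as np
    from sklearn.impute import KNNImputer

    def blackbox_attribute_inference(query_fn, x_public, candidate_values, sensitive_idx):
        """Try each candidate value of the hidden attribute and keep the one that
        makes the black-box target model most confident. query_fn(x) -> posteriors."""
        best_value, best_conf = None, -np.inf
        for v in candidate_values:
            x = x_public.copy()
            x[sensitive_idx] = v
            conf = query_fn(x).max()
            if conf > best_conf:
                best_value, best_conf = v, conf
        return best_value

    def imputation_baseline(features_with_missing, n_neighbors=2):
        """Missing-value estimation baseline: fill NaNs without querying any model."""
        return KNNImputer(n_neighbors=n_neighbors).fit_transform(features_with_missing)

    # Toy demo with a made-up target model that returns random posteriors.
    rng = np.random.default_rng(2)
    def dummy_query(x):
        p = rng.random(3)
        return p / p.sum()

    x = np.array([0.3, 0.7, np.nan, 1.2])
    print(blackbox_attribute_inference(dummy_query, x, candidate_values=[0.0, 1.0], sensitive_idx=2))
    print(imputation_baseline(np.array([[0.3, 0.7, np.nan, 1.2], [0.1, 0.8, 1.0, 1.1], [0.4, 0.6, 0.0, 1.3]])))
    ```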

    Private Graph Extraction via Feature Explanations

    Full text link
    Privacy and interpretability are two important ingredients for achieving trustworthy machine learning. We study the interplay of these two aspects in graph machine learning through graph reconstruction attacks. The goal of the adversary here is to reconstruct the graph structure of the training data given access to model explanations. Based on the different kinds of auxiliary information available to the adversary, we propose several graph reconstruction attacks. We show that additional knowledge of post-hoc feature explanations substantially increases the success rate of these attacks. Further, we investigate in detail the differences in attack performance across three classes of explanation methods for graph neural networks: gradient-based, perturbation-based, and surrogate model-based methods. While gradient-based explanations reveal the most in terms of graph structure, we find that these explanations do not always score high in utility. For the other two classes of explanations, privacy leakage increases with explanation utility. Finally, we propose a defense based on a randomized response mechanism for releasing the explanations, which substantially reduces the attack success rate. Our code is available at https://github.com/iyempissy/graph-stealing-attacks-with-explanation Comment: Accepted at PETS 202
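    The randomized-response defense lends itself to a short sketch. Below is a minimal, assumption-laden version (not the paper's configuration) that releases a binary feature-explanation mask under local-DP randomized response: each bit is kept with probability e^eps / (1 + e^eps) and flipped otherwise.

    ```python
    import numpy as np

    def randomized_response(mask, epsilon, rng=None):
        """Release a binary explanation mask under randomized response:
        keep each bit with probability e^eps / (1 + e^eps), flip it otherwise."""
        rng = rng or np.random.default_rng()
        keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
        keep = rng.random(mask.shape) < keep_prob
        return np.where(keep, mask, 1 - mask)

    rng = np.random.default_rng(3)
    explanation_mask = (rng.random((5, 8)) > 0.7).astype(int)  # hypothetical 5 nodes x 8 features
    print(randomized_response(explanation_mask, epsilon=1.0, rng=rng))
    ```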


    Get PDF
    We introduce the `displacemon' electromechanical architecture, which comprises a vibrating nanobeam, e.g. a carbon nanotube, flux-coupled to a superconducting qubit. This platform can achieve strong and even ultrastrong coupling, enabling a variety of quantum protocols. We use this system to describe a protocol for generating and measuring quantum interference between two trajectories of a nanomechanical resonator. The scheme uses a sequence of qubit manipulations and measurements to cool the resonator, apply an effective diffraction grating, and measure the resulting interference pattern. We simulate the protocol for a realistic system consisting of a vibrating carbon nanotube acting as a junction in a superconducting qubit, and we demonstrate the feasibility of generating a spatially distinct quantum superposition state of motion containing more than 10^6 nucleons. Comment: 12 pages, 7 figures
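    The full measurement protocol is beyond a short snippet, but as a rough, hedged illustration of the kind of coupled system it starts from, the sketch below builds a qubit-resonator Hamiltonian with an assumed sigma_z-type flux coupling in a truncated Fock basis and diagonalizes it with NumPy. The frequencies, coupling strength, and truncation are arbitrary placeholders, not the parameters of the paper.

    ```python
    import numpy as np

    N = 20                          # Fock-space truncation for the mechanical mode
    wm, delta, g = 1.0, 0.95, 0.3   # mechanical frequency, qubit splitting, coupling (arbitrary units)

    # Mechanical operators in the truncated Fock basis.
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
    num = a.T @ a                                # number operator a†a
    x = a + a.T                                  # dimensionless displacement a + a†

    # Qubit operators.
    sz = np.diag([1.0, -1.0])
    I2 = np.eye(2)
    Im = np.eye(N)

    # H = wm a†a ⊗ I + (delta/2) I ⊗ sigma_z + g (a + a†) ⊗ sigma_z   (assumed coupling form)
    H = wm * np.kron(num, I2) + (delta / 2.0) * np.kron(Im, sz) + g * np.kron(x, sz)

    print(np.linalg.eigvalsh(H)[:6])   # lowest few energies of the coupled system
    ```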

    Are You Tampering With My Data?

    Full text link
    We propose a novel approach to adversarial attacks on neural networks (NNs), focusing on tampering with the data used for training instead of generating attacks on trained models. Our network-agnostic method creates a backdoor during training that can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image, applied to all images of a class in the training set, is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing the networks to misclassify any image to which the modification is applied. Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained by practitioners themselves on public datasets can be subject to attacks by a skilled adversary. Comment: 18 pages
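    As a minimal sketch of the one-pixel training-set modification described above (not the authors' implementation; the pixel location, value, and target class are assumptions), the snippet below poisons every training image of one class and applies the same trigger at test time.

    ```python
    import numpy as np

    def poison_class(images, labels, target_class, pixel=(0, 0), channel=0, value=255):
        """Set one fixed pixel to a fixed value in every training image of target_class.
        images: uint8 array of shape (n, H, W, C); returns a poisoned copy."""
        poisoned = images.copy()
        idx = np.where(labels == target_class)[0]
        poisoned[idx, pixel[0], pixel[1], channel] = value
        return poisoned

    def apply_trigger(image, pixel=(0, 0), channel=0, value=255):
        """Apply the same one-pixel modification at test time to activate the backdoor."""
        triggered = image.copy()
        triggered[pixel[0], pixel[1], channel] = value
        return triggered

    # Toy demo on random CIFAR-10-shaped data (the paper uses CIFAR-10 and SVHN).
    rng = np.random.default_rng(4)
    X = rng.integers(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
    y = rng.integers(0, 10, size=100)
    X_poisoned = poison_class(X, y, target_class=3)
    ```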