On the Differential Privacy of Bayesian Inference
We study how to communicate findings of Bayesian inference to third parties,
while preserving the strong guarantee of differential privacy. Our main
contributions are four different algorithms for private Bayesian inference on
probabilistic graphical models. These include two mechanisms for adding noise
to the Bayesian updates, either directly to the posterior parameters, or to
their Fourier transform so as to preserve update consistency. We also utilise a
recently introduced posterior sampling mechanism, for which we prove bounds for
the specific but general case of discrete Bayesian networks; and we introduce a
maximum-a-posteriori private mechanism. Our analysis includes utility and
privacy bounds, with a novel focus on the influence of graph structure on
privacy. Worked examples and experiments with Bayesian na{\"i}ve Bayes and
Bayesian linear regression illustrate the application of our mechanisms.
Comment: AAAI 2016, Feb 2016, Phoenix, Arizona, United States
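The first mechanism described above, adding noise directly to the posterior parameters, can be sketched for a discrete (Dirichlet-multinomial) update; the function name and the clamping rule below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def dp_dirichlet_posterior(counts, alpha, epsilon, rng=random.Random(0)):
    """Laplace noise added directly to the posterior parameters of a
    Dirichlet-multinomial update. Under add/remove adjacency one record
    changes one count by 1, so per-entry sensitivity is 1 and
    Laplace(1/epsilon) noise suffices for that entry. Clamping keeps the
    released parameters valid (>= alpha); both the name and the clamping
    rule are illustrative choices, not the paper's mechanism."""
    noisy = []
    for c in counts:
        # Difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
        noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
        noisy.append(max(alpha + c + noise, alpha))
    return noisy

params = dp_dirichlet_posterior([30, 10, 5], alpha=1.0, epsilon=1.0)
print(len(params), all(p >= 1.0 for p in params))  # 3 True
```

The clamp trades a little bias for always-valid Dirichlet parameters, so the released posterior can be sampled or normalized directly.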
Crypto'Graph: Leveraging Privacy-Preserving Distributed Link Prediction for Robust Graph Learning
Graphs are a widely used data structure for collecting and analyzing
relational data. However, when the graph structure is distributed across
several parties, its analysis is particularly challenging. In particular, due
to the sensitivity of the data each party might want to keep their partial
knowledge of the graph private, while still willing to collaborate with the
other parties for tasks of mutual benefit, such as data curation or the removal
of poisoned data. To address this challenge, we propose Crypto'Graph, an
efficient protocol for privacy-preserving link prediction on distributed
graphs. More precisely, it allows parties partially sharing a graph with
distributed links to infer the likelihood of formation of new links in the
future. Through the use of cryptographic primitives, Crypto'Graph computes
the likelihood of these new links over the joint network without revealing the
structure of each party's private graph, even though the number of nodes is
known to all parties, since they share the same node set but not the same
links. Crypto'Graph improves on previous works by enabling the computation of
several similarity metrics at no additional cost. The use of Crypto'Graph is
illustrated for defense against graph
poisoning attacks, in which it is possible to identify potential adversarial
links without compromising the privacy of the graphs of individual parties. The
effectiveness of Crypto'Graph in mitigating graph poisoning attacks and
achieving high prediction accuracy on a graph neural network node
classification task is demonstrated through extensive experimentation on a
real-world dataset.
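In the clear, one similarity metric of the kind Crypto'Graph evaluates under encryption is the common-neighbors score on the joint graph; the sketch below merges the two parties' edge sets in plaintext purely to illustrate the metric (the function name and dict-of-sets representation are assumptions).

```python
def common_neighbors_score(adj_a, adj_b, u, v):
    """Common-neighbors similarity on the *joint* graph of two parties.

    adj_a, adj_b: dict node -> set(neighbors), each party's private edges
    over a shared node set. Crypto'Graph computes such scores with
    cryptographic primitives so neither party sees the other's edges;
    here we merge in the clear only to show what is being computed."""
    joint = {n: adj_a.get(n, set()) | adj_b.get(n, set())
             for n in set(adj_a) | set(adj_b)}
    return len(joint.get(u, set()) & joint.get(v, set()))

a = {0: {1, 2}, 1: {0}, 2: {0}}
b = {1: {2, 3}, 2: {1}, 3: {1}}
print(common_neighbors_score(a, b, 0, 3))  # 1: nodes 0 and 3 share neighbor 1
```

A high joint score for a node pair with no plausible affinity is exactly the signal used to flag potential poisoned (adversarial) links.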
Graph Analysis in Decentralized Online Social Networks with Fine-Grained Privacy Protection
Graph analysts cannot directly obtain the global structure in decentralized
social networks, and analyzing such a network requires collecting local views
of the social graph from individual users. Since the edges between users may
reveal sensitive social interactions in the local view, applying differential
privacy in the data collection process is often desirable, which provides
strong and rigorous privacy guarantees. In practical decentralized social
graphs, different edges have different privacy requirements due to the distinct
sensitivity levels. However, existing differentially private analyses of
social graphs provide the same protection for all edges. To address this issue,
this work proposes a fine-grained privacy notion as well as novel algorithms
for private graph analysis. We first design a fine-grained relationship
differential privacy (FGR-DP) notion for social graph analysis, which enforces
different protections for the edges with distinct privacy requirements. Then,
we design algorithms for triangle counting and k-stars counting, respectively,
which can accurately estimate subgraph counts given fine-grained protection for
social edges. We also derive upper bounds on the estimation error for both
triangle and k-star counts, and show their superiority over the state of the
art. Finally, we perform extensive experiments on two real
social graph datasets and demonstrate that the proposed mechanisms satisfying
FGR-DP have better utility than the state-of-the-art mechanisms due to the
finer-grained protection.
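The uniform-protection baseline that FGR-DP refines looks like the following: edge-DP triangle counting via the Laplace mechanism, with one budget epsilon for every edge and the usual global-sensitivity bound. The function name and representation are illustrative, not the paper's algorithms.

```python
import random

def dp_triangle_count(adj, epsilon, rng=random.Random(0)):
    """Edge-DP triangle count with the Laplace mechanism, protecting all
    edges equally (FGR-DP instead grants each edge its own privacy level).

    adj: dict node -> set(neighbors), undirected. Removing one edge
    (u, v) changes the count by |N(u) & N(v)| <= n - 2, the global
    sensitivity used to calibrate the noise."""
    nodes = list(adj)
    # Each triangle is counted once per edge, hence the division by 3.
    tri = sum(len(adj[u] & adj[v])
              for i, u in enumerate(nodes)
              for v in nodes[i + 1:] if v in adj[u]) // 3
    sensitivity = max(len(nodes) - 2, 1)
    # Difference of two Exp(rate) draws is Laplace with scale 1/rate.
    rate = epsilon / sensitivity
    noise = rng.expovariate(rate) - rng.expovariate(rate)
    return tri + noise

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(round(dp_triangle_count(triangle, epsilon=1e9)))  # 1 (noise negligible)
```

Because the noise scale grows with the worst-case edge, a fine-grained notion that lowers the budget only on truly sensitive edges can recover much of the lost accuracy.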
Differentially Private Decoupled Graph Convolutions for Multigranular Topology Protection
Graph learning methods, such as Graph Neural Networks (GNNs) based on graph
convolutions, are highly successful in solving real-world learning problems
involving graph-structured data. However, graph learning methods expose
sensitive user information and interactions not only through their model
parameters but also through their model predictions. Consequently, standard
Differential Privacy (DP) techniques that merely offer model weight privacy are
inadequate. This is especially the case for node predictions that leverage
neighboring node attributes directly via graph convolutions that create
additional risks of privacy leakage. To address this problem, we introduce
Graph Differential Privacy (GDP), a new formal DP framework tailored to graph
learning settings that ensures both provably private model parameters and
predictions. Furthermore, since there may be different privacy requirements for
the node attributes and graph structure, we introduce a novel notion of relaxed
node-level data adjacency. This relaxation can be used for establishing
guarantees for different degrees of graph topology privacy while maintaining
node attribute privacy. Importantly, this relaxation reveals a useful trade-off
between utility and topology privacy for graph learning methods. In addition,
our analysis of GDP reveals that existing DP-GNNs fail to exploit this
trade-off due to the complex interplay between graph topology and attribute
data in standard graph convolution designs. To mitigate this problem, we
introduce the Differentially Private Decoupled Graph Convolution (DPDGC) model,
which benefits from decoupled graph convolution while providing GDP guarantees.
Extensive experiments on seven node classification benchmarking datasets
demonstrate the superior privacy-utility trade-off of DPDGC over existing
DP-GNNs based on standard graph convolution designs.
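The decoupling idea can be sketched in a few lines: run the attribute transform and the topology aggregation as separate stages, so noise calibrated to each privacy requirement is injected independently. This is an illustrative sketch of the separation GDP exploits, not the DPDGC model itself; all names are assumptions.

```python
import random

def matmul(A, B):
    """Plain dense matrix product for the sketch."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def decoupled_gcn_layer(X, A_hat, W, sigma_feat=0.0, sigma_top=0.0,
                        rng=random.Random(0)):
    """Decoupled graph convolution: the feature transform (X @ W) and the
    neighborhood aggregation (A_hat @ H) are separate stages, so Gaussian
    noise for attribute privacy (sigma_feat) and for topology privacy
    (sigma_top) can be added independently, enabling different budgets
    for node attributes and graph structure."""
    H = matmul(X, W)                                    # attribute stage
    H = [[h + rng.gauss(0.0, sigma_feat) for h in row] for row in H]
    Z = matmul(A_hat, H)                                # topology stage
    Z = [[z + rng.gauss(0.0, sigma_top) for z in row] for row in Z]
    return Z

X = [[1.0, 0.0], [0.0, 1.0]]          # node features
A = [[0.5, 0.5], [0.5, 0.5]]          # normalized adjacency
W = [[2.0, 0.0], [0.0, 2.0]]          # weight matrix
print(decoupled_gcn_layer(X, A, W))   # [[1.0, 1.0], [1.0, 1.0]]
```

In a standard coupled convolution the two stages are fused, so a single noise injection must cover both attribute and topology sensitivity at once, which is precisely the trade-off the abstract says existing DP-GNNs cannot exploit.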
Review Paper-Social networking with protecting sensitive labels in data Anonymization
The use of social network sites such as Facebook, Twitter, LinkedIn, LiveJournal, and the Wiki-Vote network keeps increasing. Through these sites, users can obtain more and more useful information, such as user behaviour, personal growth, or the spread of disease; at the same time, it is important that users' private information is not disclosed. Protecting user privacy while preserving the utility of published social network data is therefore challenging. Most developers have adopted privacy models such as k-anonymity to protect against node or vertex re-identification through structural information. However, such models can still be defeated: if a group of nodes largely shares the same sensitive label, other users can easily infer an individual's data, so structure-only anonymization methods are not fully protective. Previous approaches rely on edge editing or node clustering. Here, both the structural information and the sensitive labels of individuals are considered using a k-degree l-diversity anonymity model. The new anonymization methodology adds noise nodes: a new algorithm inserts noise nodes into the original graph while causing the least distortion to graph properties. Most importantly, the work provides an analysis of the number of noise nodes added and their impact on important graph properties.
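The degree side of the k-degree l-diversity model can be illustrated with a small checker: a graph is k-degree anonymous when no node is re-identifiable from its degree alone. The function name and the toy graphs below are assumptions for illustration.

```python
from collections import Counter

def is_k_degree_anonymous(adj, k):
    """Check k-degree anonymity: every degree value in the graph must be
    shared by at least k nodes. (The l-diversity side additionally
    requires each such group to carry at least l distinct sensitive
    labels; that part is omitted here.)"""
    degree_counts = Counter(len(nbrs) for nbrs in adj.values())
    return all(count >= k for count in degree_counts.values())

# A star graph: the hub's degree of 3 is unique, so it is re-identifiable.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(is_k_degree_anonymous(star, 2))  # False

# One noise node (4) wired to the leaves makes every degree appear twice.
noisy = {0: {1, 2, 3}, 1: {0, 4}, 2: {0, 4}, 3: {0, 4}, 4: {1, 2, 3}}
print(is_k_degree_anonymous(noisy, 2))  # True
```

The hard part the abstract refers to is choosing where to attach such noise nodes so that graph properties (degree distribution, path lengths) are distorted as little as possible.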
Structural Mutual Information and Its Application
Shannon mutual information is an effective tool for analyzing the information interaction in a point-to-point communication system. However, it cannot solve the problem of channel capacity in graph-structured communication systems, which makes it impossible to use traditional mutual information (TMI) to detect the real information and to measure the information embedded in a graph structure. Measuring the interaction of graph structure and the degree of privacy leakage has therefore become an emerging and challenging issue. To address it, we propose a novel structural mutual information (SMI) theory based on the structure entropy model and Shannon's mutual information theorem, followed by algorithms for computing SMI. SMI is used to detect the real network structure and to measure the degree of private data leakage in the graph structure. Our work extends the channel capacity of Shannon's second theorem to graph structures, discusses the relationship between SMI and TMI, and shows that SMI satisfies several basic properties, including symmetry and non-negativity. Finally, theoretical analysis and worked examples show that SMI is more effective than traditional privacy measurement methods at quantifying the information embedded in a graph structure and the overall degree of privacy leakage, providing feasible theoretical support for privacy protection techniques on graph structures.
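For reference, the TMI baseline that the abstract contrasts with SMI is ordinary Shannon mutual information over a joint distribution; SMI itself is the paper's contribution and is not reproduced here. The function name below is an assumption.

```python
import math

def mutual_information(joint):
    """Traditional Shannon mutual information I(X;Y) in bits, computed
    from a joint probability table: joint[x][y] = P(X=x, Y=y), with all
    entries summing to 1. This is the point-to-point quantity (TMI) that
    structural mutual information generalizes to graph structure."""
    px = [sum(row) for row in joint]                 # marginal of X
    py = [sum(col) for col in zip(*joint)]           # marginal of Y
    return sum(p * math.log2(p / (px[x] * py[y]))
               for x, row in enumerate(joint)
               for y, p in enumerate(row) if p > 0)

# A perfectly correlated bit carries exactly 1 bit of mutual information.
print(round(mutual_information([[0.5, 0.0], [0.0, 0.5]]), 6))  # 1.0
```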