
    Injecting Uncertainty in Graphs for Identity Obfuscation

    Data collected nowadays by social-networking applications create fascinating opportunities for building novel services, as well as for expanding our understanding of social structures and their dynamics. Unfortunately, publishing social-network graphs is considered an ill-advised practice due to privacy concerns. To alleviate this problem, several anonymization methods have been proposed, aiming at reducing the risk of a privacy breach on the published data while still allowing it to be analyzed and relevant conclusions to be drawn. In this paper we introduce a new anonymization approach based on injecting uncertainty into social graphs and publishing the resulting uncertain graphs. While existing approaches obfuscate graph data by adding or removing edges entirely, we propose a finer-grained perturbation that adds or removes edges partially: this way we can achieve the same desired level of obfuscation with smaller changes to the data, thus maintaining higher utility. Our experiments on real-world networks confirm that, at the same level of identity obfuscation, our method preserves more utility than existing randomized methods that publish standard graphs.
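
    As a hedged illustration of the core idea (a minimal sketch, not the authors' algorithm): instead of deleting or inserting edges outright, every edge is assigned an existence probability, so real edges are only partially removed and a few sampled non-edges are partially added. The noise scale `sigma` and the non-edge sampling factor `k_noise` below are illustrative knobs, not parameters from the paper.

```python
import random

def inject_uncertainty(nodes, edges, sigma=0.2, k_noise=2, seed=0):
    """Toy sketch: convert a deterministic graph into an uncertain one."""
    rng = random.Random(seed)
    prob = {}
    # Partially "remove" real edges: probability near 1 instead of exactly 1.
    for u, v in edges:
        prob[frozenset((u, v))] = min(1.0, max(0.0, 1.0 - abs(rng.gauss(0, sigma))))
    # Partially "add" sampled non-edges: small non-zero probabilities.
    for _ in range(k_noise * len(edges)):
        e = frozenset(rng.sample(nodes, 2))
        if e not in prob:
            prob[e] = min(1.0, max(0.0, abs(rng.gauss(0, sigma))))
    return prob  # uncertain graph: edge -> existence probability

# Example: a triangle plus a fourth node that may gain noisy edges.
print(inject_uncertainty([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3)]))
```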

    CryptGraph: Privacy Preserving Graph Analytics on Encrypted Graph

    Many graph mining and analysis services have been deployed on the cloud, which relieves users of the burden of implementing and maintaining graph algorithms. However, putting graph analytics on the cloud can invade users' privacy. To solve this problem, we propose CryptGraph, which runs graph analytics on encrypted graphs to preserve the privacy of both users' graph data and the analytic results. In CryptGraph, users encrypt their graphs before uploading them to the cloud. The cloud runs graph analysis on the encrypted graphs and obtains results that are also in encrypted form, which the cloud cannot decipher. During the computation, the encrypted graphs are never decrypted on the cloud side. The encrypted results are sent back to the users, who decrypt them to obtain the plaintext results. Throughout this process, both the users' graphs and the analytics results remain encrypted, and the cloud learns neither. Thereby, users' privacy is strongly protected. Meanwhile, thanks to homomorphic encryption, the results computed over the encrypted graphs are guaranteed to be correct. In this paper, we present how to encrypt a graph using homomorphic encryption and how to query the structure of an encrypted graph by evaluating polynomials. To handle operations that cannot be executed on encrypted graphs, we propose hard-computation outsourcing, which seeks help from the users. Using two graph algorithms as examples, we show how to apply our methods to perform analytics on encrypted graphs. Experiments on two datasets demonstrate the correctness and feasibility of our methods.
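
    A minimal sketch of the general technique, not CryptGraph's actual protocol: with an additively homomorphic scheme such as Paillier (here via the third-party `phe` package, an assumption of this sketch), a user can encrypt an adjacency matrix entry by entry and let the untrusted cloud sum ciphertexts, e.g. to compute node degrees, without the cloud ever decrypting anything.

```python
from phe import paillier  # pip install phe (python-paillier)

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

adj = [[0, 1, 1],
       [1, 0, 0],
       [1, 0, 0]]
# User side: encrypt every adjacency entry before upload.
enc_adj = [[public_key.encrypt(x) for x in row] for row in adj]

# "Cloud" side: additions work directly on ciphertexts, so the degree
# of each node (a row sum) is computed with no decryption at all.
enc_degrees = [sum(row[1:], row[0]) for row in enc_adj]

# User side: decrypt the returned results.
print([private_key.decrypt(c) for c in enc_degrees])  # [2, 1, 1]
```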

    Researching Big Data Research: Ethical Implications for IS Scholars

    This ERF (Emerging Research Forum) paper focuses on the ethical implications of IS academic big data research. We explore how big data research raises concerns about privacy, human-subjects protection, and research integrity that are not yet adequately addressed by law, regulation, or the norms of acceptable research conduct. The objective is to increase awareness and promote constructive debate, with the ultimate goal of developing consensus in the field about appropriate research data use practices.

    Quantification of De-anonymization Risks in Social Networks

    The risks of publishing privacy-sensitive data have received considerable attention recently. Several de-anonymization attacks have been proposed to re-identify individuals even when data anonymization techniques were applied. However, there has been no theoretical quantification relating the data utility preserved by anonymization techniques to the data's vulnerability against de-anonymization attacks. In this paper, we theoretically analyze de-anonymization attacks and provide conditions on the utility of the anonymized data (denoted anonymized utility) under which de-anonymization succeeds. To the best of our knowledge, this is the first work quantifying the relationship between anonymized utility and de-anonymization capability. Unlike previous work, our quantification analysis requires no assumptions about the graph model, thus providing a general theoretical guide for developing practical de-anonymization/anonymization techniques. Furthermore, we evaluate state-of-the-art de-anonymization attacks on a real-world Facebook dataset to show the limitations of previous work. By comparing these experimental results with the theoretically achievable de-anonymization capability derived in our analysis, we further demonstrate the ineffectiveness of previous de-anonymization attacks and the potential of more powerful de-anonymization attacks in the future.
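
    Purely as an illustrative stand-in for the paper's formal quantities (its actual definitions are more careful): one can measure a crude anonymized utility as edge-set overlap between the original and anonymized graphs, and a crude de-anonymization capability as the success rate of a naive degree-matching attack. Both metrics below are assumptions of this sketch, not the paper's.

```python
from collections import Counter

def edge_utility(orig_edges, anon_edges):
    """Jaccard overlap of edge sets: a crude utility proxy."""
    orig = set(map(frozenset, orig_edges))
    anon = set(map(frozenset, anon_edges))
    return len(orig & anon) / len(orig | anon)

def degree_match_rate(orig_edges, anon_edges, nodes):
    """Fraction of nodes a naive degree-based attacker re-identifies."""
    def degrees(edges):
        d = {v: 0 for v in nodes}
        for u, v in edges:
            d[u] += 1
            d[v] += 1
        return d
    do, da = degrees(orig_edges), degrees(anon_edges)
    # A node counts as re-identified if its anonymized degree is
    # unique in the published graph and equal to its true degree.
    unique = {deg for deg, cnt in Counter(da.values()).items() if cnt == 1}
    hits = sum(1 for v in nodes if do[v] == da[v] and da[v] in unique)
    return hits / len(nodes)
```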

    A Framework for Enabling Privacy Preserving Analysis of Graph Properties in Distributed Graphs

    In the real world, many phenomena can be naturally modeled as a graph whose nodes represent entities and whose edges represent interactions or relationships between those entities. Past and ongoing research on graphs has developed concepts and theories that may deepen our understanding of graph data and facilitate solving many problems of practical interest represented by graphs. However, little of this work takes privacy concerns into account. This paper contributes to privacy-preserving graph analysis research by proposing a framework for enabling privacy-preserving analysis of graph properties in distributed graphs. The framework is composed of three modules. We discuss the functionality of each module and describe how the modules together ensure privacy protection while retaining graph properties and answering users' queries pertaining to those properties.
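
    The abstract does not specify the framework's protocol, so the following is only an illustration of one classic building block for distributed graph statistics: additive secret sharing, which lets several data holders learn a global aggregate (here a total edge count) without revealing any local count.

```python
import random

MOD = 2**61 - 1  # a large prime modulus for the share arithmetic

def share(value, n_parties, rng):
    """Split `value` into n_parties additive shares modulo MOD."""
    shares = [rng.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

rng = random.Random(0)
local_edge_counts = [120, 75, 310]               # one per data holder
all_shares = [share(c, 3, rng) for c in local_edge_counts]
# Party i publishes only the sum of the i-th shares it received;
# individual shares look uniformly random.
published = [sum(s[i] for s in all_shares) % MOD for i in range(3)]
print(sum(published) % MOD)  # 505, with no single local count revealed
```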

    De-anonymizing scale-free social networks by using spectrum partitioning method

    Social network data is widely shared, forwarded, and published to third parties, which leads to risks of privacy disclosure. Even though the network provider usually perturbs the data before publishing it, attackers can still recover the anonymized data from collected auxiliary information. In this paper, we transform de-anonymization into a node-matching problem on graphs, and our de-anonymization method reduces the number of nodes to be matched at each step. In addition, we use a spectrum partitioning method to divide the social graph into disjoint subgraphs, so the method can be applied effectively to large-scale social networks and executed in parallel on multiple processors. Through an analysis of the influence of the power-law degree distribution on de-anonymization, we jointly consider users' structural and personal information, which makes the users' feature information more practical.
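
    A minimal sketch of spectrum partitioning in general (not the paper's specific procedure): split a graph into two subgraphs by the sign pattern of the Fiedler vector, i.e. the eigenvector of the graph Laplacian belonging to the second-smallest eigenvalue.

```python
import numpy as np

def spectral_bipartition(adj):
    """Split nodes by the sign of the Fiedler vector of the Laplacian."""
    A = np.asarray(adj, dtype=float)
    lap = np.diag(A.sum(axis=1)) - A    # unnormalized Laplacian L = D - A
    vals, vecs = np.linalg.eigh(lap)    # eigenvalues in ascending order
    fiedler = vecs[:, 1]                # second-smallest eigenvector
    return np.where(fiedler >= 0)[0], np.where(fiedler < 0)[0]

# Two triangles joined by a single edge split back into the triangles
# (which side gets which sign is arbitrary, as eigenvector signs are).
A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
print(spectral_bipartition(A))
```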

    Link Prediction by De-anonymization: How We Won the Kaggle Social Network Challenge

    This paper describes the winning entry to the IJCNN 2011 Social Network Challenge run by Kaggle.com. The goal of the contest was to promote research on real-world link prediction, and the dataset was a graph obtained by crawling the popular Flickr social photo-sharing website, with user identities scrubbed. By de-anonymizing much of the competition test set using our own Flickr crawl, we were able to effectively game the competition. Our attack represents a new application of de-anonymization to gaming machine-learning contests, suggesting changes in how future competitions should be run. We introduce a new simulated-annealing-based weighted graph matching algorithm for the seeding step of de-anonymization. We also show how to combine de-anonymization with link prediction (the latter is required to achieve good performance on the portion of the test set that was not de-anonymized), for example by training the predictor on the de-anonymized portion of the test set and combining probabilistic predictions from de-anonymization and link prediction.
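
    In the same spirit as the seeding step described above, here is a compact, hedged sketch of simulated annealing for graph matching: it searches over node mappings that maximize the number of overlapping edges between two graphs on the same node set. The linear cooling schedule and the swap proposal are illustrative choices, not the authors' tuned algorithm.

```python
import math
import random

def anneal_match(edges_a, edges_b, nodes, steps=20000, t0=2.0, seed=0):
    """Find a node mapping of graph A onto graph B that (approximately)
    maximizes the number of edges of A landing on edges of B."""
    rng = random.Random(seed)
    ea = [tuple(e) for e in edges_a]
    eb = {frozenset(e) for e in edges_b}
    perm = list(nodes)  # candidate mapping: nodes[i] -> perm[i]

    def energy(p):
        m = dict(zip(nodes, p))
        return -sum(1 for u, v in ea if frozenset((m[u], m[v])) in eb)

    e = energy(perm)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]  # propose a swap
        e2 = energy(perm)
        if e2 <= e or rng.random() < math.exp((e - e2) / t):
            e = e2                           # accept (Metropolis rule)
        else:
            perm[i], perm[j] = perm[j], perm[i]  # revert
    return dict(zip(nodes, perm)), -e  # mapping and matched-edge count
```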

    Anonymizing Social Graphs via Uncertainty Semantics

    Rather than anonymizing social graphs by generalizing them into super-nodes/edges or by adding/removing nodes and edges to satisfy given privacy parameters, recent methods exploit the semantics of uncertain graphs to protect the privacy of participating entities and their relationships. These techniques anonymize a deterministic graph by converting it into an uncertain one. In this paper, we propose a generalized obfuscation model based on uncertain adjacency matrices that keep expected node degrees equal to those in the unanonymized graph. We analyze two recently proposed schemes and show how they fit into the model. We also point out the disadvantages of each method and present several elegant techniques to bridge the gap between them. Finally, to support fair comparisons, we develop a new tradeoff-quantifying framework by leveraging the concept of incorrectness from location-privacy research. Experiments on large social graphs demonstrate the effectiveness of our schemes.
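
    As a toy instance of the expected-degree-preserving idea (not either of the schemes the paper analyzes): mixing the true adjacency matrix A with the Chung-Lu-style matrix with entries d_i d_j / (2m), whose row sums also equal the degrees, yields an uncertain adjacency matrix with unchanged expected degrees for any mixing weight alpha. The nonzero diagonal it produces amounts to small self-loop probabilities, ignored here for simplicity.

```python
import numpy as np

def uncertain_adjacency(adj, alpha=0.3):
    """Convex mix of A with outer(d, d) / (2m); both terms have row
    sums d, so every expected degree is preserved exactly."""
    A = np.asarray(adj, dtype=float)
    d = A.sum(axis=1)
    P = (1 - alpha) * A + alpha * np.outer(d, d) / d.sum()
    return np.clip(P, 0.0, 1.0)  # clip only as a safety net

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
P = uncertain_adjacency(A)
print(P.sum(axis=1))  # [2. 1. 1.] -- expected degrees unchanged
```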