6,032 research outputs found

    PHDP: Preserving Persistent Homology in Differentially Private Graph Publications

    Get PDF
    Online social networks (OSNs) routinely share and analyze user data, which requires protection of sensitive user information. Researchers have proposed several techniques to anonymize OSN data. Some differential-privacy techniques claim to preserve graph utility under certain graph metrics while also guaranteeing strict privacy. However, each graph utility metric reveals the graph only in specific aspects. We employ persistent homology to give a comprehensive description of graph utility in OSNs. This paper proposes a novel anonymization scheme, called PHDP, which preserves persistent homology and satisfies differential privacy. To strengthen privacy protection, we add exponential noise to the adjacency matrix of the network and determine the number of edges to add or delete. To maintain persistent homology, we collect the edges along persistent structures and avoid perturbing them. Our regeneration algorithms balance persistent homology with differential privacy, publishing an anonymized graph with a guarantee of both. Evaluation results show that the PHDP-anonymized graph achieves high graph utility, in terms of both graph metrics and application metrics.
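
    As a rough illustration of the mechanism this abstract describes (not the authors' exact algorithm), the sketch below perturbs an undirected adjacency matrix with randomized response at privacy level eps while leaving a caller-supplied set of protected edges untouched. The function name perturb_adjacency and the protected set are illustrative stand-ins for PHDP's persistent-structure edges; skipping those edges trades away part of the formal differential-privacy guarantee, which is the balance the abstract refers to.

        import math
        import random

        def perturb_adjacency(A, eps, protected, seed=0):
            """Randomized-response perturbation of an undirected 0/1 adjacency
            matrix, skipping a protected edge set (stand-in for edges on
            persistent structures; skipping them weakens the formal guarantee)."""
            rng = random.Random(seed)
            keep_p = math.exp(eps) / (1.0 + math.exp(eps))  # P[entry unchanged]
            n = len(A)
            B = [row[:] for row in A]
            for i in range(n):
                for j in range(i + 1, n):
                    if (i, j) in protected or (j, i) in protected:
                        continue                         # protected edges stay intact
                    if rng.random() > keep_p:            # flip with prob 1/(1+e^eps)
                        B[i][j] = B[j][i] = 1 - A[i][j]
            return B

        A = [[0, 1, 1, 0],
             [1, 0, 1, 0],
             [1, 1, 0, 1],
             [0, 0, 1, 0]]
        print(perturb_adjacency(A, eps=1.0, protected={(0, 1)}))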

    Data Security and Anonymization in Neighborhood Attacks in Clustered Network in Internet of Things (NIoT)

    Get PDF
    In this paper, the author reviews K Nearest Neighbor (KNN) networks, in which nodes are tied by one or more specific types of interdependency, such as values, visions, ideas, financial exchange, friendship, conflict, or trade. Social network analysis views social relationships in terms of nodes and ties. The paper also covers network analysis, its applications, and the problem statement. It presents an outline of the privacy hazards involved in sharing anonymized data in the network, including a proposed architecture and design flow, for which the author considers several variations and connections. On several real-world social networks, we show that simple anonymization techniques are inadequate, resulting in considerable breaches of privacy for even modestly informed adversaries. The paper also presents a new anonymization technique, based on the network, and validates analytically that it reduces the privacy threat. It also analyzes the effect that anonymizing the network has on the utility of the data for social network analysis.
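
    To make the claim about simple anonymization concrete, here is a minimal, hypothetical sketch of the kind of attack a modestly informed adversary can run: if identifiers are merely removed but the structure is left intact, any identity with a unique degree in auxiliary data is immediately re-identified. The names and the reidentify_by_degree helper are illustrative, not from the paper.

        from collections import Counter

        def reidentify_by_degree(anon_adj, known_degrees):
            """Map anonymized ids to public identities that have a unique degree.
            anon_adj: {anon_id: set of neighbour ids}; known_degrees: {name: degree}."""
            counts = Counter(known_degrees.values())
            unique = {d: name for name, d in known_degrees.items() if counts[d] == 1}
            return {v: unique[len(nbrs)] for v, nbrs in anon_adj.items()
                    if len(nbrs) in unique}

        # toy auxiliary knowledge: carol is the only person with three friends
        anon = {"x1": {"x2"}, "x2": {"x1", "x3", "x4"}, "x3": {"x2"}, "x4": {"x2"}}
        known = {"alice": 1, "bob": 1, "carol": 3, "dave": 1}
        print(reidentify_by_degree(anon, known))   # {'x2': 'carol'}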

    Quantification of De-anonymization Risks in Social Networks

    Full text link
    The risks of publishing privacy-sensitive data have received considerable attention recently. Several de-anonymization attacks have been proposed to re-identify individuals even when data anonymization techniques have been applied. However, there is no theoretical quantification relating the data utility preserved by anonymization techniques to the data's vulnerability against de-anonymization attacks. In this paper, we theoretically analyze de-anonymization attacks and provide conditions on the utility of the anonymized data (denoted anonymized utility) for achieving successful de-anonymization. To the best of our knowledge, this is the first work to quantify the relationship between anonymized utility and de-anonymization capability. Unlike previous work, our quantification requires no assumptions about the graph model, thus providing a general theoretical guide for developing practical de-anonymization/anonymization techniques. Furthermore, we evaluate state-of-the-art de-anonymization attacks on a real-world Facebook dataset to show the limitations of previous work. By comparing these experimental results with the theoretically achievable de-anonymization capability derived in our analysis, we further demonstrate the ineffectiveness of previous de-anonymization attacks and the potential of more powerful de-anonymization attacks in the future.
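
    The paper's quantification is theoretical; as a toy, hedged stand-in, the following sketch measures one utility proxy (the fraction of edges kept) against the success of a simple structural-signature attack, exhibiting the utility/de-anonymization trade-off the abstract formalizes. The perturbation model, signature, and attack here are assumptions for illustration only.

        import random
        from collections import Counter

        def perturb_edges(edges, fraction, seed=0):
            """Keep each edge independently with probability 1 - fraction."""
            rng = random.Random(seed)
            return {e for e in edges if rng.random() > fraction}

        def degree_signature(edges, nodes):
            """Signature per node: own degree plus sorted neighbour degrees."""
            nbrs = {v: set() for v in nodes}
            for u, v in edges:
                nbrs[u].add(v)
                nbrs[v].add(u)
            deg = {v: len(n) for v, n in nbrs.items()}
            return {v: (deg[v], tuple(sorted(deg[u] for u in nbrs[v])))
                    for v in nodes}

        def attack_success(orig_edges, anon_edges, nodes):
            """Fraction of nodes whose signature survives and is unique."""
            s1 = degree_signature(orig_edges, nodes)
            s2 = degree_signature(anon_edges, nodes)
            counts = Counter(s2.values())
            hits = sum(1 for v in nodes if s1[v] == s2[v] and counts[s1[v]] == 1)
            return hits / len(nodes)

        nodes = list(range(30))
        rng = random.Random(1)
        edges = {(i, j) for i in nodes for j in nodes if i < j and rng.random() < 0.15}
        for f in (0.0, 0.1, 0.3, 0.5):
            anon = perturb_edges(edges, f, seed=2)
            print(f"edges kept ~{1 - f:.1f} -> re-identified {attack_success(edges, anon, nodes):.2f}")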

    Using Metrics Suites to Improve the Measurement of Privacy in Graphs

    Get PDF
    Social graphs are widely used in research (e.g., epidemiology) and business (e.g., recommender systems). However, sharing these graphs poses privacy risks because they contain sensitive information about individuals. Graph anonymization techniques aim to protect individual users in a graph, while graph de-anonymization aims to re-identify users. The effectiveness of anonymization and de-anonymization algorithms is usually evaluated with privacy metrics. However, it is unclear how strong existing privacy metrics are when applied to graph privacy. In this paper, we study 26 privacy metrics for graph anonymization and de-anonymization and evaluate their strength in terms of three criteria: monotonicity indicates whether a metric reports lower privacy for stronger adversaries; for within-scenario comparisons, evenness indicates whether metric values are spread evenly; and for between-scenario comparisons, shared value range indicates whether a metric uses a consistent value range across scenarios. Our extensive experiments indicate that no single metric fulfills all three criteria perfectly. We therefore use methods from multi-criteria decision analysis to aggregate multiple metrics into a metrics suite, and we show that these metrics suites improve monotonicity compared to the best individual metric. This important result enables more monotonic, and thus more accurate, evaluations of new graph anonymization and de-anonymization algorithms.
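
    A minimal sketch of two of the ideas above, under assumed definitions: a monotonicity score over a series of metric values ordered by adversary strength, and a weighted-sum aggregation as a simple stand-in for the multi-criteria decision analysis the paper uses. The precise criteria definitions in the paper differ.

        def monotonicity(series):
            """Share of adjacent steps where the metric does not report more
            privacy for a stronger adversary (ordered weakest -> strongest)."""
            pairs = list(zip(series, series[1:]))
            return sum(1 for a, b in pairs if b <= a) / len(pairs)

        def normalize(series):
            lo, hi = min(series), max(series)
            return [0.5 if hi == lo else (x - lo) / (hi - lo) for x in series]

        def suite(metric_series, weights):
            """Weighted-sum aggregation of normalized metrics: one simple
            stand-in for multi-criteria decision analysis."""
            normed = [normalize(s) for s in metric_series]
            return [sum(w * s[i] for w, s in zip(weights, normed))
                    for i in range(len(normed[0]))]

        # two toy metrics measured against adversaries of increasing strength
        m1 = [0.9, 0.7, 0.8, 0.4]
        m2 = [0.6, 0.65, 0.3, 0.2]
        print(monotonicity(m1), monotonicity(m2))          # ~0.67 each alone
        print(monotonicity(suite([m1, m2], [0.5, 0.5])))   # 1.0 for the suite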

    An Automated Social Graph De-anonymization Technique

    Full text link
    We present a generic and automated approach to re-identifying nodes in anonymized social networks, which enables novel anonymization techniques to be evaluated quickly. It uses machine learning (decision forests) to match pairs of nodes across disparate anonymized sub-graphs. The technique uncovers artefacts and invariants of any black-box anonymization scheme from a small set of examples. Despite a high degree of automation, classification succeeds with significant true positive rates even when small false positive rates are sought. Our evaluation uses publicly available real-world datasets to study the performance of our approach against real-world anonymization strategies, namely the schemes used to protect datasets of the Data for Development (D4D) Challenge. We show that the technique is effective even when only small numbers of samples are used for training. Further, since it detects weaknesses in the black-box anonymization scheme, it can re-identify nodes in one social network when trained on another.
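
    A hedged sketch of the general recipe, not the authors' feature set or pipeline: train a decision forest on labelled node pairs, where each pair is described by cheap structural features of both nodes and their gaps. It assumes scikit-learn is available and uses a toy graph whose "anonymized" view deterministically drops a few edges.

        import random
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def toy_graph(n, p, seed):
            rng = random.Random(seed)
            adj = {v: set() for v in range(n)}
            for i in range(n):
                for j in range(i + 1, n):
                    if rng.random() < p:
                        adj[i].add(j)
                        adj[j].add(i)
            return adj

        def node_features(adj, v):
            """Cheap structural features: degree, mean and max neighbour degree."""
            degs = [len(adj[u]) for u in adj[v]] or [0]
            return [len(adj[v]), sum(degs) / len(degs), max(degs)]

        def pair_feature(adj_a, va, adj_b, vb):
            """Candidate-pair vector: both nodes' features plus their gaps."""
            fa, fb = node_features(adj_a, va), node_features(adj_b, vb)
            return fa + fb + [abs(x - y) for x, y in zip(fa, fb)]

        g1 = toy_graph(40, 0.12, seed=0)
        # "anonymized" view: same structure with edges dropped when (u+v) % 9 == 0
        g2 = {v: {u for u in nbrs if (v + u) % 9} for v, nbrs in g1.items()}

        # positives are true matches (v, v); negatives are shifted mismatches
        pairs = [(v, v, 1) for v in g1] + [(v, (v + 7) % 40, 0) for v in g1]
        X = np.array([pair_feature(g1, a, g2, b) for a, b, _ in pairs])
        y = np.array([lab for *_, lab in pairs])
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print("training accuracy:", clf.score(X, y))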

    A Novel Graph-modification Technique for User Privacy-preserving on Social Networks, Journal of Telecommunications and Information Technology, 2019, nr 3

    Get PDF
    The growing popularity of social networks and the increasing need for publishing related data mean that protection of privacy becomes an important and challenging problem in social networks. This paper describes the (k,l)-anonymity model used for social network graph anonymization. The method is based on edge addition and is utility-aware, i.e. it is designed to generate a graph that is similar to the original one. Different strategies are evaluated to this end and the results are compared based on common utility metrics. The outputs confirm that the naïve idea of adding some random, or even the minimum possible, number of edges does not always produce useful anonymized social network graphs, thereby motivating some interesting alternatives for graph anonymization techniques.
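
    For illustration, here is a sketch of the degree-based flavour of k-anonymity via edge addition (the paper's (k,l)-anonymity model is richer): greedily group the degree sequence into blocks of at least k nodes and raise every degree in a block to the block maximum, which yields target degrees and a lower bound on the edges to add. The helper name is hypothetical, and actually realizing the targets by adding edges is the harder step the paper's strategies address.

        def k_anonymous_targets(degrees, k):
            """Greedy grouping: sort nodes by degree (descending), give every
            block of at least k nodes the block's maximum degree as its target."""
            order = sorted(range(len(degrees)), key=lambda i: -degrees[i])
            target = list(degrees)
            i = 0
            while i < len(order):
                block = order[i:i + k]
                if len(order) - (i + len(block)) < k:   # fold a short remainder in
                    block = order[i:]
                top = max(degrees[j] for j in block)
                for j in block:
                    target[j] = top
                i += len(block)
            return target

        degrees = [5, 5, 4, 3, 3, 2, 1]
        targets = k_anonymous_targets(degrees, k=2)
        deficit = sum(t - d for t, d in zip(targets, degrees))
        # each added edge raises two degrees, so at least ceil(deficit / 2) edges
        print(targets, "endpoint deficit:", deficit)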