
    Quantification of De-anonymization Risks in Social Networks

    The risks of publishing privacy-sensitive data have received considerable attention recently. Several de-anonymization attacks have been proposed to re-identify individuals even when data anonymization techniques have been applied. However, there has been no theoretical quantification relating the data utility preserved by anonymization techniques to the data's vulnerability to de-anonymization attacks. In this paper, we theoretically analyze de-anonymization attacks and provide conditions on the utility of the anonymized data (denoted anonymized utility) for achieving successful de-anonymization. To the best of our knowledge, this is the first work to quantify the relationship between anonymized utility and de-anonymization capability. Unlike previous work, our quantification requires no assumptions about the graph model, thus providing a general theoretical guide for developing practical de-anonymization/anonymization techniques. Furthermore, we evaluate state-of-the-art de-anonymization attacks on a real-world Facebook dataset to show the limitations of previous work. By comparing these experimental results with the theoretically achievable de-anonymization capability derived in our analysis, we further demonstrate the ineffectiveness of previous de-anonymization attacks and the potential for more powerful de-anonymization attacks in the future.
    Comment: Published in International Conference on Information Systems Security and Privacy, 201
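
    A toy illustration of the utility side of this trade-off (not the paper's formal model): one simple notion of anonymized utility is the fraction of original edges an anonymized graph preserves. The edge-deletion scheme and the networkx graphs below are assumptions for the example.

```python
# A minimal sketch, assuming edge overlap as the utility measure; the paper's
# formal quantification is more general than this.
import random
import networkx as nx

def edge_overlap_utility(original: nx.Graph, anonymized: nx.Graph) -> float:
    """Fraction of the original graph's edges that survive anonymization."""
    orig_edges = set(map(frozenset, original.edges()))
    anon_edges = set(map(frozenset, anonymized.edges()))
    return len(orig_edges & anon_edges) / max(len(orig_edges), 1)

# Example: anonymize by deleting 10% of the edges, then measure utility.
g = nx.erdos_renyi_graph(200, 0.05, seed=1)
g_anon = g.copy()
to_remove = random.Random(1).sample(list(g.edges()), len(g.edges()) // 10)
g_anon.remove_edges_from(to_remove)
print(f"anonymized utility: {edge_overlap_utility(g, g_anon):.2f}")  # ~0.90
```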

    Using Metrics Suites to Improve the Measurement of Privacy in Graphs

    Social graphs are widely used in research (e.g., epidemiology) and business (e.g., recommender systems). However, sharing these graphs poses privacy risks because they contain sensitive information about individuals. Graph anonymization techniques aim to protect individual users in a graph, while graph de-anonymization aims to re-identify users. The effectiveness of anonymization and de-anonymization algorithms is usually evaluated with privacy metrics. However, it is unclear how strong existing privacy metrics are when they are applied to graph privacy. In this paper, we study 26 privacy metrics for graph anonymization and de-anonymization and evaluate their strength in terms of three criteria: monotonicity indicates whether a metric reports lower privacy for stronger adversaries; for within-scenario comparisons, evenness indicates whether metric values are spread evenly; and for between-scenario comparisons, shared value range indicates whether metrics use a consistent value range across scenarios. Our extensive experiments indicate that no single metric fulfills all three criteria perfectly. We therefore use methods from multi-criteria decision analysis to aggregate multiple metrics into a metrics suite, and we show that these metrics suites improve monotonicity compared to the best individual metric. This important result enables more monotonic, and thus more accurate, evaluations of new graph anonymization and de-anonymization algorithms.
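
    A hedged sketch of two of these ideas: checking monotonicity of a metric's values as adversary strength grows, and aggregating several normalized metrics into a suite. The weighted sum below is only one elementary form of multi-criteria aggregation, and all metric values are made up for illustration.

```python
# Illustrative only: metric values and weights are invented, and the paper's
# aggregation uses multi-criteria decision analysis, of which a weighted sum
# is one basic instance.
from typing import Sequence

def is_monotonic(values: Sequence[float]) -> bool:
    """True if privacy values never increase as adversary strength grows."""
    return all(a >= b for a, b in zip(values, values[1:]))

def suite_score(metric_values: Sequence[float], weights: Sequence[float]) -> float:
    """Aggregate several normalized metrics into one suite value."""
    return sum(w * v for w, v in zip(weights, metric_values)) / sum(weights)

# Hypothetical privacy values for three adversaries of increasing strength:
metric_a = [0.9, 0.7, 0.4]   # monotonic: privacy drops as attacks strengthen
metric_b = [0.8, 0.85, 0.3]  # violates monotonicity at the second adversary
print(is_monotonic(metric_a), is_monotonic(metric_b))  # True False

# Suite value per adversary, weighting metric_a twice as heavily:
print([round(suite_score(vs, [2, 1]), 2) for vs in zip(metric_a, metric_b)])
```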

    FLAIM: A Multi-level Anonymization Framework for Computer and Network Logs

    FLAIM (Framework for Log Anonymization and Information Management) addresses two important needs that current log anonymizers leave unmet. First, it is extremely modular and not tied to the specific log being anonymized. Second, it supports multi-level anonymization, allowing system administrators to make fine-grained trade-offs between information loss and privacy/security concerns. In this paper, we examine anonymization solutions to date and note the above limitations in each. We further describe how FLAIM addresses these problems, and we describe FLAIM's architecture and features in detail.
    Comment: 16 pages, 4 figures, in submission to USENIX Lis
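
    A minimal sketch of the two ideas named above, modular per-field rules and multiple anonymization levels, under assumptions of my own; this is not FLAIM's actual API or policy format.

```python
# Per-field, multi-level log anonymization in the spirit of FLAIM
# (illustrative only; FLAIM's real rules come from its own policy language).
import hashlib

def anonymize_ip(ip: str, level: int) -> str:
    """Level 0: keep as-is; 1: zero the host octet; 2: replace with a hash."""
    if level == 0:
        return ip
    if level == 1:
        return ".".join(ip.split(".")[:3] + ["0"])
    return hashlib.sha256(ip.encode()).hexdigest()[:8]

def anonymize_record(record: dict, policy: dict) -> dict:
    """Apply each field's (function, level) rule; leave unlisted fields alone."""
    out = {}
    for field, value in record.items():
        if field in policy:
            fn, level = policy[field]
            out[field] = fn(value, level)
        else:
            out[field] = value
    return out

record = {"src_ip": "192.0.2.17", "dst_port": "443"}
policy = {"src_ip": (anonymize_ip, 1)}  # raise the level for stronger privacy
print(anonymize_record(record, policy))  # {'src_ip': '192.0.2.0', 'dst_port': '443'}
```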

    An Automated Social Graph De-anonymization Technique

    We present a generic and automated approach to re-identifying nodes in anonymized social networks which enables novel anonymization techniques to be quickly evaluated. It uses machine learning (decision forests) to match pairs of nodes in disparate anonymized sub-graphs. The technique uncovers artefacts and invariants of any black-box anonymization scheme from a small set of examples. Despite a high degree of automation, classification succeeds with significant true positive rates even when small false positive rates are sought. Our evaluation uses publicly available real-world datasets to study the performance of our approach against real-world anonymization strategies, namely the schemes used to protect datasets of the Data for Development (D4D) Challenge. We show that the technique is effective even when only small numbers of samples are used for training. Further, since it detects weaknesses in the black-box anonymization scheme, it can re-identify nodes in one social network when trained on another.
    Comment: 12 page
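
    A hedged sketch of the core mechanism: train a random forest to decide whether a pair of nodes drawn from two anonymized sub-graphs corresponds to the same user. The two pair features below (degree gap and overlap of neighbor degrees) and the toy graphs are assumptions for illustration, not the paper's feature set.

```python
# Illustrative node-pair matching with a decision forest; features, graphs,
# and training pairs are all toy stand-ins for the paper's pipeline.
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def pair_features(g1: nx.Graph, n1, g2: nx.Graph, n2) -> list:
    """Simple structural features for a candidate node pair."""
    degree_gap = abs(g1.degree(n1) - g2.degree(n2))
    s1 = {g1.degree(v) for v in g1.neighbors(n1)}
    s2 = {g2.degree(v) for v in g2.neighbors(n2)}
    return [degree_gap, len(s1 & s2)]

# Toy training data: positive pairs (same node in two noisy copies of a
# graph) versus mismatched pairs.
g = nx.barabasi_albert_graph(100, 3, seed=7)
g_noisy = g.copy()
g_noisy.remove_edges_from(list(g.edges())[::10])  # drop ~10% of edges
X = [pair_features(g, n, g_noisy, n) for n in g]              # matches
X += [pair_features(g, n, g_noisy, (n + 1) % 100) for n in g]  # non-matches
y = [1] * 100 + [0] * 100

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([pair_features(g, 5, g_noisy, 5)]))  # likely [1]
```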

    Standardization of guidelines for patient photograph deidentification

    IMPORTANCE: This work was performed to advance patient care by protecting patient anonymity.
    OBJECTIVES: This study aimed to analyze current practices in patient facial photograph deidentification and to set forth standardized guidelines for improving patient autonomy that are congruent with medical ethics and the Health Insurance Portability and Accountability Act (HIPAA).
    DESIGN: The anonymization guidelines of 13 respected journals were reviewed for adequacy against the facial recognition literature. Simple statistics were used to compare the use of the most common concealment techniques in the 8 medical journals that may publish the most facial photographs.
    SETTING: Not applicable.
    PARTICIPANTS: Not applicable.
    MAIN OUTCOME MEASURES: Facial photo deidentification guidelines of 13 journals were ascertained. The number and percentage of patient photographs lacking adequate anonymization in 8 journals were determined.
    RESULTS: Facial image anonymization guidelines varied across journals. When anonymization was attempted, 87% of the images were inadequately concealed. The most common technique was masking the eyes alone with a black box.
    CONCLUSIONS: Most journals evaluated lack specific instructions for properly de-identifying facial photographs. The guidelines introduced here stress that both the eyebrows and the eyes must be concealed to ensure patient privacy. Examples of proper and inadequate photo anonymization techniques are provided.
    RELEVANCE: Improving patient care by ensuring greater patient anonymity.
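
    A minimal sketch of the masking rule the guidelines stress: when a black box is used, it must cover both the eyebrows and the eyes, not the eyes alone. The box coordinates and file names below are hypothetical; in practice the region would come from a facial-landmark detector.

```python
# Illustrative masking only; the rectangle coordinates are invented and would
# normally be derived from detected facial landmarks.
from PIL import Image, ImageDraw

def mask_periorbital_region(photo: Image.Image, box: tuple) -> Image.Image:
    """Draw an opaque black box over the eyebrow-and-eye region."""
    out = photo.copy()
    ImageDraw.Draw(out).rectangle(box, fill="black")
    return out

img = Image.new("RGB", (400, 500), "white")                 # stand-in for a photo
masked = mask_periorbital_region(img, (60, 140, 340, 230))  # brows through eyes
masked.save("deidentified.png")
```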