3 research outputs found

    Quantifying Privacy: A Novel Entropy-Based Measure of Disclosure Risk

    It is well recognised that data mining and statistical analysis pose a serious threat to privacy. This is true for financial, medical, criminal and marketing research. Numerous techniques have been proposed to protect privacy, including restriction and data modification. Recently proposed privacy models such as differential privacy and k-anonymity have received a lot of attention, and for the latter there are now several improvements of the original scheme, each removing some security shortcomings of the previous one. However, the challenge lies in evaluating and comparing the privacy provided by different techniques. In this paper we propose a novel entropy-based security measure that can be applied to any generalisation, restriction or data modification technique. We use our measure to empirically evaluate and compare a few popular methods, namely query restriction, sampling and noise addition. Comment: 20 pages, 4 figures
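
    The abstract does not spell out the measure itself, so the sketch below is only a hedged illustration of the general idea in Python: express disclosure risk through the Shannon entropy of the attacker's posterior over released records that match a target's quasi-identifiers, and observe how a generalisation step raises that entropy. The function name disclosure_entropy and the toy table are hypothetical, not taken from the paper.

    from math import log2

    def disclosure_entropy(released_quasi_ids, target_quasi_id):
        """Shannon entropy (bits) of the attacker's posterior over released
        records consistent with the target's quasi-identifiers. 0 bits means
        a unique match (maximum disclosure risk); log2(n) bits means the
        target hides among n indistinguishable records."""
        n = sum(1 for r in released_quasi_ids if r == target_quasi_id)
        if n == 0:
            return float("inf")  # no matching record: the target cannot be linked
        return log2(n)           # uniform posterior over the n matching records

    # Hypothetical released table, reduced to (zip code, age) quasi-identifiers.
    original = [("4512", 34), ("4512", 35), ("4513", 34), ("4513", 36)]
    # The same table after a generalisation step: zip truncated, age banded.
    generalised = [("451*", "30-39")] * 4

    print(disclosure_entropy(original, ("4512", 34)))          # 0.0 bits: unique match
    print(disclosure_entropy(generalised, ("451*", "30-39")))  # 2.0 bits: hidden among 4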

    Triangle randomization for social network data anonymization

    No full text
    In order to protect the privacy of social network participants, network graph data should be anonymised prior to its release. Most proposals in the literature aim to achieve k-anonymity under specific assumptions about the background information available to the attacker. Our method is instead based on randomizing the location of triangles in the graph. We show that this simple method preserves the main structural parameters of the graph to a high extent, while providing a high degree of re-identification confusion.
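
    The abstract gives only the idea of randomising where triangles sit, so the snippet below is a hedged Python sketch of one plausible reading using networkx: repeatedly delete an edge that lies on a triangle and close a new triangle elsewhere, keeping the edge count fixed. The function names and the rewiring rule are assumptions, not the paper's algorithm.

    import random
    import networkx as nx

    def close_new_triangle(H, rng):
        """Connect two currently unconnected neighbours of some node,
        creating at least one new triangle. Returns True on success."""
        nodes = list(H.nodes())
        rng.shuffle(nodes)
        for w in nodes:
            nbrs = list(H[w])
            rng.shuffle(nbrs)
            for i in range(len(nbrs)):
                for j in range(i + 1, len(nbrs)):
                    if not H.has_edge(nbrs[i], nbrs[j]):
                        H.add_edge(nbrs[i], nbrs[j])
                        return True
        return False

    def randomize_triangles(G, n_moves, seed=None):
        """Relocate up to n_moves triangles: remove one edge of an existing
        triangle and close a new triangle elsewhere, so the number of edges
        stays constant. A sketch of the idea, not the paper's exact method."""
        rng = random.Random(seed)
        H = G.copy()
        for _ in range(n_moves):
            # Edges that currently lie on at least one triangle.
            triangle_edges = [(u, v) for u, v in H.edges() if set(H[u]) & set(H[v])]
            if not triangle_edges:
                break
            u, v = rng.choice(triangle_edges)
            H.remove_edge(u, v)
            if not close_new_triangle(H, rng):
                H.add_edge(u, v)  # nowhere to relocate the triangle: undo
                break
        return H

    # Quick check on a small benchmark graph: the edge count is unchanged
    # while the triangles move (and their total usually changes slightly).
    G = nx.karate_club_graph()
    H = randomize_triangles(G, n_moves=20, seed=1)
    print(G.number_of_edges(), H.number_of_edges())
    print(sum(nx.triangles(G).values()) // 3, sum(nx.triangles(H).values()) // 3)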