    Probabilistic Inference Protection on Anonymized Data

    Abstract—Background knowledge is an important factor in privacy-preserving data publishing. Probabilistic distribution-based background knowledge is a powerful kind of background knowledge that is easily accessible to adversaries. However, to the best of our knowledge, no existing work can provide a privacy guarantee against adversaries equipped with such background knowledge. The difficulty of the problem lies in the high complexity of the probability computation and the non-monotone nature of the privacy condition; the only solution known to us relies on approximate algorithms with no known error bound. In this paper, we propose an algorithm that overcomes these difficulties by introducing a bounding condition on probability deviations in the anonymized data groups, which is much easier to compute and which is a monotone function of the group sizes. This bounding condition is also in harmony with the utility-preservation objective. Our empirical studies show that our method preserves data utility at a higher or comparable level when compared with some state-of-the-art algorithms that provide less protection.
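    To make the idea of a deviation bound concrete, here is a minimal, illustrative sketch (not the paper's actual algorithm; the function names, the choice of max-absolute-deviation as the distance measure, and the threshold `delta` are all assumptions). It checks whether each anonymized group's sensitive-value distribution stays within `delta` of the global distribution:

    ```python
    from collections import Counter

    def max_deviation(group, global_dist):
        """Max absolute deviation between a group's sensitive-value
        distribution and the global distribution (illustrative measure)."""
        counts = Counter(group)
        n = len(group)
        return max(abs(counts.get(v, 0) / n - p) for v, p in global_dist.items())

    def satisfies_bound(groups, delta):
        """Check a deviation bound on every anonymized group.

        `groups` is a list of groups, each a list of sensitive values;
        `delta` is the hypothetical per-group deviation threshold.
        """
        all_values = [v for g in groups for v in g]
        n = len(all_values)
        global_dist = {v: c / n for v, c in Counter(all_values).items()}
        return all(max_deviation(g, global_dist) <= delta for g in groups)

    # Two groups of sensitive attribute values; global distribution is 50/50,
    # and each group deviates by about 0.167 from it.
    groups = [["flu", "flu", "cold"], ["cold", "flu", "cold"]]
    print(satisfies_bound(groups, 0.25))  # → True
    print(satisfies_bound(groups, 0.1))   # → False
    ```

    Intuitively, such a condition tends to behave monotonically with group size: merging groups pulls the combined distribution toward the global one, which is the property the abstract highlights as making the check tractable.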