    Distributed privacy-preserving methods for statistical disclosure control

    Statistical disclosure control (SDC) methods aim to protect the privacy of confidential information contained in databases, for example by perturbing the non-confidential parts of the original data. Such methods are commonly used by statistical agencies before publication: the perturbed release must ensure privacy while preserving as much of the statistical information of the original data as possible. In this paper we consider the problem of designing distributed privacy-preserving versions of these perturbation methods: each part of the original database is owned by a different entity, and the entities want to jointly compute the perturbed version of the global database without leaking any sensitive information about their individual parts of the original data. We show that some perturbation methods do not admit a private distributed extension, whereas others do. Among the methods that admit a distributed privacy-preserving version are noise addition, resampling, and a new protection method, rank shuffling, which is described and analyzed here for the first time.
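
    As an illustration of the first of these methods, the sketch below shows a conventional noise-addition perturbation applied to non-confidential numeric attributes. It is a minimal, assumed example: the function name, the choice of Gaussian noise, and the calibration of its standard deviation to a fraction of each column's standard deviation are illustrative and are not taken from the paper.

    import numpy as np

    def perturb_noise_addition(data, relative_noise=0.1, rng=None):
        """Add zero-mean Gaussian noise to each numeric column.

        The noise standard deviation is relative_noise times each column's
        standard deviation, so column means are preserved in expectation and
        variances are inflated by a known, controllable amount.
        (Hypothetical calibration; not the paper's specification.)
        """
        rng = np.random.default_rng() if rng is None else rng
        col_std = data.std(axis=0, ddof=1)
        noise = rng.normal(0.0, relative_noise * col_std, size=data.shape)
        return data + noise

    # Example: perturb a small synthetic table of non-confidential numeric attributes.
    original = np.array([[34.0, 52000.0],
                         [29.0, 61000.0],
                         [45.0, 48000.0],
                         [51.0, 75000.0]])
    perturbed = perturb_noise_addition(original, rng=np.random.default_rng(0))

    In the distributed setting considered in the paper, each entity would apply such a perturbation only to its own part of the data, coordinating with the other owners so that the jointly published table corresponds to a perturbation of the global database without revealing the individual parts.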
