    Reducing Noise Level in Differential Privacy through Matrix Masking

    Differential privacy schemes have been widely adopted in recent years to address issues of data privacy protection. We propose a new Gaussian scheme, combined with another data protection technique called random orthogonal matrix masking, to achieve $(\varepsilon, \delta)$-differential privacy (DP) more efficiently. We prove that the additional matrix masking significantly reduces the rate of the noise variance required in the Gaussian scheme to achieve $(\varepsilon, \delta)$-DP in the big data setting. Specifically, when $\varepsilon \to 0$, $\delta \to 0$, and the sample size $n$ exceeds the number $p$ of attributes by $(n-p) = O(\ln(1/\delta))$, the additive noise variance required to achieve $(\varepsilon, \delta)$-DP is reduced from $O(\ln(1/\delta)/\varepsilon^2)$ to $O(1/\varepsilon)$. With much less noise added, the resulting differential-privacy-protected pseudo data sets allow much more accurate inferences and thus significantly broaden the scope of application for differential privacy.

    Comment: 31 pages
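
    As a rough illustration of the combined mechanism described in the abstract, the sketch below left-multiplies an n x p data matrix by a Haar-random orthogonal matrix and then adds i.i.d. Gaussian noise. The function name mask_and_perturb and the noise scale sigma are hypothetical, and the noise is not calibrated to the paper's $(\varepsilon, \delta)$ guarantee; it only shows the two-step structure of masking followed by Gaussian perturbation.

    import numpy as np

    def mask_and_perturb(X, sigma, rng=None):
        """Hypothetical sketch: mask the n x p data matrix X with a random
        orthogonal matrix, then add i.i.d. Gaussian noise of scale sigma
        (sigma is not calibrated to any specific privacy budget here)."""
        rng = np.random.default_rng() if rng is None else rng
        n, p = X.shape
        # Draw a Haar-random orthogonal matrix via QR decomposition of a
        # Gaussian matrix, fixing signs so the distribution is uniform.
        A = rng.standard_normal((n, n))
        Q, R = np.linalg.qr(A)
        Q = Q * np.sign(np.diag(R))
        masked = Q @ X                                        # orthogonal matrix masking
        return masked + rng.normal(scale=sigma, size=(n, p))  # Gaussian perturbation

    # Example: protect a toy 100 x 5 data set.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 5))
    X_private = mask_and_perturb(X, sigma=0.5, rng=rng)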