    On the Relationship Between Information-Theoretic Privacy Metrics And Probabilistic Information Privacy

    Information-theoretic (IT) measures based on f-divergences have recently gained interest as measures of privacy leakage because they allow privacy to be traded off against utility using only a single-value characterization. However, their operational interpretations in the privacy context are unclear. In this paper, we relate the notion of probabilistic information privacy (IP) to several IT privacy metrics based on f-divergences. We interpret probabilistic IP under both the detection and estimation frameworks and link it to differential privacy, thus allowing a precise operational interpretation of these IT privacy metrics. We show that the χ²-divergence privacy metric is stronger than those based on total variation distance and Kullback-Leibler divergence. We therefore develop a data-driven empirical risk framework based on the χ²-divergence privacy metric, realized using deep neural networks. This framework is agnostic to the adversarial attack model. Empirical experiments demonstrate the efficacy of our approach.
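    For quick reference (a standard definition, not taken from the paper): for a convex function f with f(1) = 0, the f-divergence between distributions P and Q is

        D_f(P \| Q) = \int f\!\left( \frac{dP}{dQ} \right) dQ,

    and the metrics compared above are the special cases f(t) = |t - 1|/2 (total variation distance), f(t) = t \log t (Kullback-Leibler divergence), and f(t) = (t - 1)^2 (χ²-divergence).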

    Decentralized detection with robust information privacy protection

    We consider a decentralized detection network whose aim is to infer a public hypothesis of interest. However, the raw sensor observations also allow the fusion center to infer private hypotheses that we wish to protect. We consider the case where there are an uncountable number of private hypotheses belonging to an uncertainty set, and develop local privacy mappings at every sensor so that the sanitized sensor information minimizes the Bayes error of detecting the public hypothesis at the fusion center while achieving information privacy for all private hypotheses. We introduce the concept of a most favorable hypothesis (MFH) and show how to find an MFH in the set of private hypotheses. By protecting the information privacy of the MFH, information privacy for every other private hypothesis is also achieved. We provide an iterative algorithm to find the optimal local privacy mappings, and derive some theoretical properties of these privacy mappings. The simulation results demonstrate that our proposed approach allows the fusion center to infer the public hypothesis with low error while protecting the information privacy of all the private hypotheses.

    Funding: Economic Development Board (EDB); Ministry of Education (MOE). This work was supported in part by the Singapore Ministry of Education Academic Research Fund Tier 1 under Grant 2017-T1-001-059 (RG20/17), in part by the Singapore Ministry of Education Academic Research Fund Tier 2 under Grant MOE2018-T2-2-019, and in part by the NTU-NXP Intelligent Transport System Test-Bed Living Lab Fund from the Economic Development Board, Singapore, under Grant S15-1105-RF-LLF. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Julien Bringer.
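    As a minimal numerical sketch of the setup in this abstract (not the paper's iterative algorithm; the alphabet sizes, distributions, and candidate mapping Q below are invented for illustration), a local privacy mapping can be modeled as a row-stochastic channel, after which both the public-hypothesis Bayes error and a divergence-based privacy leakage are easy to evaluate:

        import numpy as np

        # P(X | H=0) and P(X | H=1): public hypothesis over a 4-letter
        # observation alphabet (illustrative values).
        p0 = np.array([0.4, 0.3, 0.2, 0.1])
        p1 = np.array([0.1, 0.2, 0.3, 0.4])

        # P(X | G=0) and P(X | G=1): a private hypothesis we wish to hide.
        g0 = np.array([0.5, 0.1, 0.3, 0.1])
        g1 = np.array([0.1, 0.4, 0.1, 0.4])

        # Candidate local privacy mapping: row x gives P(Z | X=x).
        Q = np.array([
            [0.8, 0.1, 0.1],
            [0.6, 0.2, 0.2],
            [0.2, 0.2, 0.6],
            [0.1, 0.1, 0.8],
        ])

        def bayes_error(a, b):
            # Bayes error for deciding between two equally likely distributions.
            return 0.5 * np.minimum(a, b).sum()

        def tv_leakage(a, b):
            # Total variation distance, used here as a simple leakage proxy.
            return 0.5 * np.abs(a - b).sum()

        # Push the hypothesis-conditional distributions through the channel.
        p0z, p1z = p0 @ Q, p1 @ Q
        g0z, g1z = g0 @ Q, g1 @ Q

        print(f"public Bayes error : {bayes_error(p0z, p1z):.3f}  (utility: want low)")
        print(f"private TV leakage : {tv_leakage(g0z, g1z):.3f}  (privacy: want low)")

    The paper instead optimizes the mappings jointly across sensors and guards against a whole uncertainty set of private hypotheses via the MFH; this sketch only shows how a single fixed mapping trades detection utility against leakage.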