Privacy In Multi-Agent And Dynamical Systems
The use of private data is pivotal for numerous services including location-based ones, collaborative recommender systems, and social networks. Despite the utility these services provide, the usage of private data raises privacy concerns for their owners. Noise-injecting techniques, such as differential privacy, address these concerns by adding artificial noise such that an adversary with access to the published response cannot confidently infer the private data. Particularly, in multi-agent and dynamical environments, privacy-preserving techniques need to be expressive enough to capture time-varying privacy needs, multiple data owners, and multiple data users. Current work in differential privacy assumes that a single response gets published and a single predefined privacy guarantee is provided. This work relaxes these assumptions by providing several problem formulations and their approaches. In the setting of a social network, a data owner has different privacy needs against different users. We design a coalition-free privacy-preserving mechanism that allows a data owner to diffuse their private data over a network. We also formulate the problem of multiple data owners that provide their data to multiple data users. Also, for time-varying privacy needs, we prove that, for a class of existing privacy-preserving mechanisms, it is possible to effectively relax privacy constraints gradually. Additionally, we provide a privacy-aware mechanism for time-varying private data, where we wish to protect only its current value. Finally, in the context of location-based services, we provide a mechanism where the strength of the privacy guarantees varies with the local population density. These contributions increase the applicability of differential privacy and set future directions for more flexible and expressive privacy guarantees.
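The noise-injection idea this abstract builds on can be sketched with the classic Laplace mechanism, a standard single-release construction from the differential-privacy literature (shown here for illustration; it is not the thesis's multi-agent mechanisms). Noise with scale `sensitivity / epsilon` is added to the true value, yielding an epsilon-differentially-private release:

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value plus Laplace noise with scale sensitivity/epsilon,
    the standard epsilon-differentially-private mechanism for one query."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse-CDF transform.
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

A smaller epsilon means a larger noise scale and therefore a stronger privacy guarantee, which is the knob that a time-varying or population-density-dependent scheme would adjust.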
On the Measurement of Privacy as an Attacker's Estimation Error
A wide variety of privacy metrics have been proposed in the literature to
evaluate the level of protection offered by privacy-enhancing technologies.
Most of these metrics are specific to concrete systems and adversarial models,
and are difficult to generalize or translate to other contexts. Furthermore, a
better understanding of the relationships between the different privacy metrics
is needed to enable a more grounded and systematic approach to measuring privacy,
as well as to assist systems designers in selecting the most appropriate metric
for a given application.
In this work we propose a theoretical framework for privacy-preserving
systems, endowed with a general definition of privacy in terms of the
estimation error incurred by an attacker who aims to disclose the private
information that the system is designed to conceal. We show that our framework
permits interpreting and comparing a number of well-known metrics under a
common perspective. The arguments behind these interpretations are based on
fundamental results related to the theories of information, probability and
Bayes decision.
Comment: This paper has 18 pages and 17 figures.
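The estimation-error view of privacy can be illustrated with a minimal special case (not the paper's full framework): under a 0/1 loss, a Bayes-optimal attacker guesses the mode of their posterior over the secret, so the residual estimation error is one minus the largest posterior probability:

```python
def min_estimation_error(posterior: dict) -> float:
    """Attacker's minimum expected error under 0/1 (Hamming) loss.

    The Bayes-optimal guess is the posterior mode, so the attacker's
    irreducible error probability is 1 minus the top posterior mass.
    A higher value means the system leaks less about the secret.
    """
    return 1.0 - max(posterior.values())
```

For a uniform posterior over n secrets this gives 1 - 1/n, the maximum achievable for that support size, matching the intuition that a perfectly uninformative system maximizes the attacker's error.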
DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep neural networks
Recent deep learning models have shown remarkable performance in image
classification. While these deep learning systems are getting closer to
practical deployment, the common assumption made about data is that it does not
carry any sensitive information. This assumption may not hold for many
practical cases, especially in the domain where an individual's personal
information is involved, like healthcare and facial recognition systems. We
posit that selectively removing features in this latent space can protect the
sensitive information and provide a better privacy-utility trade-off.
Consequently, we propose DISCO, which learns a dynamic and data-driven pruning
filter to selectively obfuscate sensitive information in the feature space. We
propose diverse attack schemes for sensitive inputs & attributes and
demonstrate the effectiveness of DISCO against state-of-the-art methods through
quantitative and qualitative evaluation. Finally, we also release an evaluation
benchmark dataset of 1 million sensitive representations to encourage rigorous
exploration of novel attack schemes.
Comment: Presented at CVPR 202
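The channel-pruning idea behind selective obfuscation can be sketched as masking out high-sensitivity feature channels before the representation leaves the client. The function and score inputs below are hypothetical stand-ins; DISCO's actual filter is learned end to end, not thresholded like this:

```python
import numpy as np

def obfuscate_channels(features: np.ndarray, scores: np.ndarray,
                       threshold: float) -> np.ndarray:
    """Zero out feature channels whose sensitivity score exceeds threshold.

    features: (C, H, W) latent representation.
    scores:   (C,) per-channel sensitivity scores (hypothetical; DISCO
              learns its pruning filter rather than using a fixed rule).
    """
    keep = (scores <= threshold).astype(features.dtype)  # 1 = keep, 0 = prune
    return features * keep[:, None, None]  # broadcast mask over spatial dims
```

Pruned channels carry no information downstream, which is the source of the privacy-utility trade-off the abstract describes: pruning more channels hides more sensitive attributes but also degrades the task features.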