Trust-based model for privacy control in context aware systems
In context-aware systems, there is a high demand for privacy solutions that protect users when they interact and exchange personal information. Privacy in this context encompasses reasoning about the trust and risk involved in interactions between users. Trust, therefore, controls the amount of information that can be revealed, and risk analysis allows us to evaluate the expected benefit that would motivate users to participate in these interactions. In this paper, we propose a trust-based model for privacy control in context-aware systems that incorporates both trust and risk. This approach makes clear how to reason about trust and risk when designing and implementing context-aware systems that provide mechanisms to protect users' privacy. Our approach also includes experiential learning mechanisms that use past observations to reach better decisions in future interactions. The model outlined in this paper is an attempt to address the concerns of privacy control in context-aware systems. To validate the model, we are currently applying it to a context-aware system that tracks users' location. We hope to report on the performance evaluation and the implementation experience in the near future.
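For illustration only (this is not the authors' model), the Python sketch below shows the kind of decision the abstract describes: a trust score gates how much context is disclosed, a risk-weighted expected benefit decides whether to participate at all, and trust is updated from the outcome of each interaction. All thresholds, weights, and the update rule are hypothetical.

# Illustrative sketch, not the paper's model: trust gates disclosure,
# expected benefit weighs participation, and trust is learned from outcomes.

def expected_benefit(benefit: float, risk: float, trust: float) -> float:
    """Expected net benefit of disclosing, discounted by the risk of misuse."""
    return trust * benefit - (1.0 - trust) * risk

def disclosure_level(trust: float, levels=("none", "coarse", "precise")) -> str:
    """Map a trust score in [0, 1] to a granularity of revealed context."""
    if trust < 0.3:
        return levels[0]
    if trust < 0.7:
        return levels[1]
    return levels[2]

def update_trust(trust: float, positive_outcome: bool, rate: float = 0.1) -> float:
    """Experiential learning: nudge trust toward 1 or 0 after each interaction."""
    target = 1.0 if positive_outcome else 0.0
    return trust + rate * (target - trust)

# Example: a requester with moderate trust asking for location data.
trust = 0.55
if expected_benefit(benefit=1.0, risk=0.8, trust=trust) > 0:
    print("share", disclosure_level(trust), "location")
trust = update_trust(trust, positive_outcome=True)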
Context-Aware Generative Adversarial Privacy
Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model, and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
Comment: Improved version of a paper accepted by Entropy Journal, Special Issue on Information Theory in Machine Learning and Data Science
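As a rough illustration of the minimax game described above (a sketch assuming PyTorch, not the authors' reference implementation), a privatizer network can be trained against an adversary network on a synthetic binary data model, with a squared-error penalty standing in for the distortion constraint:

# Minimal GAP-style sketch on synthetic data; all architectures, weights,
# and the penalty form are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: private bit S and a correlated public feature X.
n = 4096
S = torch.randint(0, 2, (n, 1)).float()
X = (S + 0.5 * torch.randn(n, 1) > 0.5).float()

# Privatizer maps (X, noise) to a sanitized release; the adversary tries to recover S.
privatizer = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(privatizer.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
distortion_weight = 2.0  # penalty standing in for the distortion (utility) constraint

for step in range(500):
    noise = torch.rand(n, 1)                       # randomization source for the privatizer
    x_hat = privatizer(torch.cat([X, noise], dim=1))

    # Adversary step: improve inference of S from the sanitized data.
    adv_loss = bce(adversary(x_hat.detach()), S)
    opt_a.zero_grad()
    adv_loss.backward()
    opt_a.step()

    # Privatizer step: make inference hard while keeping x_hat close to X.
    priv_loss = -bce(adversary(x_hat), S) + distortion_weight * ((x_hat - X) ** 2).mean()
    opt_p.zero_grad()
    priv_loss.backward()
    opt_p.step()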
Decentralized Differentially Private Without-Replacement Stochastic Gradient Descent
While machine learning has achieved remarkable results in a wide variety of domains, training models often requires large datasets that may need to be collected from different individuals. As sensitive information may be contained in an individual's dataset, sharing training data may lead to severe privacy concerns. Therefore, there is a compelling need to develop privacy-aware machine learning methods, for which one effective approach is to leverage the generic framework of differential privacy. Considering that stochastic gradient descent (SGD) is one of the most widely adopted methods for large-scale machine learning problems, two decentralized differentially private SGD algorithms are proposed in this work. In particular, we focus on SGD without replacement due to its favorable structure for practical implementation. In addition, both privacy and convergence analyses are provided for the proposed algorithms. Finally, extensive experiments are performed to verify the theoretical results and demonstrate the effectiveness of the proposed algorithms.
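A minimal single-node sketch of the without-replacement DP-SGD pattern the abstract refers to (not the paper's decentralized algorithm; the clip norm, noise multiplier, and learning rate are placeholder values) looks like this in Python with NumPy:

# Single-node DP-SGD without replacement: one shuffled pass per epoch,
# per-example gradient clipping, and Gaussian noise on each update.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

w = np.zeros(d)
lr, C, sigma = 0.1, 1.0, 1.0  # illustrative learning rate, clip norm, noise multiplier

def grad_logistic(w, x, y):
    """Per-example gradient of the logistic loss."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

for epoch in range(5):
    for i in rng.permutation(n):                       # without-replacement pass over the data
        g = grad_logistic(w, X[i], y[i])
        g = g / max(1.0, np.linalg.norm(g) / C)        # clip to norm C
        g = g + rng.normal(scale=sigma * C, size=d)    # Gaussian noise for DP
        w -= lr * g

In the decentralized setting the paper considers, each node would additionally exchange or average its iterate with its neighbors between local updates; that communication step is omitted from this sketch.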
Differentially Private Sharpness-Aware Training
Training deep learning models with differential privacy (DP) results in a degradation of performance. The training dynamics of models trained with DP differ significantly from standard training, yet the geometric properties of private learning remain largely unexplored. In this paper, we investigate sharpness, a key factor in achieving better generalization, in private learning. We show that flat minima can help reduce the negative effects of per-example gradient clipping and the addition of Gaussian noise. We then verify the effectiveness of Sharpness-Aware Minimization (SAM) for seeking flat minima in private learning. However, we also find that SAM is detrimental to the privacy budget and computational time due to its two-step optimization. Thus, we propose a new sharpness-aware training method that mitigates the privacy-optimization trade-off. Our experimental results demonstrate that the proposed method improves the performance of deep learning models with DP in both training from scratch and fine-tuning. Code is available at https://github.com/jinseongP/DPSAT.
Comment: ICML 2023
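To make the privacy-budget issue concrete, the sketch below (assuming PyTorch; it is not the DPSAT implementation, and it uses batch-level rather than per-example clipping for brevity) shows why naively combining SAM with DP training is costly: each update performs two gradient computations on private data, the ascent step that perturbs the weights and the descent step at the perturbed point.

# Naive SAM + DP-style update on a toy model; rho, lr, C, sigma are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
rho, lr, C, sigma = 0.05, 0.1, 1.0, 1.0

# 1) Ascent step: perturb weights along the (private) gradient direction.
loss = loss_fn(model(x), y)
grads = torch.autograd.grad(loss, list(model.parameters()))
scale = rho / (torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12)
perturbations = [scale * g for g in grads]
with torch.no_grad():
    for p, e in zip(model.parameters(), perturbations):
        p.add_(e)

# 2) Descent step at the perturbed point. Only this gradient is clipped and
#    noised here; a fully private variant would also have to account for the
#    ascent gradient, which is exactly why SAM strains the privacy budget.
loss = loss_fn(model(x), y)
grads = torch.autograd.grad(loss, list(model.parameters()))
with torch.no_grad():
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    clip = min(1.0, C / (total_norm.item() + 1e-12))
    for p, g, e in zip(model.parameters(), grads, perturbations):
        noisy = g * clip + sigma * C * torch.randn_like(g)
        p.sub_(e)           # undo the SAM perturbation
        p.sub_(lr * noisy)  # DP-SGD style update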