Context-Aware Generative Adversarial Privacy
Preserving the utility of published datasets while simultaneously providing
provable privacy guarantees is a well-known challenge. On the one hand,
context-free privacy solutions, such as differential privacy, provide strong
privacy guarantees, but often lead to a significant reduction in utility. On
the other hand, context-aware privacy solutions, such as information-theoretic
privacy, achieve an improved privacy-utility tradeoff, but assume that the data
holder has access to dataset statistics. We circumvent these limitations by
introducing a novel context-aware privacy framework called generative
adversarial privacy (GAP). GAP leverages recent advancements in generative
adversarial networks (GANs) to allow the data holder to learn privatization
schemes from the dataset itself. Under GAP, learning the privacy mechanism is
formulated as a constrained minimax game between two players: a privatizer that
sanitizes the dataset in a way that limits the risk of inference attacks on the
individuals' private variables, and an adversary that tries to infer the
private variables from the sanitized dataset. To evaluate GAP's performance, we
investigate two simple (yet canonical) statistical dataset models: (a) the
binary data model, and (b) the binary Gaussian mixture model. For both models,
we derive game-theoretically optimal minimax privacy mechanisms, and show that
the privacy mechanisms learned from data (in a generative adversarial fashion)
match the theoretically optimal ones. This demonstrates that our framework can
be easily applied in practice, even in the absence of dataset statistics.

Comment: Improved version of a paper accepted by Entropy Journal, Special
Issue on Information Theory in Machine Learning and Data Science
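The constrained minimax game described above can be illustrated with a minimal numpy sketch on the binary Gaussian mixture model. The one-parameter shift privatizer, the soft distortion penalty `lam * t**2`, and all constants below are illustrative assumptions, not the paper's actual mechanism or GAN architecture; the sketch only shows the alternating-gradient structure of the privatizer-vs-adversary game.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary Gaussian mixture: private bit S, observation X = (2S-1)*mu + N(0,1)
n, mu, lam = 5000, 1.5, 0.5
S = rng.integers(0, 2, size=n)
X = (2 * S - 1) * mu + rng.normal(size=n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Privatizer (hypothetical one-parameter scheme): shift each class toward
# zero, Xh = X - t*(2S-1). Larger t means more privacy but more distortion.
t = 0.0
# Adversary: logistic regression trying to infer S from the sanitized Xh.
w, b = 0.1, 0.0

lr = 0.05
for _ in range(2000):
    Xh = X - t * (2 * S - 1)
    p = sigmoid(w * Xh + b)
    # Adversary step: gradient descent on its cross-entropy loss.
    w -= lr * np.mean((p - S) * Xh)
    b -= lr * np.mean(p - S)
    # Privatizer step: maximize the adversary's loss, penalized by
    # distortion lam*t^2 (a soft stand-in for GAP's distortion constraint).
    dce_dt = np.mean((p - S) * w * -(2 * S - 1))  # d(adversary loss)/dt
    t -= lr * (-dce_dt + 2 * lam * t)

acc_raw = np.mean((X > 0) == S)   # Bayes rule on the raw data
acc_priv = np.mean((sigmoid(w * (X - t * (2 * S - 1)) + b) > 0.5) == S)
print(t, acc_raw, acc_priv)
```

At equilibrium the learned shift t balances the adversary's loss against the distortion penalty, so the trained adversary's accuracy on the sanitized data falls below the accuracy achievable on the raw data.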
Accelerated Primal-dual Scheme for a Class of Stochastic Nonconvex-concave Saddle Point Problems
Stochastic nonconvex-concave min-max saddle point problems appear in many
machine learning and control problems including distributionally robust
optimization, generative adversarial networks, and adversarial learning. In
this paper, we consider a class of nonconvex saddle point problems where the
objective function satisfies the Polyak-Łojasiewicz condition with respect
to the minimization variable and it is concave with respect to the maximization
variable. The existing methods for solving nonconvex-concave saddle point
problems often suffer from slow convergence and/or contain multiple loops. Our
main contribution lies in proposing a novel single-loop accelerated primal-dual
algorithm with new convergence rate results appearing for the first time in the
literature, to the best of our knowledge. In particular, in the stochastic
regime, we demonstrate a convergence rate of … to find an ε-gap
solution, which can be improved to … in the deterministic setting.