New Program Abstractions for Privacy
Static program analysis, once seen primarily as a tool for optimising programs, is now increasingly important as a means to provide quality guarantees about programs. One measure of quality is the extent to which programs respect the privacy of user data. Differential privacy is a rigorous quantified definition of privacy which guarantees a bound on the loss of privacy due to the release of statistical queries. Among the benefits enjoyed by the definition of differential privacy are compositionality properties that allow differentially private analyses to be built from pieces and combined in various ways. This has led to the development of frameworks for the construction of differentially private program analyses which are private-by-construction. Past frameworks assume that the sensitive data is collected centrally and processed by a trusted curator. However, the main examples of differential privacy applied in practice - for example in Google Chrome's collection of browsing statistics, or Apple's training of predictive messaging in iOS 10 - use a purely local mechanism applied at the data source, thus avoiding the collection of sensitive data altogether. While this is a benefit of the local approach, with systems like Apple's, users are required to completely trust that the analysis running on their system has the claimed privacy properties.
In this position paper we outline some key challenges in developing static analyses for analysing differential privacy, and propose novel abstractions for describing the behaviour of probabilistic programs not previously used in static analyses.
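As a point of reference for the local model mentioned above, here is a minimal sketch (not from the paper) of randomized response, the textbook locally differentially private mechanism of the kind underlying browser- and device-side collection; the function name and the choice of eps are illustrative assumptions.

    import math
    import random

    def randomized_response(bit, eps):
        # Report the true bit with probability e^eps / (e^eps + 1),
        # otherwise report its negation. Each user perturbs locally,
        # so the raw value never leaves the device; the output satisfies
        # eps-differential privacy for that single bit.
        p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
        return bit if random.random() < p_truth else not bit

    # Example: the aggregator only ever sees the noisy reports.
    noisy_reports = [randomized_response(b, eps=1.0) for b in (True, False, True)]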
Privacy and Truthful Equilibrium Selection for Aggregative Games
We study a very general class of games --- multi-dimensional aggregative games --- which in particular generalize both anonymous games and weighted congestion games. For any such game that is also large, we solve the equilibrium selection problem in a strong sense. In particular, we give an efficient weak mediator: a mechanism which has only the power to listen to reported types and provide non-binding suggested actions, such that (a) it is an asymptotic Nash equilibrium for every player to truthfully report their type to the mediator and then follow its suggested action; and (b) when players do so, they end up coordinating on a particular asymptotic pure strategy Nash equilibrium of the induced complete information game. In fact, truthful reporting is an ex-post Nash equilibrium of the mediated game, so our solution applies even in settings of incomplete information, and even when player types are arbitrary or worst-case (i.e. not drawn from a common prior). We achieve this by giving an efficient differentially private algorithm for computing a Nash equilibrium in such games. The rates of convergence to equilibrium in all of our results are inverse polynomial in the number of players. We also apply our main results to a multi-dimensional market game. Our results can be viewed as giving, for a rich class of games, a more robust version of the Revelation Principle, in that we work with weaker informational assumptions (no common prior), yet provide a stronger solution concept (ex-post Nash versus Bayes Nash equilibrium). In comparison to previous work, our main conceptual contribution is showing that weak mediators are a game-theoretic object that exists in a wide variety of games --- previously, they were only known to exist in traffic routing games.
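To make the mediator's interface concrete, the following is a minimal sketch under illustrative assumptions (the representation of types and actions, and the black-box private equilibrium computation, are placeholders rather than the paper's construction): the mediator only listens to reported types and returns non-binding suggested actions.

    def weak_mediator(reported_types, private_equilibrium):
        # reported_types: dict mapping player id -> reported type (possibly untruthful).
        # private_equilibrium: black-box differentially private procedure mapping
        # a type profile to an (approximate) Nash equilibrium action profile.
        suggestions = private_equilibrium(reported_types)
        # Suggestions are non-binding: each player is free to ignore them,
        # but truthful reporting followed by obedience forms an equilibrium.
        return suggestions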
Private Multiplicative Weights Beyond Linear Queries
A wide variety of fundamental data analyses in machine learning, such as linear and logistic regression, require minimizing a convex function defined by the data. Since the data may contain sensitive information about individuals, and these analyses can leak that sensitive information, it is important to be able to solve convex minimization in a privacy-preserving way.
A series of recent results show how to accurately solve a single convex minimization problem in a differentially private manner. However, the same data is often analyzed repeatedly, and little is known about solving multiple convex minimization problems with differential privacy. For simpler data analyses, such as linear queries, there are remarkable differentially private algorithms such as the private multiplicative weights mechanism (Hardt and Rothblum, FOCS 2010) that accurately answer exponentially many distinct queries. In this work, we extend these results to the case of convex minimization and show how to give accurate and differentially private solutions to *exponentially many* convex minimization problems on a sensitive dataset.
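For intuition about the linear-query baseline being extended, here is a much-simplified sketch in the spirit of the private multiplicative weights approach; the noise scales, selection rule, and stopping rule are illustrative assumptions, not the mechanism of Hardt and Rothblum or the algorithm of this paper.

    import numpy as np

    def private_mw_linear(data_hist, queries, rounds, eps_round, rng=None):
        # data_hist: true histogram over the data universe (numpy array summing to n).
        # queries:   list of 0/1 numpy vectors; a linear query answers q @ hist / n.
        # Maintains a synthetic histogram; each round it privately picks a
        # badly-answered query, obtains a noisy answer, and reweights toward it.
        rng = rng or np.random.default_rng()
        n = data_hist.sum()
        synth = np.full(len(data_hist), n / len(data_hist))
        for _ in range(rounds):
            true_ans = np.array([q @ data_hist / n for q in queries])
            synth_ans = np.array([q @ synth / n for q in queries])
            errors = np.abs(true_ans - synth_ans)
            # Noisy selection of a query with large error (simplified report-noisy-max).
            i = int(np.argmax(errors + rng.laplace(0.0, 2.0 / (eps_round * n), len(queries))))
            noisy_ans = true_ans[i] + rng.laplace(0.0, 2.0 / (eps_round * n))
            # Multiplicative weights update toward the noisy answer.
            synth = synth * np.exp(queries[i] * (noisy_ans - synth_ans[i]) / 2.0)
            synth = synth * (n / synth.sum())
        return synth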
Order-Revealing Encryption and the Hardness of Private Learning
An order-revealing encryption scheme gives a public procedure by which two ciphertexts can be compared to reveal the ordering of their underlying plaintexts. We show how to use order-revealing encryption to separate computationally efficient PAC learning from efficient differentially private PAC learning. That is, we construct a concept class that is efficiently PAC learnable, but for which every efficient learner fails to be differentially private. This answers a question of Kasiviswanathan et al. (FOCS '08, SIAM J. Comput. '11).
To prove our result, we give a generic transformation from an order-revealing encryption scheme into one with strongly correct comparison, which enables the consistent comparison of ciphertexts that are not obtained as the valid encryption of any message. We believe this construction may be of independent interest.
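To make the interface concrete, here is a toy sketch of the shape of an order-revealing encryption scheme (key generation, encryption, and a public comparison procedure); the monotone "encryption" below has no security whatsoever and only illustrates the API, not any construction from the paper.

    import secrets

    def ore_keygen():
        # Toy secret key; a real scheme would sample a cryptographic key.
        return secrets.randbelow(2**32)

    def ore_encrypt(sk, m):
        # Deliberately insecure, monotone toy encoding: it preserves order so the
        # public comparison below reveals exactly the ordering of the plaintexts.
        return m + sk

    def ore_compare(c1, c2):
        # Public procedure: anyone can compare two ciphertexts (under the same key)
        # and learn only the ordering of their underlying plaintexts.
        return (c1 > c2) - (c1 < c2)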
Algorithms For Extracting Timeliness Graphs
We consider asynchronous message-passing systems in which some links are timely and processes may crash. Each run defines a timeliness graph among correct processes: (p, q) is an edge of the timeliness graph if the link from p to q is timely (that is, there is a bound on communication delays from p to q). The main goal of this paper is to approximate this timeliness graph by graphs having some properties (such as being trees, rings, ...). Given a family S of graphs, for runs in which the timeliness graph contains at least one graph in S, an extraction algorithm requires each correct process to converge to the same graph in S, which is, in a precise sense, an approximation of the timeliness graph of the run. For example, if the timeliness graph contains a ring, then using an extraction algorithm, all correct processes eventually converge to the same ring, and in this ring all nodes will be correct processes and all links will be timely. We first present a general extraction algorithm and then a more specific extraction algorithm that is communication efficient (i.e., eventually all the messages of the extraction algorithm use only links of the extracted graph).
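A small sketch of the timeliness-graph definition; the finite sample of per-link delays and the explicit bound are illustrative simplifications of "there is a bound on communication delays over the run".

    def timeliness_graph(delays, bound):
        # delays: dict mapping a directed link (p, q) to the delays observed on it.
        # Edge (p, q) is in the timeliness graph iff the link from p to q is timely,
        # i.e. all its observed delays stay within the bound.
        return {link for link, ds in delays.items() if ds and max(ds) <= bound}

    # Example: only the timely links survive.
    g = timeliness_graph({("p", "q"): [1.0, 2.5], ("q", "p"): [9.0]}, bound=3.0)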
Fair Inputs and Fair Outputs: The Incompatibility of Fairness in Privacy and Accuracy
Fairness concerns about algorithmic decision-making systems have been mainly focused on the outputs (e.g., the accuracy of a classifier across individuals or groups). However, one may additionally be concerned with fairness in the inputs. In this paper, we propose and formulate two properties regarding the inputs of (features used by) a classifier. In particular, we claim that fair privacy (whether individuals are all asked to reveal the same information) and need-to-know (whether users are only asked for the minimal information required for the task at hand) are desirable properties of a decision system. We explore the interaction between these properties and fairness in the outputs (fair prediction accuracy). We show that for an optimal classifier these three properties are in general incompatible, and we explain what common properties of data make them incompatible. Finally, we provide an algorithm to verify if the trade-off between the three properties exists in a given dataset, and use the algorithm to show that this trade-off is common in real data.
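The two input-side properties can be phrased as simple predicates over the features requested from each individual; the representation below (a mapping from individuals to requested feature sets, plus a given task-minimal set) is an assumption for illustration, not the paper's formalization.

    def fair_privacy(requested):
        # Fair privacy: every individual is asked to reveal the same information.
        return len({frozenset(fs) for fs in requested.values()}) <= 1

    def need_to_know(requested, minimal):
        # Need-to-know: each individual is asked only for the minimal
        # information required for the task at hand (given here as `minimal`).
        return all(features <= minimal.get(person, set())
                   for person, features in requested.items())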
Quantitative Information Flow and Applications to Differential Privacy
Secure information flow is the problem of ensuring that the information made publicly available by a computational system does not leak information that should be kept secret. Since it is practically impossible to avoid leakage entirely, in recent years there has been a growing interest in considering the quantitative aspects of information flow, in order to measure and compare the amount of leakage. Information theory is widely regarded as a natural framework to provide firm foundations to quantitative information flow. In these notes we review the two main information-theoretic approaches that have been investigated: the one based on Shannon entropy, and the one based on Rényi min-entropy. Furthermore, we discuss some applications in the area of privacy. In particular, we consider statistical databases and the recently-proposed notion of differential privacy. Using the information-theoretic view, we discuss the bound that differential privacy induces on leakage, and the trade-off between utility and privacy.
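For reference, the min-entropy (vulnerability-based) measure can be made concrete as follows, in standard information-flow notation (not necessarily the exact notation of these notes): for a secret X and observable output Y,

\[
V(X) = \max_{x} P[X = x], \qquad
V(X \mid Y) = \sum_{y} \max_{x} P[X = x]\, P[Y = y \mid X = x],
\]
\[
\mathcal{L} \;=\; H_\infty(X) - H_\infty(X \mid Y) \;=\; \log_2 \frac{V(X \mid Y)}{V(X)},
\]

so the leakage measures how much an adversary's chance of guessing the secret in one try improves after observing the output.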
Information Leakage Games
We consider a game-theoretic setting to model the interplay between attacker and defender in the context of information flow, and to reason about their optimal strategies. In contrast with standard game theory, in our games the utility of a mixed strategy is a convex function of the distribution on the defender's pure actions, rather than the expected value of their utilities. Nevertheless, the important properties of game theory, notably the existence of a Nash equilibrium, still hold for our (zero-sum) leakage games, and we provide algorithms to compute the corresponding optimal strategies. As is typical in (simultaneous) game theory, the optimal strategy is usually mixed, i.e., probabilistic, for both the attacker and the defender. From the point of view of information flow, this was to be expected in the case of the defender, since it is well known that randomization at the level of the system design may help to reduce information leaks. Regarding the attacker, however, this seems to be the first work (w.r.t. the literature on information flow) proving formally that in certain cases the optimal attack strategy is necessarily probabilistic.
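As a point of comparison (notation illustrative, not taken from the paper): in a standard zero-sum game the utility of a mixed-strategy pair is the expected value, hence bilinear in the two distributions, whereas here, for a fixed attacker strategy, the payoff is a convex (not merely linear) function of the defender's distribution:

\[
U_{\mathrm{std}}(\alpha, \beta) \;=\; \sum_{a}\sum_{d} \alpha(a)\,\beta(d)\, u(a, d),
\qquad
U_{\mathrm{leak}}(\alpha, \beta) \;=\; f_{\alpha}(\beta) \ \text{ with } f_{\alpha} \text{ convex in } \beta .
\]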
Introduction to Arithmetic Mirror Symmetry
We describe how to find period integrals and Picard-Fuchs differential equations for certain one-parameter families of Calabi-Yau manifolds. These families can be seen as varieties over a finite field, in which case we show in an explicit example that the number of points of a generic element can be given in terms of p-adic period integrals. We also discuss several approaches to finding zeta functions of mirror manifolds and their factorizations. These notes are based on lectures given at the Fields Institute during the thematic program on Calabi-Yau Varieties: Arithmetic, Geometry, and Physics.
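For orientation (a generic textbook form, not a statement from the notes): for a one-parameter family \(X_\lambda\) with holomorphic form \(\Omega_\lambda\), a period is an integral over a cycle, and the periods satisfy an ordinary differential equation in the parameter,

\[
\pi(\lambda) \;=\; \int_{\gamma} \Omega_{\lambda},
\qquad
\sum_{i=0}^{n} p_i(\lambda)\, \frac{d^{i}\pi}{d\lambda^{i}} \;=\; 0,
\]

with polynomial (or rational) coefficients \(p_i\); this is the Picard-Fuchs equation of the family.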
Generalized bisimulation metrics
The pseudometric based on the Kantorovich lifting is one of the most popular notions of distance between probabilistic processes proposed in the literature. However, its application in verification is limited to linear properties. We propose a generalization which makes it possible to deal with a wider class of properties, such as those used in security and privacy. More precisely, we propose a family of pseudometrics, parametrized on a notion of distance which depends on the property we want to verify. Furthermore, we show that the members of this family still characterize bisimilarity in terms of their kernel, and provide a bound on the corresponding distance between trace distributions. Finally, we study the instance corresponding to differential privacy, and we show that it has a dual form, easier to compute. We also prove that the typical process-algebra constructs are non-expansive, thus paving the way to a modular approach to verification.
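For reference, the Kantorovich lifting that the pseudometric is based on can be stated as follows (standard formulation; the paper's generalization parametrizes the underlying distance): given a (pseudo)metric d on states, its lifting to distributions is

\[
K(d)(\mu, \nu) \;=\; \min_{\omega \in \Omega(\mu, \nu)} \ \sum_{s, s'} \omega(s, s')\, d(s, s'),
\]

where \(\Omega(\mu, \nu)\) is the set of couplings of \(\mu\) and \(\nu\), i.e. joint distributions whose marginals are \(\mu\) and \(\nu\); by Kantorovich-Rubinstein duality this equals the supremum of \(|\mathbb{E}_\mu f - \mathbb{E}_\nu f|\) over non-expansive real-valued f.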
