Formal Verification of Differential Privacy for Interactive Systems
Differential privacy is a promising approach to privacy-preserving data
analysis with a well-developed theory for functions. Despite recent work on
implementing systems that aim to provide differential privacy, the problem of
formally verifying that these systems have differential privacy has not been
adequately addressed. This paper presents the first results towards automated
verification of source code for differentially private interactive systems. We
develop a formal probabilistic automaton model of differential privacy for
systems by adapting prior work on differential privacy for functions. The main
technical result of the paper is a sound proof technique based on a form of
probabilistic bisimulation relation for proving that a system modeled as a
probabilistic automaton satisfies differential privacy. The novelty lies in the
way we track quantitative privacy leakage bounds using a relation family
instead of a single relation. We illustrate the proof technique on a
representative automaton motivated by PINQ, an implemented system that is
intended to provide differential privacy. To make our proof technique easier to
apply to realistic systems, we prove a form of refinement theorem and apply it
to show that a refinement of the abstract PINQ automaton also satisfies our
differential privacy definition. Finally, we begin the process of automating
our proof technique by providing an algorithm for mechanically checking a
restricted class of the relations it uses.
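For concreteness, the quantitative leakage bound that such a definition constrains can be computed directly for a small, discrete mechanism. The sketch below is a minimal illustration of that per-output check, not the paper's automaton model or relation-family proof technique; the randomized-response mechanism and all names in it are assumptions made for the example.

```python
import math

def max_privacy_loss(dist_a, dist_b):
    """Worst-case log-ratio between two output distributions, given as dicts
    mapping outcomes to probabilities."""
    loss = 0.0
    for outcome in set(dist_a) | set(dist_b):
        p = dist_a.get(outcome, 0.0)
        q = dist_b.get(outcome, 0.0)
        if (p > 0) != (q > 0):
            return math.inf  # some outcome reveals the input outright
        if p > 0:
            loss = max(loss, abs(math.log(p / q)))
    return loss

# Randomized response on one sensitive bit: answer truthfully with probability 0.75.
dist_if_one  = {1: 0.75, 0: 0.25}   # output distribution when the true bit is 1
dist_if_zero = {1: 0.25, 0: 0.75}   # output distribution when the true bit is 0

eps = max_privacy_loss(dist_if_one, dist_if_zero)
print(f"worst-case privacy loss: {eps:.3f} (eps-DP holds for any eps >= ln 3 = {math.log(3):.3f})")
```

Running the script reports a loss of ln 3 ≈ 1.099, the smallest ε for which this mechanism is differentially private.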
Using Participants' Utility Functions to Compare Versions of Differential Privacy
We use decision theory to compare variants of differential privacy from the
perspective of prospective study participants. We posit the existence of a
preference ordering on the set of potential consequences that study
participants can incur, which enables the analysis of individual utility
functions. Drawing upon the theory of measurement, we argue that changes in
expected utilities should be measured via the classic Euclidean metric. We then
consider the question of which privacy guarantees would be more appealing for
individuals under different decision settings. Through our analysis, we found
that the nature of the potential participant's utility function, along with the
specific values of ε and δ, can greatly alter which privacy guarantees are
preferable.
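As a toy illustration of this kind of comparison (not the model or values from the paper; the consequence, baseline probability, utility drop, and parameter settings below are hypothetical), one can bound how much a single adverse consequence may shift a prospective participant's expected utility under different (ε, δ) guarantees:

```python
import math

def worst_case_utility_change(p_base, utility_drop, eps, delta=0.0):
    """Toy bound: under an (eps, delta) guarantee, the probability of one adverse
    consequence can rise from p_base to at most min(1, e^eps * p_base + delta)."""
    p_worst = min(1.0, math.exp(eps) * p_base + delta)
    return (p_worst - p_base) * utility_drop

# Hypothetical consequence: a diagnosis leaks to an insurer, costing 100 utility
# units, with a 1% baseline probability if the person does not join the study.
for label, eps, delta in [("pure eps = 1 DP",            1.0, 0.0),
                          ("approximate (0.5, 1e-3)-DP", 0.5, 1e-3)]:
    change = worst_case_utility_change(0.01, 100.0, eps, delta)
    print(f"{label}: worst-case expected-utility change = {change:.3f}")
```

Changing the assumed baseline probability or the size of the utility drop can flip which guarantee gives the smaller worst-case shift, which mirrors the abstract's finding that the preferable variant depends on the participant's utility function and the specific ε and δ.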
On Modeling the Costs of Censorship
We argue that the evaluation of censorship evasion tools should depend upon
economic models of censorship. We illustrate our position with a simple model
of the costs of censorship. We show how this model suggests ways to evade
censorship and, in particular, use it to develop evaluation criteria. We
examine how our criteria compare to the traditional evaluation methods
employed in prior works.
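The paper's model is not reproduced here, but a purely hypothetical sketch can show what an economic cost model of censorship might look like: the censor pays for over-blocking benign traffic, for circumvention traffic it misses, and for inspecting flows at all. Every quantity below is an assumption for illustration only.

```python
def censor_cost(block_rate_benign, miss_rate_evasion,
                benign_flows, evasion_flows,
                cost_false_block, cost_missed_flow, cost_per_inspection):
    """Hypothetical expected cost to the censor for one blocking policy."""
    collateral = block_rate_benign * benign_flows * cost_false_block
    leakage    = miss_rate_evasion * evasion_flows * cost_missed_flow
    inspection = (benign_flows + evasion_flows) * cost_per_inspection
    return collateral + leakage + inspection

# An evasion tool that blends in with benign traffic forces the censor to choose
# between missing it and over-blocking; either choice raises the censor's cost.
print(censor_cost(0.001, 0.50, 1_000_000, 1_000, 5.0, 1.0, 0.001))  # censor tolerates misses
print(censor_cost(0.020, 0.05, 1_000_000, 1_000, 5.0, 1.0, 0.001))  # censor over-blocks instead
```

Under a model of this shape, an evasion tool would be judged by how much it raises the censor's cheapest achievable cost rather than by detection rates in isolation.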
A Methodology for Information Flow Experiments
Information flow analysis has largely ignored the setting where the analyst
has neither control over nor a complete model of the analyzed system. We
formalize such limited information flow analyses and study an instance of them:
detecting the usage of data by websites. We prove that these problems are ones
of causal inference. Leveraging this connection, we push beyond traditional
information flow analysis to provide a systematic methodology based on
experimental science and statistical analysis. Our methodology allows us to
systematize prior works in the area, viewing them as instances of a general
approach. Our systematic study leads to practical advice for improving work on
detecting data usage, a previously unformalized area. We illustrate these
concepts with a series of experiments collecting data on the use of information
by websites, which we statistically analyze.
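The methodology described here reduces detection of data usage to a randomized experiment followed by a statistical test. A minimal sketch of that pattern follows; the "travel browsing" treatment, the ad counts, and the permutation test are illustrative assumptions, not the paper's actual experiments or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurements: number of travel-related ads shown to browser profiles
# that visited travel sites beforehand (treatment) versus fresh profiles (control).
treatment = np.array([7, 9, 6, 8, 10, 7, 9, 8])
control   = np.array([3, 4, 2, 5, 3, 4, 2, 3])

observed = treatment.mean() - control.mean()

# Permutation test: under the null hypothesis that the site ignores the browsing
# history, the group labels are exchangeable, so we re-shuffle them many times.
pooled = np.concatenate([treatment, control])
n_treat, n_perm, hits = len(treatment), 10_000, 0
for _ in range(n_perm):
    shuffled = rng.permutation(pooled)
    hits += shuffled[:n_treat].mean() - shuffled[n_treat:].mean() >= observed
p_value = (hits + 1) / (n_perm + 1)

print(f"difference in mean ad counts: {observed:.2f}, one-sided p = {p_value:.4f}")
```

With the numbers above the script reports a difference of 4.75 ads and a small one-sided p-value, i.e., statistical evidence that the treatment influenced what the site showed.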