SoK: Differential Privacies
Shortly after it was first introduced in 2006, differential privacy became
the flagship data privacy definition. Since then, numerous variants and
extensions have been proposed to adapt it to different scenarios and attacker
models. In this work, we propose a systematic taxonomy of these variants and
extensions. We list all data privacy definitions based on differential privacy,
and partition them into seven categories, depending on which aspect of the
original definition is modified.
These categories act like dimensions: variants from the same category cannot
be combined, but variants from different categories can be combined to form new
definitions. We also establish a partial ordering of relative strength between
these notions by summarizing existing results. Furthermore, we list which of
these definitions satisfy desirable properties, such as composition,
post-processing, and convexity, either by providing a novel proof or by
collecting existing ones.

Comment: This is the full version of the SoK paper with the same title,
accepted at PETS (Privacy Enhancing Technologies Symposium) 202
Anytime Algorithms for Non-Ending Computations
A program which eventually stops but does not halt "too quickly" halts at a time which is algorithmically compressible. This result, originally proved in [4], is proved here in a more general setting. Following Manin [11], we convert the result into an anytime algorithm for the halting problem and show that the stopping time (cut-off temporal bound) cannot be significantly improved.
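The anytime scheme described above can be illustrated with a minimal sketch: simulate the computation for a bounded number of steps, return a definite answer if it halts within the budget, and a provisional one otherwise. The names `anytime_halting`, `step_fn`, and the countdown example are my own illustrations, not constructions from [4] or [11]:

```python
def anytime_halting(step_fn, state, cutoff):
    """Run a single-step transition function for at most `cutoff` steps.

    Returns ("halts", t) if the computation stops at step t <= cutoff,
    or the provisional anytime answer ("unknown", cutoff) otherwise.
    Here `step_fn` returning None signals that the computation has halted.
    """
    for t in range(1, cutoff + 1):
        state = step_fn(state)
        if state is None:
            return ("halts", t)
    return ("unknown", cutoff)

# Toy computation: a countdown that halts one step after reaching 0.
countdown = lambda n: None if n == 0 else n - 1
```

With a generous cutoff the answer is definite; with a small one the algorithm still returns something usable, which is the defining property of an anytime algorithm.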
DP-SIPS: A simpler, more scalable mechanism for differentially private partition selection
Partition selection, or set union, is an important primitive in
differentially private mechanism design: in a database where each user
contributes a list of items, the goal is to publish as many of these items as
possible under differential privacy. In this work, we present a novel mechanism
for differentially private partition selection. This mechanism, which we call
DP-SIPS, is very simple: it consists of iterating the naive algorithm over the
data set multiple times, removing the released partitions from the data set
while increasing the privacy budget at each step. This approach preserves the
scalability benefits of the naive mechanism, yet its utility compares favorably
to more complex approaches developed in prior work.
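The iteration described above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the naive mechanism is rendered as a thresholded noisy count with each user's contribution bounded to one item, and the budget is split geometrically across rounds (an assumption; the actual split may differ):

```python
import random
from collections import Counter

def naive_partition_selection(user_lists, epsilon, threshold, max_contrib=1):
    """Naive DP set-union sketch: noisy-count each item and release those
    above a threshold. Names and parameters are illustrative."""
    counts = Counter()
    for items in user_lists:
        # Bound each user's contribution to at most `max_contrib` distinct items.
        for item in sorted(set(items))[:max_contrib]:
            counts[item] += 1
    released = set()
    scale = max_contrib / epsilon
    for item, c in counts.items():
        # Laplace(scale) noise, built as a difference of two exponentials.
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        if c + noise >= threshold:
            released.add(item)
    return released

def dp_sips(user_lists, epsilon, threshold, rounds=3):
    """DP-SIPS sketch: iterate the naive mechanism, dropping already-released
    items between rounds, with later rounds getting a larger budget share."""
    weights = [2 ** i for i in range(rounds)]
    total = sum(weights)
    data = [list(items) for items in user_lists]
    released = set()
    for w in weights:
        released |= naive_partition_selection(data, epsilon * w / total, threshold)
        # Remove released items so later rounds spend their budget elsewhere.
        data = [[x for x in items if x not in released] for items in data]
    return released
```

Removing released items between rounds means users whose entire lists were already published contribute nothing later, which is what lets the later, larger budget shares surface rarer items.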