Channels of Small Log-Ratio Leakage and Characterization of Two-Party Differentially Private Computation
Consider a PPT two-party protocol $\Pi$ in which the parties get no private inputs and obtain outputs $O^A, O^B \in \{0,1\}$, and let $V^A$ and $V^B$ denote the parties' individual views. Protocol $\Pi$ has $\alpha$-agreement if $\Pr[O^A = O^B] = \frac{1}{2} + \alpha$. The leakage $\varepsilon$ of $\Pi$ is the amount of information a party obtains about the event $\{O^A = O^B\}$; that is, the leakage is the maximum, over $P \in \{A, B\}$, of the distance between $(V^P \mid O^A = O^B)$ and $(V^P \mid O^A \neq O^B)$. Typically, this distance is measured in statistical distance, or, in the computational setting, in computational indistinguishability. For this choice, Wullschleger [TCC '09] showed that if the leakage $\varepsilon$ is sufficiently smaller than the agreement $\alpha$, then the protocol can be transformed into an OT protocol.
We consider measuring the protocol leakage by the log-ratio distance (which was popularized by its use in the differential privacy framework). The log-ratio distance between two distributions $P, Q$ over a domain $\mathcal{X}$ is the minimal $\varepsilon \geq 0$ for which, for every $x \in \mathcal{X}$, $\log \frac{P(x)}{Q(x)} \in [-\varepsilon, \varepsilon]$. In the computational setting, we use computational indistinguishability from having log-ratio distance at most $\varepsilon$. We show that a protocol with (noticeable) accuracy $\alpha \in \Omega(\varepsilon^2)$ can be transformed into an OT protocol (note that this allows $\varepsilon \gg \alpha$). We complete the picture, in this respect, by showing that a protocol with $\alpha \in o(\varepsilon^2)$ does not necessarily imply OT. Our results hold for both the information-theoretic and the computational settings, and can be viewed as a ``fine-grained'' approach to ``weak OT amplification''.
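For concreteness, the log-ratio distance of two finite distributions can be computed directly from the definition above. The helper below is an illustration only (it is not code from the paper), treating each distribution as a dictionary from outcomes to probabilities:

```python
import math

def log_ratio_distance(p, q):
    """Smallest eps such that exp(-eps) <= p(x)/q(x) <= exp(eps)
    for every outcome x; p and q map outcomes to probabilities."""
    eps = 0.0
    for x in set(p) | set(q):
        px, qx = p.get(x, 0.0), q.get(x, 0.0)
        if px == 0.0 and qx == 0.0:
            continue
        if px == 0.0 or qx == 0.0:
            return math.inf  # one-sided support: no finite eps works
        eps = max(eps, abs(math.log(px / qx)))
    return eps

# Uniform vs. biased coin: the worst outcome ratio is 0.5/0.25 = 2,
# so the log-ratio distance is log 2, even though the statistical
# distance is only 0.25.
d = log_ratio_distance({"h": 0.5, "t": 0.5}, {"h": 0.25, "t": 0.75})
```

The example illustrates why log-ratio distance is the natural metric here: it penalizes multiplicative gaps between probabilities, so two distributions can be statistically close yet far in log-ratio distance.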
We then use the above result to fully characterize the complexity of differentially private two-party computation for the XOR function, answering the open question posed by Goyal, Khurana, Mironov, Pandey, and Sahai [ICALP '16] and Haitner, Nissim, Omri, Shaltiel, and Silbak [FOCS '18]. Specifically, we show that for any (noticeable) $\alpha \in \Omega(\varepsilon^2)$, a two-party protocol that computes the XOR function with $\alpha$-accuracy and $\varepsilon$-differential privacy can be transformed into an OT protocol. This improves upon Goyal et al., who only handle $\alpha \in \Omega(\varepsilon)$, and upon Haitner et al., who showed that such a protocol implies (infinitely-often) key agreement (and not OT). Our characterization is tight since OT does not follow from protocols in which $\alpha \in o(\varepsilon^2)$, and it extends to functions (over many bits) that ``contain'' an ``embedded copy'' of the XOR function.
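The $\varepsilon^2$-order accuracy threshold is exactly what the textbook randomized-response baseline achieves. The sketch below is an illustration of standard randomized response, not a construction from the paper: each party publishes an $\varepsilon$-DP copy of its bit, and both output the XOR of the two public bits, which is correct with probability $1/2 + 2(p - 1/2)^2$ where $p = e^\varepsilon/(1+e^\varepsilon)$, i.e. with advantage $\alpha \approx \varepsilon^2/8$ for small $\varepsilon$:

```python
import math
import random

def randomized_response(bit, eps):
    """Report the true bit with probability e^eps / (1 + e^eps);
    the report is eps-differentially private in the input bit."""
    keep = math.exp(eps) / (1.0 + math.exp(eps))
    return bit if random.random() < keep else 1 - bit

def xor_protocol(a, b, eps):
    """Each party publishes a randomized-response copy of its bit;
    both parties output the XOR of the two public bits."""
    return randomized_response(a, eps) ^ randomized_response(b, eps)

def xor_advantage(eps):
    """The public XOR is correct when both bits are kept or both are
    flipped: p^2 + (1-p)^2 = 1/2 + 2(p - 1/2)^2, so the advantage
    alpha is roughly eps^2 / 8 for small eps."""
    p = math.exp(eps) / (1.0 + math.exp(eps))
    return 2.0 * (p - 0.5) ** 2
```

This baseline sits right at the boundary of the characterization: its accuracy is $\Theta(\varepsilon^2)$, and protocols with $\alpha \in o(\varepsilon^2)$ do not imply OT.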
Separating Key Agreement and Computational Differential Privacy
Two-party differential privacy allows two parties who do not trust each
other to come together and perform a joint analysis on their data whilst
maintaining individual-level privacy. We show that any efficient,
computationally differentially private protocol that has black-box access to
key agreement (and nothing stronger) is also an efficient,
information-theoretically differentially private protocol. In other words, the
existence of efficient key agreement protocols is insufficient for efficient,
computationally differentially private protocols. In doing so, we make progress
in answering an open question posed by Vadhan about the minimal computational
assumption needed for computational differential privacy.
Combined with the information-theoretic lower bound due to McGregor, Mironov,
Pitassi, Reingold, Talwar, and Vadhan in [FOCS'10], we show that there is no
fully black-box reduction from efficient, computationally differentially
private protocols for computing the Hamming distance (or equivalently inner
product over the integers) on $n$-bit strings, with additive error lower than
$O(\sqrt{n}/\log n)$, to key agreement.
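To see where the $\sqrt{n}$ error scale comes from, consider the standard local randomized-response baseline (an illustration only; neither the construction nor the function names below come from the papers cited): each party publishes an $\varepsilon$-DP copy of its string, and the inner product is estimated from the debiased reports:

```python
import math
import random

def randomized_response(bits, eps):
    """Flip each bit independently with probability 1 / (1 + e^eps);
    the published string is eps-DP in every single input bit."""
    keep = math.exp(eps) / (1.0 + math.exp(eps))
    return [b if random.random() < keep else 1 - b for b in bits]

def estimate_inner_product(noisy_a, noisy_b, eps):
    """Unbiased inner-product estimate from two noisy reports: each
    coordinate is debiased so that E[a_hat_i] = a_i, hence
    E[sum_i a_hat_i * b_hat_i] = <a, b>."""
    p = math.exp(eps) / (1.0 + math.exp(eps))

    def debias(y):
        return (y - (1.0 - p)) / (2.0 * p - 1.0)

    return sum(debias(x) * debias(y) for x, y in zip(noisy_a, noisy_b))
```

For fixed $\varepsilon$, each debiased coordinate has constant variance, so the estimator's standard deviation grows like $\sqrt{n}$; the information-theoretic lower bound says two-party differentially private protocols cannot do much better.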
This complements the result by Haitner, Mazor, Silbak, and Tsfadia in
[STOC'22], which showed that computing the Hamming distance implies key
agreement. We conclude that key agreement is \emph{strictly} weaker than
computational differential privacy for computing the inner product, thereby
answering their open question on whether key agreement is sufficient.
SoK: Differential Privacies
Shortly after it was first introduced in 2006, differential privacy became
the flagship data privacy definition. Since then, numerous variants and
extensions were proposed to adapt it to different scenarios and attacker
models. In this work, we propose a systematic taxonomy of these variants and
extensions. We list all data privacy definitions based on differential privacy,
and partition them into seven categories, depending on which aspect of the
original definition is modified.
These categories act like dimensions: variants from the same category cannot
be combined, but variants from different categories can be combined to form new
definitions. We also establish a partial ordering of relative strength between
these notions by summarizing existing results. Furthermore, we list which of
these definitions satisfy some desirable properties, like composition,
post-processing, and convexity, by either providing a novel proof or collecting existing ones.
Comment: This is the full version of the SoK paper with the same title, accepted at PETS (Privacy Enhancing Technologies Symposium) 202
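Several of these properties can be checked mechanically on a toy mechanism. The snippet below is an illustrative sketch using one-bit randomized response (the helper names are ours, not from the paper); it verifies the multiplicative form of the original definition and its behaviour under composition of two independent releases:

```python
import math

def rr_dist(bit, eps):
    """Output distribution of eps-randomized-response on one bit."""
    p = math.exp(eps) / (1.0 + math.exp(eps))
    return {bit: p, 1 - bit: 1.0 - p}

def max_log_ratio(p, q):
    """Worst-case |log(p(x)/q(x))| over the support."""
    return max(abs(math.log(p[x] / q[x])) for x in p)

def product(p, q):
    """Joint distribution of two independent releases."""
    return {(x, y): p[x] * q[y] for x in p for y in q}

eps = 0.7
p0, p1 = rr_dist(0, eps), rr_dist(1, eps)

# eps-DP: neighbouring inputs give eps-close output distributions.
dp_gap = max_log_ratio(p0, p1)

# Composition: two independent releases are at most 2*eps-close.
comp_gap = max_log_ratio(product(p0, p0), product(p1, p1))
```

Here the composition bound is tight: the worst outcome pair multiplies the two per-release ratios, so the combined log-ratio gap is exactly $2\varepsilon$.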