When the signal is in the noise: Exploiting Diffix's Sticky Noise
Anonymized data is highly valuable to both businesses and researchers. A
large body of research has, however, shown the strong limits of the
de-identification release-and-forget model, in which data is anonymized and
shared. This has led to the development of privacy-preserving query-based
systems. Based on the idea of "sticky noise", Diffix has recently been proposed
as a novel query-based mechanism that, on its own, satisfies the EU Article 29
Working Party's definition of anonymization. According to its authors, Diffix adds less
noise to answers than solutions based on differential privacy while allowing
for an unlimited number of queries.
This paper presents a new class of noise-exploitation attacks, exploiting the
noise added by the system to infer private information about individuals in the
dataset. Our first differential attack uses samples extracted from Diffix in a
likelihood ratio test to discriminate between two probability distributions. We
show that using this attack against a synthetic best-case dataset allows us to
infer private information with 89.4% accuracy using only 5 attributes. Our
second cloning attack uses dummy conditions whose effect on the query output
depends strongly on the value of the private attribute. Using
this attack on four real-world datasets, we show that we can infer private
attributes of at least 93% of the users in the dataset with accuracy between
93.3% and 97.1%, issuing a median of 304 queries per user. We show how to
optimize this attack, targeting 55.4% of the users and achieving 91.7%
accuracy, using a maximum of only 32 queries per user.
Our attacks demonstrate that adding data-dependent noise, as done by Diffix,
is not sufficient to prevent inference of private attributes. We furthermore
argue that Diffix alone fails to satisfy Art. 29 WP's definition of
anonymization. [...]
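The differential attack described above rests on a standard statistical tool: a likelihood ratio test deciding which of two distributions a set of noisy answers was drawn from. The following is a minimal illustrative sketch, not the paper's actual attack code; the function names and the Gaussian noise model are assumptions for the example, standing in for repeated Diffix answers whose mean depends on the private attribute.

```python
import math
import random

def log_likelihood(samples, mu, sigma):
    # Gaussian log-likelihood of the observed samples under mean mu.
    return sum(
        -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
        for x in samples
    )

def lrt_decide(samples, mu0, mu1, sigma):
    # Likelihood ratio test: pick the hypothesis under which the
    # noisy answers are more probable.
    if log_likelihood(samples, mu0, sigma) >= log_likelihood(samples, mu1, sigma):
        return 0
    return 1

random.seed(0)
sigma = 1.0
# Hypothetical noisy query answers with true mean 1.0; in the attack,
# the two candidate means would correspond to the two possible values
# of the targeted private attribute.
samples = [1.0 + random.gauss(0, sigma) for _ in range(20)]
guess = lrt_decide(samples, mu0=0.0, mu1=1.0, sigma=sigma)
```

With enough repeated answers, even small data-dependent shifts in the noisy output become statistically distinguishable, which is the core of the noise-exploitation idea.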
Lessons Learned: Surveying the Practicality of Differential Privacy in the Industry
Since its introduction in 2006, differential privacy has emerged as a
predominant statistical tool for quantifying data privacy in academic works.
Yet despite the plethora of research and open-source utilities that have
accompanied its rise, with limited exceptions, differential privacy has failed
to achieve widespread adoption in the enterprise domain. Our study aims to shed
light on the fundamental causes underlying this academic-industrial utilization
gap through detailed interviews of 24 privacy practitioners across 9 major
companies. We analyze the results of our survey to provide key findings and
suggestions for companies striving to improve privacy protection in their data
workflows and highlight the necessary and missing requirements of existing
differential privacy tools, with the goal of guiding researchers working
towards the broader adoption of differential privacy. Our findings indicate
that analysts suffer from lengthy bureaucratic processes for requesting access
to sensitive data, yet once granted, only scarcely-enforced privacy policies
stand between rogue practitioners and misuse of private information. We thus
argue that differential privacy can significantly improve the processes of
requesting and conducting data exploration across silos, and conclude that with
a few of the improvements suggested herein, the practical use of differential
privacy across the enterprise is within striking distance.
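As background to the tools this survey evaluates, the canonical building block of differentially private releases is the Laplace mechanism: add noise scaled to the query's sensitivity over the privacy budget. A minimal sketch (the function name is illustrative, not taken from any specific tool surveyed):

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Release true_value + Laplace(0, sensitivity/epsilon) noise:
    # the standard epsilon-differentially-private mechanism for a
    # numeric query with the given L1 sensitivity.
    scale = sensitivity / epsilon
    # The difference of two Exp(1/scale) draws is Laplace(0, scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

# Example: a counting query (sensitivity 1) answered with epsilon = 1.
noisy_count = laplace_mechanism(42, 1.0, 1.0)
```

Smaller epsilon means more noise per answer, which is precisely the accuracy cost that, per the survey, enterprises weigh against the streamlined access it could enable.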
Conclave: secure multi-party computation on big data (extended TR)
Secure Multi-Party Computation (MPC) allows mutually distrusting parties to
run joint computations without revealing private data. Current MPC algorithms
scale poorly with data size, which makes MPC on "big data" prohibitively slow
and inhibits its practical use.
Many relational analytics queries can maintain MPC's end-to-end security
guarantee without using cryptographic MPC techniques for all operations.
Conclave is a query compiler that accelerates such queries by transforming them
into a combination of data-parallel, local cleartext processing and small MPC
steps. When parties trust others with specific subsets of the data, Conclave
applies new hybrid MPC-cleartext protocols to run additional steps outside of
MPC and improve scalability further.
Our Conclave prototype generates code for cleartext processing in Python and
Spark, and for secure MPC using the Sharemind and Obliv-C frameworks. Conclave
scales to data sets between three and six orders of magnitude larger than
state-of-the-art MPC frameworks support on their own. Thanks to its hybrid
protocols, Conclave also substantially outperforms SMCQL, the most similar
existing system.

Comment: Extended technical report for EuroSys 2019 paper.
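The scalability gain comes from moving work out of MPC: each party first aggregates its own data in cleartext, so only small per-key partial results enter the secure step. The sketch below simulates that split in plain Python; it is an illustration of the hybrid idea, not Conclave's generated code, and `mpc_combine` stands in for the step that Conclave would actually run under Sharemind or Obliv-C.

```python
from collections import defaultdict

def local_preaggregate(rows):
    # Cleartext, data-parallel step: each party sums its own values
    # per key before anything enters MPC, shrinking the MPC input
    # from |rows| records to |distinct keys| records.
    agg = defaultdict(int)
    for key, value in rows:
        agg[key] += value
    return dict(agg)

def mpc_combine(partial_aggregates):
    # Stand-in for the small secure step: the final union and sum,
    # which Conclave would execute under MPC. Simulated in cleartext
    # here purely for illustration.
    total = defaultdict(int)
    for partial in partial_aggregates:
        for key, value in partial.items():
            total[key] += value
    return dict(total)

party_a = [("x", 1), ("x", 2), ("y", 5)]
party_b = [("x", 10), ("z", 3)]
result = mpc_combine([local_preaggregate(party_a), local_preaggregate(party_b)])
# result == {"x": 13, "y": 5, "z": 3}
```

Because the expensive cryptographic step sees only the pre-aggregated partials, its cost scales with the number of distinct keys rather than the raw data size.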