How Much Does Each Datapoint Leak Your Privacy? Quantifying the Per-datum Membership Leakage
We study per-datum Membership Inference Attacks (MIAs), in which an attacker
aims to infer whether a fixed target datum was included in the input dataset
of an algorithm, thereby violating privacy. First, we define the membership
leakage of a datum as the advantage of the optimal adversary aiming to
identify it. Then, we quantify the per-datum membership leakage
for the empirical mean, and show that it depends on the Mahalanobis distance
between the target datum and the data-generating distribution. We further
assess the effect of two privacy defences, namely adding Gaussian noise and
sub-sampling, and quantify exactly how each of them decreases the per-datum
membership leakage. Our analysis builds on a novel proof technique that
combines an Edgeworth expansion of the likelihood ratio test and a
Lindeberg-Feller central limit theorem. Our analysis connects the existing
likelihood ratio and scalar product attacks, and also justifies different
canary selection strategies used in the privacy auditing literature. Finally,
our experiments demonstrate the impact of the leakage score, the sub-sampling
ratio, and the noise scale on the per-datum membership leakage, as predicted
by the theory.
Non-Asymptotic Lower Bounds For Training Data Reconstruction
We investigate the semantic guarantees that private learning algorithms
provide against training Data Reconstruction Attacks (DRAs) by informed
adversaries. To this end, we derive non-asymptotic minimax lower bounds on the
adversary's reconstruction error against learners that satisfy differential
privacy (DP) and metric differential privacy (mDP). Furthermore, we demonstrate
that our lower-bound analysis for the latter also covers the high-dimensional
regime, wherein the input data dimensionality may exceed the adversary's
query budget. Motivated by the theoretical improvements conferred
by metric DP, we extend the privacy analysis of popular deep learning
algorithms such as DP-SGD and Projected Noisy SGD to cover the broader notion
of metric differential privacy.
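For concreteness, here is a minimal sketch of one Projected Noisy SGD update in its textbook form (clip the gradient, add Gaussian noise calibrated to the clipping bound, project onto a constraint set). The hyperparameters and the L2-ball projection set are assumptions; the sketch does not reproduce the paper's metric-DP analysis.

```python
# Sketch only: one textbook Projected Noisy SGD step.
# All hyperparameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def project_l2_ball(w, radius):
    """Euclidean projection onto the ball {w : ||w||_2 <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def noisy_sgd_step(w, grad, lr=0.1, clip=1.0, noise_multiplier=1.0, radius=10.0):
    """Clip the gradient to L2 norm `clip`, perturb it with Gaussian noise
    scaled by the clipping bound, take a step, then project."""
    gnorm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip / max(gnorm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip, size=grad.shape)
    return project_l2_ball(w - lr * (grad + noise), radius)

w = np.zeros(3)
w = noisy_sgd_step(w, grad=np.array([4.0, -2.0, 1.0]))
print(w)
```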
From Principle to Practice: Vertical Data Minimization for Machine Learning
Aiming to train and deploy predictive models, organizations collect large
amounts of detailed client data, risking the exposure of private information in
the event of a breach. To mitigate this, policymakers increasingly demand
compliance with the data minimization (DM) principle, restricting data
collection to only the data that is relevant and necessary for the task.
Despite regulatory pressure, the problem of deploying machine learning models
that obey DM has so far received little attention. In this work, we address
this challenge in a comprehensive manner. We propose a novel vertical DM (vDM)
workflow based on data generalization, which by design ensures that no
full-resolution client data is collected during training and deployment of
models, benefiting client privacy by reducing the attack surface in case of a
breach. We formalize and study the corresponding problem of finding
generalizations that both maximize data utility and minimize empirical privacy
risk, which we quantify by introducing a diverse set of policy-aligned
adversarial scenarios. Finally, we propose a range of baseline vDM algorithms,
as well as Privacy-aware Tree (PAT), an especially effective vDM algorithm that
outperforms all baselines across several settings. We plan to release our code
as a publicly available library, helping advance the standardization of DM for
machine learning. Overall, we believe our work can help lay the foundation for
further exploration and adoption of DM principles in real-world applications.
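To make the data-generalization mechanism concrete, the following minimal sketch collects a feature only at bin-level resolution and reports crude utility and group-size proxies. The binning scheme and the k-anonymity-style risk proxy are assumptions, not the paper's PAT algorithm or its policy-aligned adversarial scenarios.

```python
# Sketch only: vertical data minimization via generalization of one feature.
# Bin edges and the risk proxy below are illustrative assumptions.
import numpy as np

def generalize(column, bin_edges):
    """Replace exact values with the index of their generalization bin,
    so full-resolution values are never collected."""
    return np.digitize(column, bin_edges)

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=1000)

coarse = generalize(ages, bin_edges=[30, 45, 60, 75])   # 5 age bands

# Crude trade-off proxies: utility falls as bins merge distinct values,
# while the smallest group size (a k-anonymity-style quantity) rises.
_, counts = np.unique(coarse, return_counts=True)
print("distinct values: exact =", len(np.unique(ages)),
      "generalized =", len(counts))
print("smallest group size after generalization:", counts.min())
```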