Machine Learning Models that Remember Too Much
Machine learning (ML) is becoming a commodity. Numerous ML frameworks and
services are available to data holders who are not ML experts but want to train
predictive models on their data. It is important that ML models trained on
sensitive inputs (e.g., personal images or documents) not leak too much
information about the training data.
We consider a malicious ML provider who supplies model-training code to the
data holder, does not observe the training, but then obtains white- or
black-box access to the resulting model. In this setting, we design and
implement practical algorithms, some of them very similar to standard ML
techniques such as regularization and data augmentation, that "memorize"
information about the training dataset in the model while keeping the model as
accurate and predictive as a conventionally trained one. We then explain how
the adversary can extract memorized information from the model.
We evaluate our techniques on standard ML tasks for image classification
(CIFAR10), face recognition (LFW and FaceScrub), and text analysis (20
Newsgroups and IMDB). In all cases, we show how our algorithms create models
that have high predictive power yet allow accurate extraction of subsets of
their training data.
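To illustrate how a model's parameters can carry hidden information in the white-box setting, here is a minimal sketch of one such channel, a least-significant-bit encoding (our own toy illustration assuming float32 parameters; the paper's actual encoders differ): secret bits are written into the lowest mantissa bit of each parameter, perturbing every value negligibly while remaining exactly recoverable by anyone who can read the weights.

```python
import numpy as np

def embed_bits_lsb(params: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one secret bit in the least significant mantissa bit of each
    float32 parameter (hypothetical helper, not the paper's implementation)."""
    raw = params.astype(np.float32).view(np.uint32).copy()
    # Clear the lowest mantissa bit, then OR in the secret bit.
    raw[: len(bits)] = (raw[: len(bits)] & ~np.uint32(1)) | bits.astype(np.uint32)
    return raw.view(np.float32)

def extract_bits_lsb(params: np.ndarray, n: int) -> np.ndarray:
    """White-box extraction: read back the lowest mantissa bit of the
    first n parameters."""
    return (params.view(np.uint32)[:n] & np.uint32(1)).astype(np.uint8)

rng = np.random.default_rng(0)
weights = rng.normal(size=100).astype(np.float32)
secret = rng.integers(0, 2, size=64).astype(np.uint8)

stego = embed_bits_lsb(weights, secret)
recovered = extract_bits_lsb(stego, 64)
```

Each flipped bit changes a parameter by at most one unit in the last place, so model accuracy is essentially untouched, which is the qualitative point the abstract makes about its (more sophisticated) encoders.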
How Does Data Augmentation Affect Privacy in Machine Learning?
It has been observed in the literature that data augmentation can significantly
mitigate membership inference (MI) attacks. However, in this work, we challenge
this observation by proposing new MI attacks that exploit the information in
augmented data. MI attacks are widely used to measure a model's information
leakage about its training set. We establish the optimal membership inference when
the model is trained with augmented data, which inspires us to formulate the MI
attack as a set classification problem, i.e., classifying a set of augmented
instances instead of a single data point, and design input permutation
invariant features. Empirically, we demonstrate that the proposed approach
universally outperforms original methods when the model is trained with data
augmentation. Even further, we show that the proposed approach can achieve
higher MI attack success rates on models trained with some data augmentation
than the existing methods on models trained without data augmentation. Notably,
we achieve a 70.1% MI attack success rate on CIFAR10 against a wide residual
network while the previous best approach only attains 61.9%. This suggests the
privacy risk of models trained with data augmentation could be largely
underestimated.
Comment: AAAI Conference on Artificial Intelligence (AAAI-21). Source code
available at: https://github.com/dayu11/MI_with_D
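The set-classification idea above can be sketched as follows (a simplified illustration of the abstract's formulation, not the authors' code; the sorted-loss features and the negative-mean score are our assumptions): query the model on every augmented copy of a candidate point, collect the per-copy losses, and sort them so the feature vector is invariant to the order in which augmentations are applied; a set classifier over these features then separates members from non-members.

```python
import numpy as np

def augmented_loss_features(losses: np.ndarray) -> np.ndarray:
    """Map per-augmentation losses (shape [n_samples, n_aug]) to a
    permutation-invariant feature vector by sorting along the set axis."""
    return np.sort(losses, axis=-1)

def set_membership_score(losses: np.ndarray) -> np.ndarray:
    """Toy set classifier: members tend to have uniformly low loss across
    all augmented copies, so score by the negative mean of the features.
    (A learned classifier would replace this threshold rule.)"""
    return -augmented_loss_features(losses).mean(axis=-1)

rng = np.random.default_rng(0)
# Synthetic losses: members fit well on every augmentation, non-members do not.
member_losses = rng.normal(0.2, 0.05, size=(50, 8))
nonmember_losses = rng.normal(1.0, 0.2, size=(50, 8))
```

Sorting is what makes the feature a function of the *set* of augmented instances rather than any particular ordering, which is the property the abstract's "input permutation invariant features" require.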
Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation
Today's success of state-of-the-art methods for semantic segmentation is
driven by large datasets. Data is considered an important asset that needs to
be protected, as the collection and annotation of such datasets come at
significant effort and cost. In addition, visual data might contain private or
sensitive information that makes it equally unsuited for
public release. Unfortunately, recent work on membership inference in the
broader area of adversarial machine learning and inference attacks on machine
learning models has shown that even black-box classifiers leak information on
the dataset that they were trained on. We show that such membership inference
attacks can be successfully carried out on complex, state-of-the-art models for
semantic segmentation. In order to mitigate the associated risks, we also study
a series of defenses against such membership inference attacks and find
effective countermeasures against the existing risks with little effect on the
utility of the segmentation method. Finally, we extensively evaluate our
attacks and defenses on a range of relevant real-world datasets: Cityscapes,
BDD100K, and Mapillary Vistas.
Comment: Accepted to ECCV 2020. Code at:
https://github.com/SSAW14/segmentation_membership_inferenc
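To make the segmentation setting concrete, here is a small sketch (our own illustration in the spirit of the abstract; the confidence features and the hard-label defense are assumptions, not the paper's exact attack or defense): an attacker can summarize a model's per-pixel probability map into a compact feature vector for membership inference, and a defense that releases only the argmax label map flattens exactly the confidence signal such features rely on.

```python
import numpy as np

def pixel_confidence_features(probs: np.ndarray) -> np.ndarray:
    """Summarize an HxWxC per-pixel probability map into a small feature
    vector (mean and percentiles of per-pixel max confidence) that an MI
    classifier could consume."""
    conf = probs.max(axis=-1).ravel()
    return np.array([conf.mean(), *np.percentile(conf, [10, 50, 90])])

def argmax_only_defense(probs: np.ndarray) -> np.ndarray:
    """Hypothetical defense: release only hard labels, re-encoded one-hot,
    so every pixel's 'confidence' is identical and uninformative."""
    labels = probs.argmax(axis=-1)
    out = np.zeros_like(probs)
    np.put_along_axis(out, labels[..., None], 1.0, axis=-1)
    return out

rng = np.random.default_rng(1)
logits = rng.normal(size=(8, 8, 5))
probs = np.exp(logits)
probs /= probs.sum(axis=-1, keepdims=True)
defended = argmax_only_defense(probs)
```

The defense preserves the predicted label map exactly (segmentation utility is unchanged) while collapsing all confidence-based features to a constant, which mirrors the abstract's finding that effective countermeasures need not hurt the segmentation method.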