Machine Learning Models that Remember Too Much
Machine learning (ML) is becoming a commodity. Numerous ML frameworks and
services are available to data holders who are not ML experts but want to train
predictive models on their data. It is important that ML models trained on
sensitive inputs (e.g., personal images or documents) not leak too much
information about the training data.
We consider a malicious ML provider who supplies model-training code to the
data holder, does not observe the training, but then obtains white- or
black-box access to the resulting model. In this setting, we design and
implement practical algorithms, some of them very similar to standard ML
techniques such as regularization and data augmentation, that "memorize"
information about the training dataset in the model, while keeping the model as
accurate and predictive as a conventionally trained one. We then explain how
the adversary can extract memorized information from the model.
We evaluate our techniques on standard ML tasks for image classification
(CIFAR10), face recognition (LFW and FaceScrub), and text analysis (20
Newsgroups and IMDB). In all cases, we show how our algorithms create models
that have high predictive power yet allow accurate extraction of subsets of
their training data.
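As a rough illustration of the kind of technique the abstract describes, here is a minimal sketch assuming PyTorch. The hinge-style penalty and sign-based readout are illustrative assumptions in the spirit of the abstract, not the paper's exact construction:

```python
import torch

def malicious_regularizer(params, secret_bits, strength=1.0):
    """Hypothetical penalty that pushes the signs of the first model
    parameters toward secret bits (+1/-1) derived from training data.
    The model still minimizes its normal task loss, so accuracy is
    largely preserved while the bits stay recoverable from the signs."""
    flat = torch.cat([p.reshape(-1) for p in params])[: secret_bits.numel()]
    # Hinge-style term: zero once a parameter's sign matches its bit.
    return strength * torch.relu(-flat * secret_bits).mean()

def extract_bits(params, n_bits):
    """White-box extraction: read the encoded bits back as parameter signs."""
    flat = torch.cat([p.detach().reshape(-1) for p in params])[:n_bits]
    return torch.sign(flat)
```

In this sketch, the malicious training code would add `malicious_regularizer(list(model.parameters()), bits)` to the ordinary task loss; an adversary who later gains white-box access to the model recovers the bits with `extract_bits`.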
Efficient Two-Step Adversarial Defense for Deep Neural Networks
In recent years, deep neural networks have demonstrated outstanding
performance in many machine learning tasks. However, researchers have
discovered that these state-of-the-art models are vulnerable to adversarial
examples: legitimate examples altered by small perturbations that are
imperceptible to the human eye. Adversarial training, which augments the training
data with adversarial examples during the training process, is a well known
defense to improve the robustness of the model against adversarial attacks.
However, this robustness is effective only against the same attack method used
for adversarial training. Madry et al. (2017) suggest that iterative multi-step
adversarial attacks are particularly effective, that projected gradient descent
(PGD) may be considered the universal first-order adversary, and that
adversarial training with PGD therefore implies resistance against many other
first-order attacks. The computational cost of adversarial training with PGD
and other multi-step adversarial examples, however, is much higher than that of
adversarial training with simpler attack techniques. In this
paper, we show how strong adversarial examples can be generated at a cost
similar to that of only two runs of the fast gradient sign method (FGSM),
enabling a defense against adversarial attacks with a robustness level
comparable to that of adversarial training with multi-step adversarial
examples. We
empirically demonstrate the effectiveness of the proposed two-step defense
approach against different attack methods and its improvements over existing
defense strategies.
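To make the idea concrete, here is a minimal sketch of a two-step FGSM-style attack, assuming PyTorch; the step sizes and the projection scheme are illustrative assumptions, not necessarily the authors' exact update rule:

```python
import torch
import torch.nn.functional as F

def two_step_attack(model, x, y, eps=8/255, step=4/255):
    """Craft adversarial examples with two gradient steps, so the cost
    is roughly that of two FGSM runs rather than a many-step PGD loop."""
    x_adv = x.clone().detach()
    for _ in range(2):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # FGSM-style step in the direction of the loss gradient's sign.
        x_adv = x_adv.detach() + step * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

In an adversarial training loop, examples produced by `two_step_attack` would stand in for the far more expensive multi-step PGD examples.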
Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability
Post-hoc model-agnostic interpretation methods such as partial dependence
plots can be employed to interpret complex machine learning models. While these
interpretation methods can be applied regardless of model complexity, they can
produce misleading and verbose results if the model is too complex, especially
w.r.t. feature interactions. To quantify the complexity of arbitrary machine
learning models, we propose model-agnostic complexity measures based on
functional decomposition: the number of features used, interaction strength,
and main effect complexity. We show that post-hoc interpretation of models that
minimize the three measures is more reliable and compact. Furthermore, we
demonstrate the application of these measures in a multi-objective optimization
approach that simultaneously minimizes loss and complexity.
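As a rough illustration of the simplest of the three measures, the number of features a fitted black-box model actually uses can be estimated by permuting one feature at a time and checking whether predictions change. This is a simplified reading of that measure, assuming NumPy; the permutation scheme and tolerance are illustrative assumptions:

```python
import numpy as np

def n_features_used(predict, X, tol=1e-8, seed=None):
    """Estimate how many features a black-box model actually uses:
    permute one column at a time and count the columns whose
    permutation changes any prediction by more than tol."""
    rng = np.random.default_rng(seed)
    base = predict(X)
    used = 0
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        if np.max(np.abs(predict(X_perm) - base)) > tol:
            used += 1
    return used
```

Models that keep this count small, alongside low interaction strength and main effect complexity, are the ones whose post-hoc interpretations the abstract reports to be more reliable and compact.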