Towards Poisoning Fair Representations
Fair machine learning seeks to mitigate model prediction bias against certain
demographic subgroups, such as the elderly and women. Recently, fair representation
learning (FRL) with deep neural networks has demonstrated superior
performance, whereby representations containing no demographic information are
inferred from the data and then used as the input to classification or other
downstream tasks. Despite the development of FRL methods, their vulnerability
to data poisoning attacks, a popular protocol for benchmarking model robustness
in adversarial scenarios, is under-explored. Data poisoning attacks have
been developed for classical fair machine learning methods which incorporate
fairness constraints into shallow-model classifiers. Nonetheless, these attacks
fall short against FRL due to its notably different fairness goals and model
architectures. This work proposes the first data poisoning framework attacking
FRL. We induce the model to output unfair representations that contain as much
demographic information as possible by injecting carefully crafted poisoning
samples into the training data. This attack entails a prohibitive bilevel
optimization, for which an effective approximate solution is proposed. A
theoretical analysis of the number of poisoning samples needed is derived,
shedding light on how to defend against the attack. Experiments on benchmark fairness
datasets and state-of-the-art fair representation learning models demonstrate
the superiority of our attack.
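
As a rough reading of the bilevel structure described above (the notation, including the poison set \(\mathcal{D}_p\), sensitive attribute \(s\), and leakage loss \(\mathcal{L}_{\mathrm{leak}}\), is assumed here for illustration and not taken from the paper), the attacker's problem can be sketched as:

\[
\max_{\mathcal{D}_p}\ \mathcal{L}_{\mathrm{leak}}\bigl(f_{\theta^{*}}(X),\, s\bigr)
\quad \text{s.t.} \quad
\theta^{*} = \arg\min_{\theta}\ \mathcal{L}_{\mathrm{FRL}}\bigl(\theta;\ \mathcal{D}_{\mathrm{train}} \cup \mathcal{D}_p\bigr),
\]

where the outer problem crafts poisoning samples so that the learned encoder \(f_{\theta^{*}}\) leaks as much sensitive-attribute information as possible, while the inner problem is the victim's ordinary FRL training on the poisoned data. Having to solve the inner training problem for every candidate poison set is what makes the bilevel formulation prohibitive, motivating the approximate solution the paper proposes.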
Label Poisoning is All You Need
In a backdoor attack, an adversary injects corrupted data into a model's
training dataset in order to gain control over its predictions on images with a
specific attacker-defined trigger. A typical corrupted training example
requires altering both the image, by applying the trigger, and the label.
Models trained on clean images were therefore considered safe from backdoor
attacks. However, in some common machine learning scenarios, the training
labels are provided by potentially malicious third parties; examples include
crowd-sourced annotation and knowledge distillation. We hence investigate a
fundamental question: can we launch a successful backdoor attack by only
corrupting labels? We introduce a novel approach to design label-only backdoor
attacks, which we call FLIP, and demonstrate its strengths on three datasets
(CIFAR-10, CIFAR-100, and Tiny-ImageNet) and four architectures (ResNet-32,
ResNet-18, VGG-19, and Vision Transformer). With only 2% of CIFAR-10 labels
corrupted, FLIP achieves a near-perfect attack success rate of 99.4% while
suffering only a 1.8% drop in clean test accuracy. Our approach builds upon
recent advances in trajectory matching, originally introduced for dataset
distillation.
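
As a rough illustration of the label-only threat model above, the sketch below corrupts a small budget of labels while leaving every image untouched. The helper name label_only_poison and the per-example flip_scores are hypothetical placeholders; in FLIP the choice of which labels to flip comes from a trajectory-matching objective, which is not reproduced here.

import numpy as np

def label_only_poison(labels, flip_scores, target_class, budget=0.02):
    # Corrupt only labels: the images themselves are never modified.
    # flip_scores is a hypothetical per-example usefulness score
    # (e.g., derived upstream from a trajectory-matching criterion).
    labels = np.asarray(labels).copy()
    n_flips = int(budget * len(labels))            # e.g., 2% of the training set
    flip_idx = np.argsort(flip_scores)[-n_flips:]  # highest-scoring examples
    labels[flip_idx] = target_class                # relabel to the attacker's target
    return labels, flip_idx

For CIFAR-10 with budget=0.02, this would relabel 1,000 of the 50,000 training examples, matching the 2% corruption budget quoted above.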