
    Robust Semi-Supervised Learning with Out of Distribution Data

    Recent semi-supervised learning (SSL) work shows significant performance improvements from better unlabeled-data representations. However, recent work [Oliver et al., 2018] shows that an SSL algorithm's performance can degrade when the unlabeled set contains out-of-distribution examples (OODs). In this work, we first study the critical causes of OODs' negative impact on SSL algorithms. We find that (1) an OOD's effect on the SSL algorithm's performance increases as its distance to the decision boundary decreases, and (2) Batch Normalization (BN), a popular module, can degrade performance instead of improving it when the unlabeled set contains OODs. To address these causes, we propose a unified robust SSL approach that can be easily extended to many existing SSL algorithms and improves their robustness against OODs. In particular, we propose a simple modification of batch normalization, called weighted batch normalization, that improves BN's robustness against OODs. We also develop two efficient hyper-parameter optimization algorithms that make different tradeoffs between computational efficiency and accuracy. Extensive experiments on synthetic and real-world datasets show that our proposed approaches significantly improve the robustness of four representative SSL algorithms against OODs, compared with four state-of-the-art robust SSL approaches.
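    The abstract does not give the exact form of weighted batch normalization, but the natural reading is that batch statistics are computed with per-sample weights so that suspected OODs contribute less. A minimal sketch under that assumption (the function name and weighting scheme here are illustrative, not the paper's implementation):

    ```python
    import numpy as np

    def weighted_batch_norm(x, w, eps=1e-5):
        """Sketch of weighted batch normalization.

        x : (N, D) batch of features
        w : (N,) non-negative per-sample weights, e.g. small for
            samples suspected to be out-of-distribution (OOD)

        Down-weighted samples contribute less to the batch mean and
        variance, so OODs distort the normalization statistics less.
        """
        w = w / w.sum()                                    # weights sum to 1
        mean = (w[:, None] * x).sum(axis=0)                # weighted mean per feature
        var = (w[:, None] * (x - mean) ** 2).sum(axis=0)   # weighted variance per feature
        return (x - mean) / np.sqrt(var + eps)

    # With uniform weights this reduces to standard batch normalization.
    np.random.seed(0)
    x = np.random.randn(8, 4)
    out = weighted_batch_norm(x, np.ones(8))
    ```

    A practical version would also carry BN's learnable scale and shift parameters and running statistics; they are omitted here to keep the statistics computation visible.
    
    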

    Improving Robust Fairness via Balance Adversarial Training

    Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce a severe disparity in accuracy and robustness between different classes, known as the robust fairness problem. Previously proposed Fair Robust Learning (FRL) adaptively reweights different classes to improve fairness. However, the performance of the better-performing classes decreases, leading to a strong overall performance drop. In this paper, we observe two unfair phenomena during adversarial training: different difficulties in generating adversarial examples from each class (source-class fairness) and disparate target-class tendencies when generating adversarial examples (target-class fairness). From these observations, we propose Balance Adversarial Training (BAT) to address the robust fairness problem. For source-class fairness, we adjust the attack strength and difficulty for each class to generate samples near the decision boundary, enabling easier and fairer model learning; for target-class fairness, we introduce a uniform distribution constraint that encourages the adversarial example generation process for each class to have a fair target tendency. Extensive experiments conducted on multiple datasets (CIFAR-10, CIFAR-100, and ImageNette) demonstrate that our method significantly outperforms other baselines in mitigating the robust fairness problem (+5-10% on worst-class accuracy).