7 research outputs found
Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training
Despite their performance, Artificial Neural Networks are not reliable enough
for most industrial applications. They are sensitive to noise, rotations,
blur, and adversarial examples. There is a need for defenses that protect
against a wide range of perturbations, covering both the most common
corruptions and adversarial examples. We propose a new data augmentation
strategy called M-TLAT, designed to address robustness in a broad sense. Our
approach combines Mixup augmentation with a new adversarial training
algorithm called Targeted Labeling Adversarial Training (TLAT). The idea of
TLAT is to interpolate the target labels of adversarial examples with the
ground-truth labels. We show that M-TLAT increases the robustness of image
classifiers to nineteen common corruptions and five adversarial attacks,
without reducing accuracy on clean samples.
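As a rough illustration of the label-interpolation idea behind TLAT combined with Mixup (a minimal sketch, not the authors' implementation; the Beta coefficient, attack step size, and interpolation weight gamma below are assumed placeholders), a PyTorch-style version might look like:

```python
import torch
import torch.nn.functional as F

def mixup(x, y_onehot, alpha=1.0):
    # Standard Mixup: convex combination of a batch with a shuffled copy of itself.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y_onehot + (1 - lam) * y_onehot[perm]

def tlat_example(model, x, y_onehot, y_target_onehot, eps=8 / 255, gamma=0.5):
    # Craft a targeted adversarial example (single FGSM-style step towards the
    # target class), then interpolate the attack-target label with the ground
    # truth, so the network is trained on a soft label rather than the clean
    # label alone.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_target_onehot.argmax(dim=1))
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv - eps * grad.sign()).clamp(0, 1).detach()  # step towards the target class
    y_adv = (1 - gamma) * y_onehot + gamma * y_target_onehot  # label interpolation
    return x_adv, y_adv
```

In such a sketch, both the mixed pair and the adversarial pair would be trained with a soft-label cross-entropy loss, which is where the Mixup and TLAT components combine.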
Improving Model Robustness with Latent Distribution Locally and Globally
In this work, we consider model robustness of deep neural networks against
adversarial attacks from a global manifold perspective. Leveraging both the
local and global latent information, we propose a novel adversarial training
method through robust optimization, and a tractable way to generate Latent
Manifold Adversarial Examples (LMAEs) via an adversarial game between a
discriminator and a classifier. The proposed adversarial training with latent
distribution (ATLD) method defends against adversarial attacks by crafting
LMAEs over the latent manifold in an unsupervised manner. ATLD preserves the
local and global information of the latent manifold and promises improved
robustness against adversarial attacks. To verify the effectiveness of our
proposed method, we conduct extensive experiments over different datasets
(e.g., CIFAR-10, CIFAR-100, SVHN) with different adversarial attacks (e.g.,
PGD, CW), and show that our method outperforms the state-of-the-art (e.g.,
Feature Scattering) in adversarial robustness by a large accuracy margin. The
source code is available at
https://github.com/LitterQ/ATLD-pytorch
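As a schematic sketch of how such a latent-manifold adversarial example might be crafted without labels (the feature-extractor hook, the KL divergence surrogate, and the PGD-style step sizes below are illustrative assumptions, not the authors' exact formulation; the repository above is the authoritative reference):

```python
import torch
import torch.nn.functional as F

def craft_lmae(classifier, discriminator, x, eps=8 / 255, step=2 / 255, iters=10):
    # Unsupervised crafting: no ground-truth label is used. The input is
    # perturbed so that, as seen by the discriminator, the latent features of
    # the perturbed batch drift as far as possible from those of the clean batch.
    with torch.no_grad():
        clean_logits = discriminator(classifier.features(x))  # assumed latent-feature hook
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        adv_logits = discriminator(classifier.features(x_adv))
        # Divergence surrogate between clean and perturbed latent statistics.
        loss = F.kl_div(F.log_softmax(adv_logits, dim=1),
                        F.softmax(clean_logits, dim=1), reduction="batchmean")
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()                    # ascend the divergence
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

The classifier would then be trained on these examples, while the discriminator is updated to keep telling clean and perturbed latent distributions apart, forming the adversarial game described above.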
The Paradox of Noise: An Empirical Study of Noise-Infusion Mechanisms to Improve Generalization, Stability, and Privacy in Federated Learning
In a data-centric era, concerns regarding privacy and ethical data handling
grow as machine learning relies more on personal information. This empirical
study investigates the privacy, generalization, and stability of deep learning
models in the presence of additive noise in federated learning frameworks. Our
main objective is to provide strategies to measure the generalization,
stability, and privacy-preserving capabilities of these models and further
improve them. To this end, we explore five noise-infusion mechanisms at varying
noise levels in both centralized and federated learning settings. Because model
complexity is a key factor in the generalization and stability of deep learning
models during training and evaluation, we provide a comparative analysis of
three Convolutional Neural Network (CNN) architectures. The paper
introduces Signal-to-Noise Ratio (SNR) as a quantitative measure of the
trade-off between privacy and training accuracy of noise-infused models, aiming
to find the noise level that yields optimal privacy and accuracy. Moreover, the
Price of Stability and Price of Anarchy are defined in the context of
privacy-preserving deep learning, contributing to a systematic investigation of
noise-infusion strategies that enhance privacy without compromising
performance. Our research sheds light on the delicate balance between these
critical factors, fostering a deeper understanding of the implications of
noise-based regularization in machine learning. By leveraging noise as a tool
for regularization and privacy enhancement, we aim to contribute to the
development of robust, privacy-aware algorithms, ensuring that AI-driven
solutions prioritize both utility and privacy.
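As a small illustration of the kind of measurement involved (the per-update Gaussian infusion and the decibel formulation below are assumptions for illustration, not necessarily the paper's exact definitions), a Python sketch of additive noise infusion and its SNR might look like:

```python
import numpy as np

def infuse_noise(update, sigma, rng):
    # Additive Gaussian noise on a flattened model update (one of several
    # possible infusion mechanisms).
    noise = rng.normal(0.0, sigma, size=update.shape)
    return update + noise, noise

def snr_db(signal, noise):
    # Signal-to-Noise Ratio in decibels: 10 * log10(P_signal / P_noise).
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

# Sweep noise levels to trace the privacy/utility trade-off: a lower SNR masks
# the update more strongly (more privacy) but typically costs accuracy.
rng = np.random.default_rng(0)
update = 0.01 * rng.standard_normal(10_000)  # stand-in for a weight delta
for sigma in (0.001, 0.005, 0.01, 0.05):
    noisy, noise = infuse_noise(update, sigma, rng)
    print(f"sigma={sigma:<6} SNR={snr_db(update, noise):6.2f} dB")
```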