Adversarial Robust Memory-Based Continual Learner
Despite the remarkable advances that have been made in continual learning,
the adversarial vulnerability of such methods has not been fully discussed. We
delve into the adversarial robustness of memory-based continual learning
algorithms and observe limited robustness improvement by directly applying
adversarial training techniques. Preliminary studies reveal the twin challenges
for building adversarially robust continual learners: accelerated forgetting in
continual learning and gradient obfuscation in adversarial robustness. In this
study, we put forward a novel adversarially robust memory-based continual learner
that adjusts data logits to mitigate the forgetting of past tasks caused by
adversarial samples. Furthermore, we devise a gradient-based data selection
mechanism to overcome the gradient obfuscation caused by limited stored data.
The proposed approach can be integrated with existing memory-based continual
learning methods as well as adversarial training algorithms in a plug-and-play
manner. Extensive experiments on Split-CIFAR10/100 and Split-Tiny-ImageNet
demonstrate the effectiveness of our approach, achieving up to 8.13% higher
accuracy on adversarial data.
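The abstract describes the gradient-based data selection only at a high level. The sketch below is a minimal, hypothetical illustration of one way such a mechanism could be realized, scoring stored samples by the norm of the loss gradient they induce; the function name, the selection criterion, and the PyTorch framing are assumptions, not the paper's actual method.

```python
# Hypothetical sketch of gradient-based selection of replay samples
# (NOT the paper's implementation): rank stored samples by the norm of
# the loss gradient they induce and keep the top-k for replay.
import torch
import torch.nn.functional as F

def select_by_gradient_norm(model, memory_x, memory_y, k):
    """Return indices of the k memory samples with the largest gradient norm."""
    params = [p for p in model.parameters() if p.requires_grad]
    norms = []
    for x, y in zip(memory_x, memory_y):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params, allow_unused=True)
        flat = torch.cat([g.flatten() for g in grads if g is not None])
        norms.append(flat.norm())
    return torch.topk(torch.stack(norms), k).indices
```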
Towards Adversarially Robust Continual Learning
Recent studies show that models trained by continual learning can achieve
performance comparable to standard supervised learning, and the learning
flexibility of continual learning models enables their wide application in the
real world. Deep learning models, however, are shown to be vulnerable to
adversarial attacks. Though there are many studies on model robustness in
the context of standard supervised learning, protecting continual learning from
adversarial attacks has not yet been investigated. To fill this research
gap, we are the first to study adversarial robustness in continual learning and
propose a novel method called \textbf{T}ask-\textbf{A}ware \textbf{B}oundary
\textbf{A}ugmentation (TABA) to boost the robustness of continual learning
models. With extensive experiments on CIFAR-10 and CIFAR-100, we show the
efficacy of adversarial training and TABA in defending against adversarial attacks.
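TABA itself is not detailed in the abstract, so the following is only a generic sketch of the kind of step such a method builds on: PGD adversarial training applied to a batch that mixes current-task data with replayed memory samples. The hyperparameters and the batch mixing are common defaults chosen for illustration, not values from the paper.

```python
# Generic PGD adversarial-training step on mixed new-task + replayed data
# (an illustrative baseline, not TABA itself).
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-inf PGD: maximize the loss within an eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adv_training_step(model, optimizer, new_x, new_y, mem_x, mem_y):
    """One update on adversarial versions of current-task and replayed samples."""
    x, y = torch.cat([new_x, mem_x]), torch.cat([new_y, mem_y])
    x_adv = pgd(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```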
Susceptibility of Continual Learning Against Adversarial Attacks
Recent continual learning approaches have primarily focused on mitigating
catastrophic forgetting. Nevertheless, two critical areas have remained
relatively unexplored: 1) evaluating the robustness of proposed methods and 2)
ensuring the security of learned tasks. This paper investigates the
susceptibility of continually learned tasks, including current and previously
acquired tasks, to adversarial attacks. Specifically, we have observed that any
class belonging to any task can be easily targeted and misclassified as the
desired target class of any other task. Such susceptibility or vulnerability of
learned tasks to adversarial attacks raises profound concerns regarding data
integrity and privacy. To assess the robustness of continual learning
approaches, we consider all three scenarios,
i.e., task-incremental learning, domain-incremental learning, and
class-incremental learning. In this regard, we explore the robustness of three
regularization-based methods, three replay-based approaches, and one hybrid
technique that combines replay and exemplar approaches. We empirically
demonstrate that in any setting of continual learning, any class, whether
belonging to the current or previously learned tasks, is susceptible to
misclassification. Our observations identify potential limitations of continual
learning approaches against adversarial attacks and highlight that current
continual learning algorithms may not be suitable for deployment in
real-world settings.
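As a concrete illustration of the susceptibility described above, the sketch below runs a generic targeted PGD attack that tries to push samples from an earlier task into an attacker-chosen class and measures how often it succeeds; the attack settings and helper names are assumptions, not the paper's evaluation code.

```python
# Hedged illustration of cross-task targeted misclassification: perturb
# old-task samples so a continually trained model predicts an attacker-chosen
# class. Generic targeted PGD is used as a stand-in for the paper's attack.
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, eps=8/255, alpha=2/255, steps=20):
    """Minimize loss w.r.t. the target label so the model predicts `target`."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), target), x_adv)[0]
        x_adv = x_adv - alpha * grad.sign()  # descend toward the target class
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def attack_success_rate(model, old_task_loader, target_class):
    """Fraction of old-task samples pushed into `target_class` of another task."""
    hits, total = 0, 0
    model.eval()
    for x, _ in old_task_loader:
        target = torch.full((x.size(0),), target_class, dtype=torch.long)
        preds = model(targeted_pgd(model, x, target)).argmax(dim=1)
        hits += (preds == target).sum().item()
        total += x.size(0)
    return hits / max(total, 1)
```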
PACOL: Poisoning Attacks Against Continual Learners
Continual learning algorithms are typically exposed to untrusted sources that
contain training data inserted by adversaries and bad actors. An adversary can
insert a small number of poisoned samples, such as mislabeled samples from
previously learned tasks or intentionally perturbed adversarial samples, into
the training datasets, which can drastically reduce the model's performance. In
this work, we demonstrate that continual learning systems can be manipulated by
malicious misinformation and present a new category of data poisoning attacks
specific to continual learners, which we refer to as {\em Poisoning Attacks
Against Continual Learners} (PACOL). PACOL is inspired by the effectiveness of
label-flipping attacks; however, it produces attack samples that leave the
sample's label unchanged while still causing catastrophic
forgetting. A comprehensive set of experiments shows the vulnerability of
commonly used generative replay and regularization-based continual learning
approaches to these attacks. We evaluate the ability of label-flipping attacks
and the new adversarial poisoning attack proposed in this work, PACOL, to
force a continual learning system to forget the knowledge of previously learned
tasks. More specifically, we compare the performance degradation of
continual learning systems trained on benchmark data streams with and without
poisoning attacks. Moreover, we discuss the stealthiness of the attacks,
testing whether data sanitization and other outlier-detection-based defenses
can filter out the adversarial samples.
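The abstract does not spell out how PACOL constructs its clean-label samples, so the snippet below only sketches the label-flipping baseline it is compared against: a small fraction of samples from previously learned tasks is relabeled to an attacker-chosen class before being mixed into the training stream. The fraction, class choices, and data layout are illustrative assumptions.

```python
# Sketch of the label-flipping baseline mentioned in the abstract
# (PACOL itself keeps labels unchanged and is not reproduced here).
# Assumes the dataset yields (sample, int_label) pairs.
import random

def flip_labels(dataset, old_task_classes, flip_to, fraction=0.05, seed=0):
    """Return (x, y) pairs with a fraction of old-task labels flipped to `flip_to`."""
    rng = random.Random(seed)
    poisoned = []
    for x, y in dataset:
        if y in old_task_classes and rng.random() < fraction:
            poisoned.append((x, flip_to))
        else:
            poisoned.append((x, y))
    return poisoned
```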
Learning to Predict Gradients for Semi-Supervised Continual Learning
A key challenge for machine intelligence is to learn new visual concepts
without forgetting previously acquired knowledge. Continual learning
aims to address this challenge. However, there is a gap between
existing supervised continual learning and human-like intelligence, where humans
are able to learn from both labeled and unlabeled data. How unlabeled data
affects learning and catastrophic forgetting in the continual learning process
remains unknown. To explore these issues, we formulate a new semi-supervised
continual learning method, which can be generically applied to existing
continual learning models. Specifically, a novel gradient learner learns from
labeled data to predict gradients on unlabeled data. Hence, the unlabeled data
can be incorporated into the supervised continual learning method. Different from
conventional semi-supervised settings, we do not assume that the
underlying classes, which are associated with the unlabeled data, are known to
the learning process. In other words, the unlabeled data could be very distinct
from the labeled data. We evaluate the proposed method on mainstream continual
learning, adversarial continual learning, and semi-supervised learning tasks.
The proposed method achieves state-of-the-art performance on classification
accuracy and backward transfer in the continual learning setting while
achieving the desired classification accuracy in the semi-supervised
learning setting. This implies that unlabeled images can enhance the
generalizability of continual learning models to unseen data
and significantly alleviate catastrophic forgetting. The code is
available at \url{https://github.com/luoyan407/grad_prediction.git}.
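The released code is linked above; the following is only a loose, independent sketch of the idea as stated in the abstract: a small predictor is fit on labeled batches to output the cross-entropy gradient with respect to the logits (softmax minus one-hot), and its predictions can then stand in for the unknown gradient on unlabeled data. The architecture, names, and training target are assumptions, not the authors' implementation.

```python
# Loose sketch of a "gradient learner" (not the released code): a small MLP
# maps penultimate features to the analytic cross-entropy gradient w.r.t. the
# logits, learned on labeled data and reusable on unlabeled data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientPredictor(nn.Module):
    """Maps penultimate features to a predicted gradient w.r.t. the logits."""
    def __init__(self, feat_dim, num_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, feats):
        return self.net(feats)

def gradient_target(logits, labels):
    """Analytic cross-entropy gradient w.r.t. logits: softmax(z) - one_hot(y)."""
    return F.softmax(logits, dim=1) - F.one_hot(labels, logits.size(1)).float()

def train_predictor_step(predictor, opt, feats, logits, labels):
    """Fit the predictor on a labeled batch; on unlabeled data its output
    would stand in for the true (unknown) gradient."""
    opt.zero_grad()
    target = gradient_target(logits.detach(), labels)
    loss = F.mse_loss(predictor(feats.detach()), target)
    loss.backward()
    opt.step()
    return loss.item()
```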