A large number of incremental learning (IL) algorithms have been proposed to
alleviate the catastrophic forgetting that arises when learning from
sequential data arriving over time. However, the adversarial robustness of
incremental learners has not been widely examined, leaving potential security
risks. Specifically, for poisoning-based backdoor attacks, we argue that the
streaming nature of data in IL greatly favors the adversary by enabling
distributed and cross-task attacks: an adversary can affect \textbf{any
unknown} previous or subsequent task by poisoning data \textbf{at any time
step or sequence of time steps}, injecting only an extremely small number of
backdoor samples (e.g., 0.1\% based on our observations). To draw the
attention of the research community, in this paper we empirically reveal the
high vulnerability of 11 typical incremental learners to poisoning-based
backdoor attacks under 3 learning scenarios, with particular focus on the
cross-task generalization of backdoor knowledge, at poison ratios ranging from
5\% down to as low as 0.1\%. Finally, a defense mechanism based on activation
clustering is shown to be effective in detecting our trigger pattern and thus
mitigating the potential security risks.
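
As a concrete illustration of the distributed, low-ratio attack surface described above, the following is a minimal sketch of how an adversary could poison a tiny fraction (e.g., 0.1\%) of the data of a single task in the stream. The function name, the corner-patch trigger, and the dirty-label strategy are illustrative assumptions, not the paper's actual trigger design.

\begin{verbatim}
import numpy as np

def poison_task_data(images, labels, target_label, poison_ratio=0.001, seed=0):
    """Inject a simple patch trigger into a tiny fraction of one task's data.

    images: float array of shape (N, H, W, C) with values in [0, 1]
    labels: int array of shape (N,)
    The 3x3 white corner patch and the dirty-label relabeling used here are
    illustrative; the actual trigger pattern may differ.
    """
    rng = np.random.default_rng(seed)
    n = len(images)
    n_poison = max(1, int(n * poison_ratio))   # e.g., 0.1% of this task's data
    idx = rng.choice(n, size=n_poison, replace=False)

    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    poisoned_images[idx, -3:, -3:, :] = 1.0    # stamp trigger in bottom-right corner
    poisoned_labels[idx] = target_label        # relabel to the attacker's target class
    return poisoned_images, poisoned_labels

# Usage: poison only the task the adversary controls at some time step;
# earlier and later tasks in the stream are left untouched, yet the backdoor
# can still transfer across tasks as the learner updates incrementally.
\end{verbatim}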
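
The defense mentioned above can be sketched as two-means clustering of penultimate-layer activations, in the spirit of activation clustering (Chen et al., 2018). The PCA dimensionality and the relative-size threshold below are illustrative assumptions rather than the settings evaluated in the paper.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def activation_clustering(activations, n_components=10, size_threshold=0.35):
    """Flag a class as possibly backdoored via 2-means clustering of its
    penultimate-layer activations.

    activations: array of shape (N, D), one row per sample of a single class.
    Returns (is_suspicious, cluster_labels). The choices of 10 PCA components
    and a 0.35 relative-size threshold are illustrative, not the paper's.
    """
    reduced = PCA(n_components=min(n_components, activations.shape[1])).fit_transform(activations)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
    smaller = min(np.mean(labels == 0), np.mean(labels == 1))
    # A markedly smaller, tighter cluster often corresponds to poisoned samples.
    return smaller < size_threshold, labels
\end{verbatim}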