Catastrophic forgetting of previous knowledge is a critical issue in
continual learning, typically handled through various regularization strategies.
However, existing methods struggle, especially when several incremental steps
are performed. In this paper, we extend our previous approach (RECALL) and
tackle forgetting by exploiting unsupervised web-crawled data to retrieve
examples of old classes from online databases. Unlike the original
approach, which did not perform any evaluation of the web data, here we introduce
two novel approaches based on adversarial learning and adaptive thresholding to
select from web data only samples that strongly resemble the statistics of the no
longer available training ones. Furthermore, we improve the pseudo-labeling
scheme to achieve a more accurate labeling of web data that also considers the
classes being learned in the current step. Experimental results show that this
enhanced approach performs remarkably well, especially when multiple
incremental learning steps are performed.
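
To make the adaptive-thresholding idea more concrete, below is a minimal sketch, not the paper's actual implementation: web-crawled images retrieved for an old class are kept only if the previous-step model assigns them a confidence for that class above a per-class adaptive threshold. All names (`select_web_samples`, `base_tau`, `quantile`) and the particular confidence statistic are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def select_web_samples(model, web_images, cls_idx, base_tau=0.7, quantile=0.5):
    """Hypothetical sketch: keep only web images whose mean prediction
    confidence for the queried old class exceeds an adaptive threshold.

    model      -- frozen segmentation model from the previous step
    web_images -- tensor of crawled images, shape (N, 3, H, W)
    cls_idx    -- index of the old class the images were retrieved for
    """
    model.eval()
    with torch.no_grad():
        logits = model(web_images)                 # (N, C, H, W)
        probs = F.softmax(logits, dim=1)
        # mean confidence of the queried class over all pixels of each image
        conf = probs[:, cls_idx].mean(dim=(1, 2))  # (N,)

    # adaptive threshold: never below base_tau, tightened by the
    # confidence distribution of the retrieved batch itself
    tau = max(base_tau, torch.quantile(conf, quantile).item())
    return web_images[conf >= tau]
```

A selection rule of this kind replaces a fixed global threshold with one that adapts to how confidently the old model recognizes each class, which is the intuition behind filtering web samples that do not resemble the original training distribution.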