Knowledge Restore and Transfer for Multi-label Class-Incremental Learning
Current class-incremental learning research focuses mainly on single-label classification, while multi-label class-incremental learning (MLCIL), which covers more practical application scenarios, is rarely studied. Although many anti-forgetting methods have been proposed to address catastrophic forgetting in class-incremental learning, they have difficulty with the MLCIL problem due to label absence and information dilution. In this paper, we propose a knowledge restore and transfer (KRT) framework for MLCIL, which includes a dynamic pseudo-label (DPL) module to restore old-class knowledge and an incremental cross-attention (ICA) module to store session-specific knowledge and sufficiently transfer old-class knowledge to the new model. In addition, we propose a token loss to jointly optimize the incremental cross-attention module. Experimental results on the MS-COCO and PASCAL VOC datasets demonstrate the effectiveness of our method at improving recognition performance and mitigating forgetting on multi-label class-incremental learning tasks.
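Since the abstract only names the DPL module, the following is a minimal sketch of the pseudo-labeling idea it describes, assuming a PyTorch multi-label setup: the frozen model from the previous session scores each training image, and its confident old-class predictions are merged into the current session's ground truth, which otherwise annotates only the new classes. All names here (restore_old_class_labels, old_model, the fixed threshold) are illustrative assumptions; in particular, the paper's threshold may be set dynamically rather than fixed.

import torch

@torch.no_grad()
def restore_old_class_labels(images, labels_new, old_model, num_old, threshold=0.5):
    # Score the batch with the frozen previous-session model.
    old_model.eval()
    probs = torch.sigmoid(old_model(images))            # per-class scores in [0, 1]
    # Keep only confident old-class predictions as pseudo-labels.
    pseudo = (probs[:, :num_old] > threshold).float()
    # Merge them into the session labels, which cover new classes only.
    labels = labels_new.clone()
    labels[:, :num_old] = torch.maximum(labels[:, :num_old], pseudo)
    return labels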
Learning New Classes from Limited Data in Image Segmentation and Object Detection
The abstract is provided in the attachment.
Efficient Curriculum based Continual Learning with Informative Subset Selection for Remote Sensing Scene Classification
We tackle the problem of class-incremental learning (CIL) for land-cover classification from optical remote sensing (RS) images. The CIL paradigm has recently gained prominence because real-world data are generally obtained sequentially. However, CIL has not yet been extensively considered in the RS domain, even though satellites continually encounter new classes at different geographical locations over time. With this motivation, we propose a novel CIL framework inspired by the recent success of replay-memory based approaches, tackling two of their shortcomings. To reduce catastrophic forgetting of the old classes when a new stream arrives, we learn a curriculum of the new classes based on their similarity to the old classes, which is found to limit the degree of forgetting substantially. Next, when constructing the replay memory, instead of randomly selecting samples from the old streams, we propose a sample selection strategy that retains only highly confident samples, thereby reducing the effects of noise. We observe a sharp improvement in CIL performance with the proposed components. Experimental results on the benchmark NWPU-RESISC45, PatternNet, and EuroSAT datasets confirm that our method offers a better stability-plasticity trade-off than existing approaches.
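As a rough illustration of the two components named above, here is a minimal sketch under assumed interfaces: new classes are ordered by the cosine similarity of their feature prototypes to the nearest old-class prototype, and the replay memory keeps the samples the model scores most confidently for each class. Everything here (curriculum_order, select_replay_samples, prototype features, the most-similar-first ordering) is an illustrative assumption, not the paper's code.

import numpy as np

def curriculum_order(new_protos, old_protos):
    # Cosine similarity of each new-class prototype to its nearest old class.
    new_n = new_protos / np.linalg.norm(new_protos, axis=1, keepdims=True)
    old_n = old_protos / np.linalg.norm(old_protos, axis=1, keepdims=True)
    sim = (new_n @ old_n.T).max(axis=1)
    # Present the most similar classes first (the ordering direction is assumed).
    return np.argsort(-sim)

def select_replay_samples(probs, labels, per_class):
    # For each old class, keep the samples with the highest predicted
    # probability for their true class, rather than sampling at random.
    memory = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        conf = probs[idx, c]
        memory.extend(idx[np.argsort(-conf)[:per_class]].tolist())
    return memory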
- …