Multi-Label Continual Learning (MLCL) builds a class-incremental framework in
a sequential multi-label image recognition data stream. The critical challenges
of MLCL are constructing label relationships from training data with past- and
future-missing partial labels, and catastrophic forgetting of old classes, both
of which lead to poor generalization. To address these problems, this study
proposes an Augmented Graph Convolutional Network (AGCN++) that constructs
cross-task label relationships in MLCL and mitigates catastrophic forgetting.
First, we build an Augmented Correlation Matrix (ACM) across all
seen classes, where the intra-task relationships derive from hard label
statistics, while the inter-task relationships leverage both hard labels from
the data and soft labels from a constructed expert network. Then, we propose a novel
partial label encoder (PLE) for MLCL, which extracts a dynamic class
representation for each partially labeled image as a graph node and helps
generate soft labels, yielding a more convincing ACM and suppressing
forgetting. Last, to
suppress the forgetting of label dependencies across old tasks, we propose a
relationship-preserving constraint on the constructed label relationships. The
inter-class topology can be augmented automatically, which also yields
effective class representations. The proposed method is evaluated using two
multi-label image benchmarks. The experimental results show that the proposed
method is effective for MLCL image recognition and can build convincing label
correlations across tasks even when the labels of previous tasks are missing.
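As a rough sketch of how such an Augmented Correlation Matrix might be assembled, the following is a minimal NumPy illustration, not the paper's implementation: the function names, the conditional co-occurrence formulation, and the expert-network soft labels standing in for missing old-class annotations are all assumptions for exposition.

```python
import numpy as np

def cooccurrence_matrix(hard_labels):
    """Conditional co-occurrence P(j | i) from binary hard labels.

    hard_labels: (num_images, num_classes) 0/1 array for one task.
    """
    counts = hard_labels.T @ hard_labels            # pairwise co-occurrence counts
    class_counts = np.maximum(hard_labels.sum(0), 1)
    return counts / class_counts[:, None]           # row i holds P(class j | class i)

def augmented_correlation_matrix(old_acm, hard_labels, soft_old_labels):
    """Extend the old-class ACM with a new task's classes (illustrative only).

    old_acm:         (n_old, n_old) relationships among previously seen classes.
    hard_labels:     (n, n_new) hard annotations for the new-task classes.
    soft_old_labels: (n, n_old) expert-network predictions for the old classes
                     on the same images (hypothetical stand-in for missing labels).
    """
    n_old, n_new = old_acm.shape[0], hard_labels.shape[1]
    acm = np.zeros((n_old + n_new, n_old + n_new))
    acm[:n_old, :n_old] = old_acm                            # old intra-task block
    acm[n_old:, n_old:] = cooccurrence_matrix(hard_labels)   # new intra-task block
    # Inter-task blocks: combine soft old-class scores with new hard labels.
    joint = soft_old_labels.T @ hard_labels                  # (n_old, n_new)
    acm[:n_old, n_old:] = joint / np.maximum(soft_old_labels.sum(0), 1e-8)[:, None]
    acm[n_old:, :n_old] = joint.T / np.maximum(hard_labels.sum(0), 1)[:, None]
    return acm
```

The intra-task blocks need only hard label statistics, while both inter-task blocks depend on the soft labels, which is why a trustworthy soft-label source (the PLE and expert network in the abstract) matters for the cross-task part of the matrix.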