Optimally convergent hybridizable discontinuous Galerkin method for fifth-order Korteweg-de Vries type equations
We develop and analyze the first hybridizable discontinuous Galerkin (HDG)
method for solving fifth-order Korteweg-de Vries (KdV) type equations. We show
that the semi-discrete scheme is stable with proper choices of the
stabilization functions in the numerical traces. For the linearized fifth-order
equations, we prove that the approximations to the exact solution and its four
spatial derivatives as well as its time derivative all have optimal convergence
rates. The numerical experiments, demonstrating optimal convergence rates for
both the linear and nonlinear equations, validate our theoretical findings.
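For context, fifth-order KdV-type equations are typically of a form such as the following (an illustrative general form with generic coefficients; the exact equation and coefficients treated in the paper may differ):

    u_t + \alpha\, u u_x + \beta\, u_{xxx} + \gamma\, u_{xxxxx} = f(x,t),

where \alpha, \beta, \gamma are given constants. The quantities the abstract reports as optimally convergent, namely the solution and its first four spatial derivatives, correspond to the unknowns that appear when such an equation is rewritten as a first-order system, which is typically the starting point for an HDG discretization.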
Heterogeneous Forgetting Compensation for Class-Incremental Learning
Class-incremental learning (CIL) has achieved remarkable successes in
learning new classes consecutively while overcoming catastrophic forgetting on
old categories. However, most existing CIL methods unreasonably assume that all
old categories have the same forgetting pace, and neglect the negative influence of
forgetting heterogeneity among different old classes on forgetting
compensation. To surmount the above challenges, we develop a novel
Heterogeneous Forgetting Compensation (HFC) model, which can resolve
heterogeneous forgetting of easy-to-forget and hard-to-forget old categories
from both representation and gradient aspects. Specifically, we design a
task-semantic aggregation block to alleviate heterogeneous forgetting from
the representation aspect. It aggregates local category information within each
task to learn task-shared global representations. Moreover, we develop two
novel plug-and-play losses: a gradient-balanced forgetting compensation loss
and a gradient-balanced relation distillation loss to alleviate forgetting from
the gradient aspect. They apply gradient-balanced compensation to rectify the
forgetting heterogeneity of old categories and to enforce heterogeneous relation
consistency. Experiments on several representative datasets illustrate the
effectiveness of our HFC model. The code is available at
https://github.com/JiahuaDong/HFC. Comment: Accepted to ICCV 2023.
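As a rough, illustrative sketch of the gradient-balancing idea described above (the function name and the specific weighting are assumptions, not the authors' implementation; see the linked repository for the official code), per-class loss terms can be re-weighted inversely to their recent gradient magnitudes so that every old class is compensated at a comparable pace:

    def gradient_balanced_loss(per_class_losses, grad_norms, eps=1e-8):
        # per_class_losses: dict {class_id: scalar loss tensor}
        # grad_norms:       dict {class_id: recent gradient-norm estimate (float)}
        # Classes whose gradients are small are up-weighted and classes whose
        # gradients dominate are down-weighted, balancing forgetting paces.
        # Illustrative re-weighting only, not the exact HFC loss.
        weights = {c: 1.0 / (grad_norms[c] + eps) for c in per_class_losses}
        total = sum(weights.values())
        weights = {c: len(weights) * w / total for c, w in weights.items()}
        return sum(w * per_class_losses[c] for c, w in weights.items())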
Self-paced Weight Consolidation for Continual Learning
Continual learning algorithms that keep the parameters of new tasks close to
those of previous tasks are popular for preventing catastrophic forgetting in
sequential task learning settings. However, 1) the performance of the new
continual learner will degrade if the contributions of previously learned
tasks are not distinguished; 2) the computational cost will increase greatly
with the number of tasks, since most existing algorithms need to regularize all
previous tasks when learning new tasks. To address the above challenges, we
propose a self-paced Weight Consolidation (spWC) framework to attain robust
continual learning via evaluating the discriminative contributions of previous
tasks. To be specific, we develop a self-paced regularization to reflect the
priorities of past tasks by measuring their difficulty based on a key performance
indicator (i.e., accuracy). When encountering a new task, all previous tasks
are sorted from "difficult" to "easy" based on the priorities. Then the
parameters of the new continual learner will be learned by selectively
maintaining the knowledge of the more difficult past tasks, which can
overcome catastrophic forgetting at a lower computational cost. We adopt an
alternative convex search to iteratively update the model parameters and
priority weights in the bi-convex formulation. The proposed spWC framework is
plug-and-play, which is applicable to most continual learning algorithms (e.g.,
EWC, MAS and RCIL) in different directions (e.g., classification and
segmentation). Experimental results on several public benchmark datasets
demonstrate that our proposed framework can effectively improve performance
when compared with other popular continual learning algorithms.
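A minimal sketch of how such a priority-weighted consolidation term could look on top of an EWC-style quadratic penalty (the names, the skipping rule, and the exact weighting below are illustrative assumptions, not the paper's precise formulation):

    def spwc_penalty(params, task_snapshots, priorities, lam=1.0):
        # params:         dict {name: current parameter tensor}
        # task_snapshots: list of (fisher, old_params) dict pairs, one per past task
        # priorities:     self-paced weights in [0, 1]; harder (lower-accuracy)
        #                 past tasks get larger weights, easy ones may get 0
        # Illustrative sketch of a priority-weighted EWC-style regularizer.
        penalty = 0.0
        for (fisher, old), v in zip(task_snapshots, priorities):
            if v == 0.0:  # skipped tasks cost nothing, reducing computation
                continue
            for name, p in params.items():
                penalty = penalty + v * (fisher[name] * (p - old[name]) ** 2).sum()
        return lam * penalty

In the paper, the priority weights and the model parameters are updated alternately via alternative convex search on the bi-convex objective, with task difficulty measured by accuracy.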
I3DOL: Incremental 3D Object Learning without Catastrophic Forgetting
3D object classification has attracted considerable attention in academic
research and industrial applications. However, most existing methods need to
access the training data of past 3D object classes when facing the common
real-world scenario: new classes of 3D objects arrive in a sequence. Moreover,
the performance of advanced approaches degrades dramatically for past learned
classes (i.e., catastrophic forgetting), due to the irregular and redundant
geometric structures of 3D point cloud data. To address these challenges, we
propose a new Incremental 3D Object Learning (i.e., I3DOL) model, which is the
first exploration of learning new classes of 3D objects continually. Specifically,
an adaptive-geometric centroid module is designed to construct discriminative
local geometric structures, which can better characterize the irregular point
cloud representation of 3D objects. Afterwards, to prevent the catastrophic
forgetting brought by redundant geometric information, a geometric-aware
attention mechanism is developed to quantify the contributions of local
geometric structures, and explore unique 3D geometric characteristics with high
contributions to class-incremental learning. Meanwhile, a score fairness
compensation strategy is proposed to further alleviate the catastrophic
forgetting caused by unbalanced data between past and new classes of 3D objects,
by compensating biased prediction for new classes in the validation phase.
Experiments on 3D representative datasets validate the superiority of our I3DOL
framework. Comment: Accepted by the Association for the Advancement of
Artificial Intelligence 2021 (AAAI 2021).
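One plausible, purely illustrative reading of the score fairness compensation (the function and the bias estimate below are assumptions, not the exact I3DOL strategy) is to shift new-class logits at validation time by the estimated bias against old classes:

    import torch

    def compensate_scores(logits, new_class_ids, old_class_ids):
        # logits: (batch, num_classes) raw classification scores
        # Estimate how much new-class scores exceed old-class scores on average
        # and subtract that gap from the new-class logits.
        # Heuristic sketch only, not the exact I3DOL compensation.
        new_peak = logits[:, new_class_ids].max(dim=1).values.mean()
        old_peak = logits[:, old_class_ids].max(dim=1).values.mean()
        bias = torch.clamp(new_peak - old_peak, min=0.0)
        adjusted = logits.clone()
        adjusted[:, new_class_ids] -= bias
        return adjusted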
Gradient-Semantic Compensation for Incremental Semantic Segmentation
Incremental semantic segmentation aims to continually learn the segmentation
of newly arriving classes without accessing the training data of previously learned
classes. However, most current methods fail to address catastrophic forgetting
and background shift since they 1) treat all previous classes equally without
considering different forgetting paces caused by imbalanced gradient
back-propagation; 2) lack strong semantic guidance between classes. To tackle
the above challenges, in this paper, we propose a Gradient-Semantic
Compensation (GSC) model, which tackles incremental semantic segmentation
from both gradient and semantic perspectives. Specifically, to address
catastrophic forgetting from the gradient aspect, we develop a step-aware
gradient compensation that can balance forgetting paces of previously seen
classes via re-weighting gradient backpropagation. Meanwhile, we propose a
soft-sharp semantic relation distillation to distill consistent inter-class
semantic relations via soft labels for alleviating catastrophic forgetting from
the semantic aspect. In addition, we develop a prototypical pseudo re-labeling
that provides strong semantic guidance to mitigate background shift. It
produces high-quality pseudo labels for old classes in the background by
measuring distances between pixels and class-wise prototypes. Extensive
experiments on three public datasets, i.e., Pascal VOC 2012, ADE20K, and
Cityscapes, demonstrate the effectiveness of our proposed GSC model.
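A minimal sketch of the prototypical pseudo re-labeling step (variable names and the distance threshold are assumptions for illustration; the paper's exact rule may differ): background pixels whose features lie close to a stored old-class prototype are re-labeled as that class.

    import torch

    def prototypical_pseudo_labels(features, labels, prototypes, bg_id=0, tau=0.5):
        # features:   (N, D) per-pixel feature vectors
        # labels:     (N,) current labels, with old classes collapsed into bg_id
        # prototypes: dict {old_class_id: (D,) prototype vector}
        # tau:        distance threshold; only confident pixels get re-labeled
        # Illustrative sketch, not the exact GSC re-labeling rule.
        pseudo = labels.clone()
        bg_mask = labels == bg_id
        if not bg_mask.any():
            return pseudo
        class_ids = list(prototypes.keys())
        proto_mat = torch.stack([prototypes[c] for c in class_ids])   # (C_old, D)
        dists = torch.cdist(features[bg_mask], proto_mat)             # (N_bg, C_old)
        min_d, nearest = dists.min(dim=1)
        assigned = torch.tensor([class_ids[i] for i in nearest.tolist()],
                                dtype=pseudo.dtype)
        confident = min_d < tau
        bg_idx = bg_mask.nonzero(as_tuple=True)[0]
        pseudo[bg_idx[confident]] = assigned[confident]
        return pseudo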