Fast Adversarial Training with Smooth Convergence
Fast adversarial training (FAT) is beneficial for improving the adversarial
robustness of neural networks. However, previous FAT work has encountered a
significant issue known as catastrophic overfitting when dealing with large
perturbation budgets, i.e., the adversarial robustness of models declines to near
zero during training.
To address this, we analyze the training process of prior FAT work and
observe that catastrophic overfitting is accompanied by the appearance of loss
convergence outliers.
Therefore, we argue that a moderately smooth loss convergence process indicates
a stable FAT process, one that avoids catastrophic overfitting.
To obtain a smooth loss convergence process, we propose a novel oscillatory
constraint (dubbed ConvergeSmooth) to limit the loss difference between
adjacent epochs. The convergence stride of ConvergeSmooth is introduced to
balance convergence and smoothing. Likewise, we design a weight centralization
scheme that introduces no additional hyperparameters beyond the loss balance
coefficient.
Our proposed methods are attack-agnostic and thus can improve the training
stability of various FAT techniques.
Extensive experiments on popular datasets show that the proposed methods
efficiently avoid catastrophic overfitting and outperform all previous FAT
methods. Code is available at \url{https://github.com/FAT-CS/ConvergeSmooth}.
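As a rough illustration of the idea, the following PyTorch sketch penalizes only the portion of the epoch-to-epoch loss change that exceeds an allowed stride; the function name, default values, and hinge form are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def converge_smooth_penalty(loss_now, loss_prev, stride=0.1, beta=1.0):
    """Hypothetical sketch of an epoch-level loss-smoothness constraint.

    loss_now:  mean adversarial loss of the current epoch (tensor).
    loss_prev: mean loss recorded for the previous epoch (float).
    stride:    allowed convergence stride; changes within it are free.
    beta:      loss balance coefficient weighting the penalty.
    """
    diff = (loss_now - loss_prev).abs()
    # Penalize only the excess beyond the stride, so ordinary convergence
    # is not slowed down while outlier loss jumps are suppressed.
    return beta * torch.clamp(diff - stride, min=0.0)

# Usage sketch: total = adv_loss + converge_smooth_penalty(adv_loss, prev_epoch_loss)
```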
ComPtr: Towards Diverse Bi-source Dense Prediction Tasks via A Simple yet General Complementary Transformer
Deep learning (DL) has advanced the field of dense prediction, while
gradually dissolving the inherent barriers between different tasks. However,
most existing works focus on designing architectures and constructing visual
cues only for the specific task, which ignores the potential uniformity
introduced by the DL paradigm. In this paper, we attempt to construct a novel
\underline{ComP}lementary \underline{tr}ansformer, \textbf{ComPtr}, for diverse
bi-source dense prediction tasks. Specifically, unlike existing methods that
over-specialize in a single task or a subset of tasks, ComPtr starts from the
more general concept of bi-source dense prediction. Based on the basic
dependence on information complementarity, we propose consistency enhancement
and difference awareness components with which ComPtr can extract and collect
important visual semantic cues from different image sources for diverse tasks,
respectively. ComPtr treats different inputs equally and builds an efficient
dense interaction model in the form of sequence-to-sequence on top of the
transformer. This task-generic design provides a smooth foundation for
constructing the unified model that can simultaneously deal with various
bi-source information. In extensive experiments across several representative
vision tasks, i.e., remote sensing change detection, RGB-T crowd counting,
RGB-D/T salient object detection, and RGB-D semantic segmentation, the proposed
method consistently obtains favorable performance. The code will be available
at \url{https://github.com/lartpang/ComPtr}.
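To make the complementarity idea concrete, here is a toy PyTorch module that separates agreement and disagreement between two aligned feature maps; the module layout and fusion choices are assumptions for illustration and do not reproduce ComPtr's actual components.

```python
import torch
import torch.nn as nn

class BiSourceInteraction(nn.Module):
    """Toy consistency/difference interaction for two aligned feature maps."""

    def __init__(self, dim):
        super().__init__()
        self.consistency = nn.Conv2d(dim, dim, 1)  # emphasize shared cues
        self.difference = nn.Conv2d(dim, dim, 1)   # emphasize source-specific cues
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, feat_a, feat_b):
        shared = self.consistency(feat_a * feat_b)           # agreement
        distinct = self.difference((feat_a - feat_b).abs())  # disagreement
        return self.fuse(torch.cat([shared, distinct], dim=1))
```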
Catastrophic Overfitting: A Potential Blessing in Disguise
Fast Adversarial Training (FAT) has gained increasing attention within the
research community owing to its efficacy in improving adversarial robustness.
Particularly noteworthy is the challenge posed by catastrophic overfitting (CO)
in this field. Although existing FAT approaches have made strides in mitigating
CO, the gain in adversarial robustness comes with a non-negligible decline
in classification accuracy on clean samples. To tackle this issue, we initially
employ the feature activation differences between clean and adversarial
examples to analyze the underlying causes of CO. Intriguingly, our findings
reveal that CO can be attributed to the feature coverage induced by a few
specific pathways. By intentionally manipulating feature activation differences
in these pathways with well-designed regularization terms, we can effectively
mitigate and induce CO, providing further evidence for this observation.
Notably, models trained stably with these terms exhibit superior performance
compared to prior FAT work. On this basis, we harness CO to achieve `attack
obfuscation', aiming to bolster model performance. Consequently, the models
suffering from CO can attain optimal classification accuracy on both clean and
adversarial data when random noise is added to inputs during evaluation. We also
validate their robustness against transferred adversarial examples and the
necessity of inducing CO to improve robustness. Hence, CO may not be a problem
that has to be solved.
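The noisy-input evaluation described above can be sketched as follows; the uniform noise distribution and its magnitude are assumptions, since the abstract only states that random noise is added to inputs at test time.

```python
import torch

@torch.no_grad()
def evaluate_with_input_noise(model, loader, sigma=8 / 255, device="cpu"):
    """Accuracy under additive random input noise (illustrative sketch)."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        noise = torch.empty_like(x).uniform_(-sigma, sigma)  # assumed uniform
        pred = model((x + noise).clamp(0.0, 1.0)).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```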
Multi-scale Interactive Network for Salient Object Detection
Deep-learning-based salient object detection methods have achieved great
progress. However, the variable scales and unknown categories of salient objects
remain persistent challenges. These are closely related to the utilization of
multi-level and multi-scale features. In this paper, we propose the aggregate
interaction modules to integrate the features from adjacent levels, in which
less noise is introduced because only small up-/down-sampling rates are used.
To obtain more efficient multi-scale features from the integrated features, the
self-interaction modules are embedded in each decoder unit. Besides, the class
imbalance issue caused by the scale variation weakens the effect of the binary
cross entropy loss and results in the spatial inconsistency of the predictions.
Therefore, we exploit the consistency-enhanced loss to highlight the
fore-/back-ground difference and preserve the intra-class consistency.
Experimental results on five benchmark datasets demonstrate that the proposed
method without any post-processing performs favorably against 23
state-of-the-art approaches. The source code will be publicly available at
https://github.com/lartpang/MINet.
Comment: Accepted by CVPR 2020.
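For intuition, a consistency-enhanced, region-level loss can be written as an IoU-style term over soft true/false positives and negatives; whether this matches MINet's exact definition is an assumption, and it is offered only as a sketch.

```python
import torch

def consistency_enhanced_loss(pred, target, eps=1e-6):
    """IoU-style region loss highlighting fore-/background contrast (sketch).

    pred:   predicted saliency probabilities in [0, 1], shape (B, 1, H, W).
    target: binary ground-truth masks of the same shape.
    """
    tp = (pred * target).sum(dim=(1, 2, 3))        # soft true positives
    fp = (pred * (1 - target)).sum(dim=(1, 2, 3))  # soft false positives
    fn = ((1 - pred) * target).sum(dim=(1, 2, 3))  # soft false negatives
    # A region-level ratio penalizes spatially inconsistent predictions
    # more strongly than pixel-wise binary cross entropy does.
    return ((fp + fn) / (tp + fp + fn + eps)).mean()
```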
CAVER: Cross-Modal View-Mixed Transformer for Bi-Modal Salient Object Detection
Most of the existing bi-modal (RGB-D and RGB-T) salient object detection
methods utilize the convolution operation and construct complex interweave
fusion structures to achieve cross-modal information integration. The inherent
local connectivity of the convolution operation constrains the performance of
the convolution-based methods to a ceiling. In this work, we rethink these
tasks from the perspective of global information alignment and transformation.
Specifically, the proposed \underline{c}ross-mod\underline{a}l
\underline{v}iew-mixed transform\underline{er} (CAVER) cascades several
cross-modal integration units to construct a top-down transformer-based
information propagation path. CAVER treats the multi-scale and multi-modal
feature integration as a sequence-to-sequence context propagation and update
process built on a novel view-mixed attention mechanism. Besides, considering
the quadratic complexity w.r.t. the number of input tokens, we design a
parameter-free patch-wise token re-embedding strategy to simplify operations.
Extensive experimental results on RGB-D and RGB-T SOD datasets demonstrate that
such a simple two-stream encoder-decoder framework can surpass recent
state-of-the-art methods when it is equipped with the proposed components.
Comment: Updated version, more flexible structure, better performance.
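One parameter-free way to realize patch-wise token re-embedding is to merge neighboring tokens by average pooling before attention, shrinking the token count and hence the quadratic cost; the pooling operator chosen here is an assumption, not necessarily CAVER's exact strategy.

```python
import torch
import torch.nn.functional as F

def reembed_tokens(x, h, w, patch=2):
    """Merge each patch x patch token neighborhood by average pooling (sketch).

    x: (B, N, C) token sequence laid out on an h x w grid, with N == h * w.
    Returns a (B, N / patch**2, C) sequence for cheaper attention.
    """
    b, n, c = x.shape
    grid = x.transpose(1, 2).reshape(b, c, h, w)    # tokens -> feature map
    pooled = F.avg_pool2d(grid, kernel_size=patch)  # parameter-free merging
    return pooled.flatten(2).transpose(1, 2)        # feature map -> tokens

# Applying this to keys/values cuts attention cost from O(N^2) to O(N^2 / patch^2).
```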