Coping with Change: Learning Invariant and Minimum Sufficient Representations for Fine-Grained Visual Categorization
Fine-grained visual categorization (FGVC) is a challenging task due to
the similar visual appearance of different species. Previous studies implicitly
assume that the training and test data share the same underlying distribution,
and that features extracted by modern backbone architectures remain
discriminative and generalize well to unseen test data. However, we
empirically show that these assumptions do not always hold on benchmark
datasets. To this end, we combine the merits of invariant risk minimization
(IRM) and information bottleneck (IB) principle to learn invariant and minimum
sufficient (IMS) representations for FGVC, such that the overall model can
always discover the most succinct and consistent fine-grained features. We
apply the matrix-based Rényi's α-order entropy to simplify and
stabilize the training of IB; we also design a "soft" environment partition
scheme to make IRM applicable to the FGVC task. To the best of our knowledge, we
are the first to address the problem of FGVC from a generalization perspective
and develop a new information-theoretic solution accordingly. Extensive
experiments demonstrate the consistent performance gain offered by our IMS.
Comment: Manuscript accepted by CVIU; code is available on GitHub.
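As an illustration of how these ingredients might combine, here is a minimal PyTorch sketch, not the authors' released code: the entropy estimator follows the standard matrix-based Rényi's α-order definition, the penalty follows the IRMv1 form of Arjovsky et al., and the soft per-sample environment weights (env_weights) and trade-off coefficients (lam_ib, lam_irm) are hypothetical placeholders for the paper's actual scheme.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats, sigma=1.0):
    # Trace-normalized Gaussian Gram matrix, the input to the
    # matrix-based Renyi entropy estimator.
    d2 = torch.cdist(feats, feats) ** 2
    k = torch.exp(-d2 / (2 * sigma ** 2))
    return k / k.trace()

def renyi_entropy(a, alpha=1.01):
    # S_alpha(A) = log2(sum_i lambda_i(A)^alpha) / (1 - alpha)
    lam = torch.linalg.eigvalsh(a).clamp_min(1e-8)
    return torch.log2((lam ** alpha).sum()) / (1 - alpha)

def mutual_information(x, z, alpha=1.01):
    # I(X; Z) = S(A_x) + S(A_z) - S(A_x . A_z / tr(A_x . A_z)),
    # where "." is the Hadamard product.
    ax, az = gram_matrix(x), gram_matrix(z)
    axz = ax * az
    axz = axz / axz.trace()
    return (renyi_entropy(ax, alpha) + renyi_entropy(az, alpha)
            - renyi_entropy(axz, alpha))

def irm_penalty(logits, labels, weights):
    # IRMv1-style penalty on one (soft) environment: squared gradient
    # of the weighted risk w.r.t. a dummy classifier scale fixed at 1.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    per_sample = F.cross_entropy(logits * scale, labels, reduction="none")
    risk = (weights * per_sample).sum() / weights.sum().clamp_min(1e-8)
    (grad,) = torch.autograd.grad(risk, scale, create_graph=True)
    return (grad ** 2).sum()

def ims_loss(x, z, logits, labels, env_weights, lam_ib=0.1, lam_irm=1.0):
    # x: (B, D_in) flattened inputs, z: (B, D_z) bottleneck features,
    # env_weights: (B, E) soft environment assignments (hypothetical).
    ce = F.cross_entropy(logits, labels)
    ib = mutual_information(x, z)
    irm = sum(irm_penalty(logits, labels, env_weights[:, e])
              for e in range(env_weights.shape[1]))
    return ce + lam_ib * ib + lam_irm * irm
```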
CDLT: A Dataset with Concept Drift and Long-Tailed Distribution for Fine-Grained Visual Categorization
Data is the foundation for the development of computer vision, and the
establishment of datasets plays an important role in advancing the techniques
of fine-grained visual categorization (FGVC). In the existing FGVC datasets
used in computer vision, it is generally assumed that each collected instance
has fixed characteristics and the distribution of different categories is
relatively balanced. In real-world scenarios, however, the characteristics of
instances tend to vary over time, and category frequencies follow a
long-tailed distribution. Hence, the collected datasets may mislead the
optimization of fine-grained classifiers, resulting in poor
performance in real applications. Starting from these real-world conditions,
and to promote practical progress in fine-grained visual categorization, we
present a Concept Drift and Long-Tailed distribution (CDLT) dataset. Specifically, the
dataset gathers 11195 images of 250 instances from different species,
photographed in their natural contexts over 47 consecutive months. The collection
process involves dozens of crowd workers for photographing and domain experts
for labelling. Extensive baseline experiments using the state-of-the-art
fine-grained classification models demonstrate the issues of concept drift and
long-tailed distribution existed in the dataset, which require the attention of
future researches
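As a hedged sketch of the two conditions the dataset is built around, the snippet below shows a chronological split that exposes concept drift and a standard imbalance factor for the long tail; the (image_path, label, month) sample format and the 36-month split point are our assumptions, not the official CDLT protocol.

```python
from collections import Counter

def chronological_split(samples, train_months=36):
    # samples: iterable of (image_path, label, month) tuples with month
    # in [0, 47) for the 47 consecutive collection months; training on
    # early months and testing on later ones exposes concept drift.
    train = [s for s in samples if s[2] < train_months]
    test = [s for s in samples if s[2] >= train_months]
    return train, test

def imbalance_factor(samples):
    # Common long-tail measure: size of the most frequent class
    # divided by the size of the least frequent class.
    counts = Counter(label for _, label, _ in samples)
    return max(counts.values()) / min(counts.values())
```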
CLIP2Scene: Towards Label-efficient 3D Scene Understanding by CLIP
Contrastive Language-Image Pre-training (CLIP) achieves promising results in
2D zero-shot and few-shot learning. Despite this impressive 2D performance,
leveraging CLIP to aid learning in 3D scene understanding has yet to be
explored. In this paper, we make the first attempt to investigate how CLIP
knowledge benefits 3D scene understanding. We propose CLIP2Scene, a simple yet
effective framework that transfers CLIP knowledge from 2D image-text
pre-trained models to a 3D point cloud network. We show that the pre-trained 3D
network yields impressive performance on downstream semantic segmentation
tasks, i.e., annotation-free inference and fine-tuning with labelled data.
Specifically, built upon CLIP, we design a Semantic-driven Cross-modal
Contrastive Learning framework that pre-trains a 3D network via semantic and
spatial-temporal consistency regularization. For the former, we first leverage
CLIP's text semantics to select the positive and negative point samples and
then employ a contrastive loss to train the 3D network. For the latter, we
enforce consistency between temporally coherent point cloud features and
their corresponding image features. We conduct experiments on
SemanticKITTI, nuScenes, and ScanNet. For the first time, our pre-trained
network achieves annotation-free 3D semantic segmentation with 20.8% and 25.08%
mIoU on nuScenes and ScanNet, respectively. When fine-tuned with 1% or 100%
labelled data, our method significantly outperforms other self-supervised
methods, with improvements of 8% and 1% mIoU, respectively. Furthermore, we
demonstrate the generalizability of our method in handling cross-domain
datasets. Code is publicly available at https://github.com/runnanchen/CLIP2Scene.
Comment: CVPR 2023.
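The two regularizers can be illustrated with a short sketch. This is not the released CLIP2Scene code: the reduction of the semantic term to cross-entropy against CLIP text prototypes and the MSE form of the consistency term are our simplified reading, and the function and argument names are hypothetical.

```python
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(point_feats, text_embeds, pseudo_labels, tau=0.07):
    # point_feats: (N, D) features from the 3D network
    # text_embeds: (C, D) CLIP text embeddings, one per class prompt
    # pseudo_labels: (N,) class indices transferred from 2D predictions
    p = F.normalize(point_feats, dim=-1)
    t = F.normalize(text_embeds, dim=-1)
    logits = p @ t.T / tau  # (N, C) similarity of each point to each class
    # The positive for each point is the text embedding of its
    # pseudo-label; all other classes act as negatives, so the
    # objective reduces to cross-entropy over class similarities.
    return F.cross_entropy(logits, pseudo_labels)

def temporal_consistency_loss(point_feats, image_feats):
    # Spatio-temporal consistency (our simplified reading): pull
    # temporally coherent point features toward their paired image
    # features at corresponding pixels.
    return F.mse_loss(F.normalize(point_feats, dim=-1),
                      F.normalize(image_feats, dim=-1))
```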