Dataset Distillation: A Comprehensive Review
Recent success of deep learning is largely attributed to the sheer amount of
data used for training deep neural networks. Despite this unprecedented
success, the massive data unfortunately place a significant burden on storage
and transmission and make the model training process cumbersome. Besides,
relying on the raw data for training per se raises concerns about privacy and
copyright. To alleviate these shortcomings, dataset distillation (DD), also
known as dataset condensation (DC), was introduced and
has recently attracted much research attention in the community. Given an
original dataset, DD aims to derive a much smaller dataset containing synthetic
samples, based on which the trained models yield performance comparable with
those trained on the original dataset. In this paper, we give a comprehensive
review and summary of recent advances in DD and its applications. We first
introduce the task formally and propose an overall algorithmic framework that
all existing DD methods follow. Next, we provide a systematic taxonomy of
current methodologies in this area, and discuss their theoretical
interconnections. We also present current challenges in DD through extensive
experiments and envision possible directions for future work.
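To make the DD objective concrete, below is a minimal sketch of one representative family covered by the review, gradient matching: the synthetic set is optimized so that the training gradients it induces mimic those computed on real data. The toy model, tensor sizes, and hyperparameters here are illustrative assumptions, not the settings of any single surveyed method.

```python
# Gradient-matching dataset condensation, minimal sketch (assumed toy setup).
import torch
import torch.nn.functional as F

num_classes, ipc = 10, 10                       # 10 synthetic images per class
syn_x = torch.randn(num_classes * ipc, 3, 32, 32, requires_grad=True)
syn_y = torch.arange(num_classes).repeat_interleave(ipc)
opt_syn = torch.optim.SGD([syn_x], lr=0.1)      # optimizes pixels, not weights

model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, num_classes))
params = list(model.parameters())

def grad_match_step(real_x, real_y):
    """Update the synthetic set so its induced gradients match the real ones."""
    g_real = torch.autograd.grad(F.cross_entropy(model(real_x), real_y), params)
    g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y),
                                params, create_graph=True)
    # Squared L2 distance between the two gradient sets (cosine is also common).
    loss = sum(((gr.detach() - gs) ** 2).sum() for gr, gs in zip(g_real, g_syn))
    opt_syn.zero_grad()
    loss.backward()
    opt_syn.step()
    return loss.item()

# One illustrative outer step on a fake "real" batch.
grad_match_step(torch.randn(64, 3, 32, 32), torch.randint(0, num_classes, (64,)))
```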
InceptionNeXt: When Inception Meets ConvNeXt
Inspired by the long-range modeling ability of ViTs, large-kernel
convolutions have recently been widely studied and adopted to enlarge the
receptive field and improve model performance, as in the remarkable ConvNeXt,
which employs 7x7 depthwise convolution. Although such a depthwise operator only
consumes a few FLOPs, it largely harms the model efficiency on powerful
computing devices due to the high memory access cost. For example, ConvNeXt-T
has FLOPs similar to those of ResNet-50 but achieves only about 60% of its
throughput when trained on A100 GPUs with full precision. Although reducing the kernel size of ConvNeXt
can improve speed, it results in significant performance degradation. It is
still unclear how to speed up large-kernel-based CNN models while preserving
their performance. To tackle this issue, inspired by Inceptions, we propose to
decompose large-kernel depthwise convolution into four parallel branches along
the channel dimension, i.e., a small square kernel, two orthogonal band
kernels, and an identity mapping. With this new Inception depthwise
convolution, we build a series of networks, named InceptionNeXt, which not
only enjoy high throughput but also maintain competitive performance. For
instance, InceptionNeXt-T achieves 1.6x higher training throughput than
ConvNeXt-T and a 0.2% top-1 accuracy improvement on ImageNet-1K. We anticipate InceptionNeXt can
serve as an economical baseline for future architecture design to reduce carbon
footprint. Code is available at https://github.com/sail-sg/inceptionnext.
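The channel-wise decomposition described above can be sketched in PyTorch as follows; the branch ratio and kernel sizes used here are plausible assumptions rather than the paper's exact settings.

```python
# Inception depthwise convolution, minimal sketch (hyperparameters assumed).
import torch
import torch.nn as nn

class InceptionDWConv2d(nn.Module):
    """Split channels into four branches: identity, small square, two bands."""
    def __init__(self, channels, square_kernel=3, band_kernel=11, branch_ratio=0.125):
        super().__init__()
        gc = int(channels * branch_ratio)        # channels per conv branch
        self.dw_hw = nn.Conv2d(gc, gc, square_kernel,
                               padding=square_kernel // 2, groups=gc)
        self.dw_w = nn.Conv2d(gc, gc, (1, band_kernel),
                              padding=(0, band_kernel // 2), groups=gc)
        self.dw_h = nn.Conv2d(gc, gc, (band_kernel, 1),
                              padding=(band_kernel // 2, 0), groups=gc)
        self.splits = (channels - 3 * gc, gc, gc, gc)

    def forward(self, x):                        # x: (B, C, H, W)
        x_id, x_hw, x_w, x_h = torch.split(x, self.splits, dim=1)
        return torch.cat((x_id, self.dw_hw(x_hw),
                          self.dw_w(x_w), self.dw_h(x_h)), dim=1)

# Shape-preserving: InceptionDWConv2d(64)(torch.randn(1, 64, 56, 56)) -> (1, 64, 56, 56)
```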
Task Residual for Tuning Vision-Language Models
Large-scale vision-language models (VLMs) pre-trained on billion-level data
have learned general visual representations and broad visual concepts. In
principle, the well-learned knowledge structure of VLMs should be inherited
appropriately when they are transferred to downstream tasks with limited data.
However, most existing efficient transfer learning (ETL) approaches for VLMs
either damage or are excessively biased towards the prior knowledge, e.g.,
prompt tuning (PT) discards the pre-trained text-based classifier and builds a
new one, while adapter-style tuning (AT) fully relies on the pre-trained
features. To address this, we propose a new efficient tuning approach for VLMs
named Task Residual Tuning (TaskRes), which performs directly on the text-based
classifier and explicitly decouples the prior knowledge of the pre-trained
models and new knowledge regarding a target task. Specifically, TaskRes keeps
the original classifier weights from the VLMs frozen and obtains a new
classifier for the target task by tuning a set of prior-independent parameters
as a residual to the original one, which enables reliable prior knowledge
preservation and flexible task-specific knowledge exploration. The proposed
TaskRes is simple yet effective: it significantly outperforms previous ETL
methods (e.g., PT and AT) on 11 benchmark datasets while requiring minimal
implementation effort. Our code is available at
https://github.com/geekyutao/TaskRes.
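The core idea admits a compact sketch: keep the pre-trained text-based classifier frozen and learn only an additive residual on top of it. The scaling factor `alpha` and the cosine-similarity logits below are assumptions based on common CLIP-style practice, not necessarily the authors' exact implementation.

```python
# Task Residual Tuning, minimal sketch (alpha and cosine head are assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskResClassifier(nn.Module):
    def __init__(self, base_weights, alpha=0.5):
        super().__init__()
        self.register_buffer("base", base_weights)   # frozen prior: (K, D)
        self.residual = nn.Parameter(torch.zeros_like(base_weights))  # tuned
        self.alpha = alpha                           # residual scaling factor

    def forward(self, image_features):               # (B, D)
        w = F.normalize(self.base + self.alpha * self.residual, dim=-1)
        f = F.normalize(image_features, dim=-1)
        return f @ w.t()                             # cosine logits: (B, K)

# Only `residual` receives gradients; the pre-trained prior is preserved exactly.
head = TaskResClassifier(torch.randn(10, 512))
logits = head(torch.randn(4, 512))
```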
EqMotion: Equivariant Multi-agent Motion Prediction with Invariant Interaction Reasoning
Learning to predict agent motions with relationship reasoning is important
for many applications. In motion prediction tasks, maintaining motion
equivariance under Euclidean geometric transformations and invariance of agent
interaction is a critical and fundamental principle. However, such equivariance
and invariance properties are overlooked by most existing methods. To fill this
gap, we propose EqMotion, an efficient equivariant motion prediction model with
invariant interaction reasoning. To achieve motion equivariance, we propose an
equivariant geometric feature learning module to learn a Euclidean
transformable feature through dedicated designs of equivariant operations. To
reason about agents' interactions, we propose an invariant interaction
reasoning module to achieve more stable interaction modeling. To further promote more
comprehensive motion features, we propose an invariant pattern feature learning
module to learn an invariant pattern feature, which cooperates with the
equivariant geometric feature to enhance network expressiveness. We conduct
experiments for the proposed model on four distinct scenarios: particle
dynamics, molecule dynamics, human skeleton motion prediction and pedestrian
trajectory prediction. Experimental results show that our method is not only
generally applicable but also achieves state-of-the-art prediction
performance on all four tasks, with improvements of 24.0%, 30.1%, 8.6%, and
9.2%, respectively. Code is
available at https://github.com/MediaBrain-SJTU/EqMotion.
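To illustrate the equivariance principle the abstract builds on, here is a toy layer (not EqMotion's actual module) that mixes agents' relative positions; rotating and translating the input rotates and translates the output identically, which the sanity check verifies numerically.

```python
# Toy Euclidean-equivariant update over agent coordinates (illustrative only).
import torch

def equivariant_update(coords, weights):
    """coords: (N, 3) agent positions; weights: (N, N) geometry-invariant mixing."""
    center = coords.mean(dim=0, keepdim=True)
    rel = coords - center           # relative coordinates: translation-invariant
    return center + weights @ rel   # linear mixing commutes with rotations

# Sanity check of equivariance: f(x R^T + t) == f(x) R^T + t.
N = 5
x = torch.randn(N, 3)
W = torch.softmax(torch.randn(N, N), dim=-1)
rot, _ = torch.linalg.qr(torch.randn(3, 3))    # random orthogonal matrix
t = torch.randn(1, 3)
lhs = equivariant_update(x @ rot.T + t, W)
rhs = equivariant_update(x, W) @ rot.T + t
assert torch.allclose(lhs, rhs, atol=1e-5)
```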
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from combinations of these
capabilities. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet.
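The LLM-based grading idea can be sketched as below; the prompt wording, the `gpt-4` model choice, and the bare-float output format are placeholders here, since the actual few-shot prompt and scoring protocol are defined in the MM-Vet repo.

```python
# Open-ended answer grading with an LLM, minimal sketch (prompt/model assumed).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GRADE_PROMPT = """Compare the model answer to the ground truth and output only
a correctness score between 0.0 and 1.0; partial credit is allowed.
Question: {question}
Ground truth: {ground_truth}
Model answer: {answer}
Score:"""

def llm_score(question: str, ground_truth: str, answer: str) -> float:
    """Ask an LLM to grade one open-ended answer; returns a score in [0, 1]."""
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0.0,
        messages=[{"role": "user", "content": GRADE_PROMPT.format(
            question=question, ground_truth=ground_truth, answer=answer)}],
    )
    # The real evaluator uses few-shot examples to enforce a parseable format.
    return float(resp.choices[0].message.content.strip())
```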
MetaFormer Is Actually What You Need for Vision
Transformers have shown great potential in computer vision tasks. A common
belief is that their attention-based token mixer module contributes most to
their competence. However, recent works show that the attention-based module
in Transformers can be replaced by spatial MLPs and that the resulting models
still perform quite well. Based on this observation, we hypothesize that the general
architecture of the Transformers, instead of the specific token mixer module,
is more essential to the model's performance. To verify this, we deliberately
replace the attention module in Transformers with an embarrassingly simple
spatial pooling operator to conduct only basic token mixing. Surprisingly, we
observe that the derived model, termed PoolFormer, achieves competitive
performance on multiple computer vision tasks. For example, on ImageNet-1K,
PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned Vision
Transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with
35%/52% fewer parameters and 50%/62% fewer MACs. The effectiveness of
PoolFormer verifies our hypothesis and urges us to initiate the concept of
"MetaFormer", a general architecture abstracted from Transformers without
specifying the token mixer. Based on extensive experiments, we argue that
MetaFormer is the key player in achieving superior results for recent
Transformer and MLP-like models on vision tasks. This work calls for more
future research dedicated to improving MetaFormer instead of focusing on the
token mixer modules. Additionally, our proposed PoolFormer could serve as a
starting baseline for future MetaFormer architecture design. Code is available
at https://github.com/sail-sg/poolformer.
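The pooling token mixer itself admits a very small sketch, consistent with the paper's description; `pool_size=3` is an assumption. Subtracting the input keeps the operator comparable to attention or MLP mixers that sit inside residual connections, since the surrounding residual branch adds the input back.

```python
# PoolFormer-style token mixer, minimal sketch (pool size assumed).
import torch
import torch.nn as nn

class Pooling(nn.Module):
    """Basic token mixing via average pooling; the identity is subtracted."""
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1,
                                 padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x):           # x: (B, C, H, W), shape-preserving
        return self.pool(x) - x

# Drop-in replacement for attention in a MetaFormer-style block.
mixer = Pooling()
out = mixer(torch.randn(1, 64, 56, 56))
```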