LPN: Language-guided Prototypical Network for few-shot classification
Few-shot classification aims to adapt to new tasks with limited labeled
examples. To make full use of the accessible data, recent methods explore
suitable similarity measures between query and support images and learn better
high-dimensional features through meta-training and pre-training strategies.
However, the potential of multimodal information has barely been explored, even
though it may bring promising improvements for few-shot classification. In this
paper, we propose a Language-guided Prototypical Network (LPN) for few-shot
classification, which leverages the complementarity of vision and language
modalities via two parallel branches. Concretely, to introduce the language
modality into a visual task with limited samples, we use a pre-trained text
encoder to extract class-level text features directly from class names, while
images are processed with a conventional image encoder. Then, a
language-guided decoder is introduced to obtain text features corresponding to
each image by aligning class-level features with visual features. In addition,
to take advantage of both class-level features and prototypes, we build a
refined prototypical head that generates robust prototypes in the text branch
for the subsequent similarity measurement. Finally, we aggregate the visual and
text logits to calibrate the deviation of a single modality. Extensive
experiments demonstrate the competitiveness of LPN against state-of-the-art
methods on benchmark datasets.
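For concreteness, a minimal sketch of the dual-branch prediction described above, assuming cosine-similarity logits and a simple weighted sum for the final aggregation (the abstract does not specify the fusion rule); the function names and the fuse_weight parameter are illustrative, not the authors' settings:

```python
import torch
import torch.nn.functional as F

def prototypes(support_feats, support_labels, n_way):
    # Class prototype = mean of the support features belonging to that class.
    return torch.stack([support_feats[support_labels == c].mean(0)
                        for c in range(n_way)])

def cosine_logits(query_feats, protos, temperature=10.0):
    # Scaled cosine similarity between each query and each class prototype.
    q = F.normalize(query_feats, dim=-1)
    p = F.normalize(protos, dim=-1)
    return temperature * q @ p.t()

def dual_branch_predict(visual_q, visual_protos, text_q, text_protos, fuse_weight=0.5):
    # Visual branch: image features vs. visual prototypes.
    vis_logits = cosine_logits(visual_q, visual_protos)
    # Text branch: language-guided per-image features vs. refined text prototypes.
    txt_logits = cosine_logits(text_q, text_protos)
    # Aggregate the two branches to calibrate the deviation of a single modality.
    return fuse_weight * vis_logits + (1 - fuse_weight) * txt_logits
```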
Multimodal Prototype-Enhanced Network for Few-Shot Action Recognition
Current methods for few-shot action recognition mainly fall into the
metric-learning framework following ProtoNet. However, they either ignore the
effect of representative prototypes or fail to adequately enhance the
prototypes with multimodal information. In this work, we propose a novel
Multimodal Prototype-Enhanced Network (MORN) that uses the semantic information
of label texts as multimodal information to enhance prototypes, consisting of
two modality
flows. A CLIP visual encoder is introduced in the visual flow, and visual
prototypes are computed by the Temporal-Relational CrossTransformer (TRX)
module. A frozen CLIP text encoder is introduced in the text flow, and a
semantic-enhanced module is used to enhance text features; text prototypes are
obtained after inflation. The final multimodal prototypes are then computed by a
multimodal prototype-enhanced module. Moreover, there exists no evaluation
metric for assessing the quality of prototypes. To the best of our knowledge, we
are the first to propose a prototype evaluation metric, Prototype Similarity
Difference (PRIDE), which evaluates how well prototypes discriminate between
different categories. We conduct extensive
experiments on four popular datasets. MORN achieves state-of-the-art results on
HMDB51, UCF101, Kinetics and SSv2. MORN also performs well on PRIDE, and we
explore the correlation between PRIDE and accuracy.
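Since the abstract does not give PRIDE's exact formula, the following is only an illustrative reading of a prototype-similarity-difference style quantity: each query's similarity to its true-class prototype minus its highest similarity to any competing prototype, averaged over queries. The function name and this definition are assumptions, not the paper's metric:

```python
import torch
import torch.nn.functional as F

def similarity_difference(query_feats, query_labels, protos):
    # Cosine similarity of every query to every class prototype: shape (Q, C).
    sims = F.normalize(query_feats, dim=-1) @ F.normalize(protos, dim=-1).t()
    # Similarity to the true-class prototype.
    own = sims.gather(1, query_labels.view(-1, 1)).squeeze(1)
    # Mask out the true class and take the best competing prototype.
    masked = sims.scatter(1, query_labels.view(-1, 1), float('-inf'))
    hardest_other = masked.max(dim=1).values
    # Larger values mean the prototypes separate categories more cleanly.
    return (own - hardest_other).mean()
```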