Res2Net: A New Multi-scale Backbone Architecture
Representing features at multiple scales is of great importance for numerous
vision tasks. Recent advances in backbone convolutional neural networks (CNNs)
continually demonstrate stronger multi-scale representation ability, leading to
consistent performance gains on a wide range of applications. However, most
existing methods represent the multi-scale features in a layer-wise manner. In
this paper, we propose a novel building block for CNNs, namely Res2Net, by
constructing hierarchical residual-like connections within one single residual
block. The Res2Net represents multi-scale features at a granular level and
increases the range of receptive fields for each network layer. The proposed
Res2Net block can be plugged into the state-of-the-art backbone CNN models,
e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these
models and demonstrate consistent performance gains over baseline models on
widely-used datasets, e.g., CIFAR-100 and ImageNet. Ablation studies and
experimental results on representative computer vision tasks, i.e., object
detection, class activation mapping, and salient object detection, further
verify the superiority of Res2Net over state-of-the-art baseline methods. The
source code and trained models are available at
https://mmcheng.net/res2net/.
Comment: 11 pages, 7 figures
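The abstract describes hierarchical residual-like connections within a single residual block. The following minimal PyTorch sketch is written from that description only: the 3x3 stage is split into channel groups that are wired hierarchically before being concatenated. The class name, layer ordering, and hyper-parameters are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class Res2NetBlock(nn.Module):
    """Illustrative Res2Net-style bottleneck: the 3x3 stage is split into
    `scale` channel groups with hierarchical residual-like connections.
    All details here are assumptions made for illustration."""

    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        assert channels % scale == 0, "channels must divide evenly into scale groups"
        self.scale = scale
        self.width = channels // scale
        self.conv_in = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn_in = nn.BatchNorm2d(channels)
        # One 3x3 conv per group except the first, which is passed through untouched.
        self.convs = nn.ModuleList(
            nn.Conv2d(self.width, self.width, kernel_size=3, padding=1, bias=False)
            for _ in range(scale - 1)
        )
        self.bns = nn.ModuleList(nn.BatchNorm2d(self.width) for _ in range(scale - 1))
        self.conv_out = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn_out = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn_in(self.conv_in(x)))
        groups = torch.chunk(out, self.scale, dim=1)
        outputs = [groups[0]]                      # first group: identity mapping
        prev = None
        for i, (conv, bn) in enumerate(zip(self.convs, self.bns), start=1):
            # hierarchical connection: each group also receives the previous group's output
            inp = groups[i] if prev is None else groups[i] + prev
            prev = self.relu(bn(conv(inp)))
            outputs.append(prev)
        out = torch.cat(outputs, dim=1)            # later groups see progressively larger receptive fields
        out = self.bn_out(self.conv_out(out))
        return self.relu(out + identity)

# Quick shape check on a dummy feature map.
block = Res2NetBlock(channels=64, scale=4)
print(block(torch.randn(2, 64, 32, 32)).shape)    # torch.Size([2, 64, 32, 32])
```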
Harvesting Information from Captions for Weakly Supervised Semantic Segmentation
Since acquiring pixel-wise annotations for training convolutional neural
networks for semantic image segmentation is time-consuming, weakly supervised
approaches that only require class tags have been proposed. In this work, we
propose another form of supervision, namely image captions such as those found
on the Internet. These captions have two advantages: unlike the clean class
tags used by current weakly supervised approaches, they require no additional
curation, and they provide textual context for the classes
present in an image. To leverage such textual context, we deploy a multi-modal
network that learns a joint embedding of the visual representation of the image
and the textual representation of the caption. The network estimates text
activation maps (TAMs) for class names as well as compound concepts, i.e.
combinations of nouns and their attributes. The TAMs of compound concepts
describing classes of interest substantially improve the quality of the
estimated class activation maps, which are then used to train a network for
semantic segmentation. We evaluate our method on the COCO dataset, where it
achieves state-of-the-art results for weakly supervised image segmentation.
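The abstract mentions a joint embedding of the image and its caption, from which text activation maps (TAMs) are estimated. Below is a minimal, hypothetical PyTorch sketch of that general idea: per-location visual features and a phrase embedding are projected into a shared space and scored at every spatial position. The module name, dimensions, and cosine-similarity scoring rule are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextActivationMap(nn.Module):
    """Toy sketch of a text activation map: project visual features and a
    phrase embedding into a joint space and score their similarity per
    location. All dimensions and the scoring rule are assumptions."""

    def __init__(self, visual_dim: int = 2048, text_dim: int = 300, joint_dim: int = 256):
        super().__init__()
        self.visual_proj = nn.Conv2d(visual_dim, joint_dim, kernel_size=1)
        self.text_proj = nn.Linear(text_dim, joint_dim)

    def forward(self, feat_map, phrase_emb):
        # feat_map: (B, visual_dim, H, W); phrase_emb: (B, text_dim)
        v = F.normalize(self.visual_proj(feat_map), dim=1)   # (B, D, H, W)
        t = F.normalize(self.text_proj(phrase_emb), dim=1)   # (B, D)
        # Cosine similarity between the phrase and every spatial location.
        return torch.einsum("bdhw,bd->bhw", v, t)            # (B, H, W)

tam_head = TextActivationMap()
scores = tam_head(torch.randn(2, 2048, 14, 14), torch.randn(2, 300))
print(scores.shape)  # torch.Size([2, 14, 14])
```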
Camouflaged Object Detection with Feature Grafting and Distractor Aware
The task of Camouflaged Object Detection (COD) aims to accurately segment
camouflaged objects that are blended into their surroundings, which is more
challenging than ordinary detection because the textures of the target and the
background are visually indistinguishable. In this paper, we propose a novel
Feature Grafting and Distractor Aware network (FDNet) to handle the COD task.
Specifically, we use a CNN and a Transformer to encode multi-scale images in
parallel. To better exploit the advantages of the two encoders, we design a
cross-attention-based Feature Grafting Module to graft features extracted from
the Transformer branch into the CNN branch, after which the features are
aggregated in the Feature Fusion Module. A Distractor Aware Module is designed
to explicitly model the two possible distractors in the COD task to refine the
coarse camouflage map. We also propose ACOD2K, the largest artificial
camouflaged object dataset, which contains 2000 annotated images. We
conducted extensive experiments on four widely used benchmark datasets and the
ACOD2K dataset. The results show that our method significantly outperforms
other state-of-the-art methods. The code and the ACOD2K will be available at
https://github.com/syxvision/FDNet.
Comment: ICME2023 paper
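The abstract describes a cross-attention-based Feature Grafting Module that injects Transformer-branch features into the CNN branch. The PyTorch sketch below shows one plausible way to wire such a graft, with flattened CNN features as queries and Transformer tokens as keys and values; the module name, shapes, and residual fusion are assumptions, not the FDNet implementation.

```python
import torch
import torch.nn as nn

class FeatureGrafting(nn.Module):
    """Illustrative cross-attention graft: CNN features act as queries and
    Transformer features provide keys/values, so Transformer context is
    'grafted' onto the CNN branch. Sizes and wiring are assumptions."""

    def __init__(self, cnn_dim: int = 256, trans_dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(cnn_dim, heads, kdim=trans_dim,
                                          vdim=trans_dim, batch_first=True)
        self.norm = nn.LayerNorm(cnn_dim)

    def forward(self, cnn_feat, trans_feat):
        # cnn_feat: (B, C, H, W) from the CNN branch; trans_feat: (B, N, C_t) token sequence.
        b, c, h, w = cnn_feat.shape
        q = cnn_feat.flatten(2).transpose(1, 2)              # (B, H*W, C) queries
        grafted, _ = self.attn(q, trans_feat, trans_feat)    # cross-attention over Transformer tokens
        out = self.norm(q + grafted)                         # residual fusion of the two branches
        return out.transpose(1, 2).reshape(b, c, h, w)

graft = FeatureGrafting()
fused = graft(torch.randn(2, 256, 22, 22), torch.randn(2, 484, 256))
print(fused.shape)  # torch.Size([2, 256, 22, 22])
```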