102,890 research outputs found
Feature Selective Networks for Object Detection
Objects for detection usually have distinct characteristics in different
sub-regions and different aspect ratios. However, in prevalent two-stage object
detection methods, Region-of-Interest (RoI) features are extracted by RoI
pooling with little emphasis on these translation-variant feature components.
We present feature selective networks to reform the feature representations of
RoIs by exploiting their disparities among sub-regions and aspect ratios. Our
network produces the sub-region attention bank and aspect ratio attention bank
for the whole image. The RoI-based sub-region attention map and aspect ratio
attention map are selectively pooled from the banks, and then used to refine
the original RoI features for RoI classification. Equipped with a lightweight
detection subnetwork, our network achieves a consistent boost in detection
performance with general ConvNet backbones (ResNet-101, GoogLeNet and
VGG-16). Without bells and whistles, our detectors equipped with ResNet-101
achieve more than 3% mAP improvement over their counterparts on the PASCAL VOC
2007, PASCAL VOC 2012 and MS COCO datasets.
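The selective pooling step described above can be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions: the bank layout (one channel per sub-region), the k x k granularity, average pooling within each cell, and all function names are illustrative choices, not the paper's exact design.

```python
import numpy as np

def roi_subregion_attention(bank, roi, k=3):
    """Selectively pool a k x k sub-region attention map for one RoI.

    bank: (k*k, H, W) sub-region attention bank for the whole image
          (hypothetical layout: channel i*k+j is dedicated to cell (i, j)).
    roi:  (x0, y0, x1, y1) box in feature-map coordinates.
    Returns a (k, k) map whose cell (i, j) is pooled from the bank
    channel dedicated to that sub-region.
    """
    x0, y0, x1, y1 = roi
    xs = np.linspace(x0, x1, k + 1).astype(int)
    ys = np.linspace(y0, y1, k + 1).astype(int)
    att = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            # Pool only inside cell (i, j), and only from its own channel.
            cell = bank[i * k + j,
                        ys[i]:max(ys[i + 1], ys[i] + 1),
                        xs[j]:max(xs[j + 1], xs[j] + 1)]
            att[i, j] = cell.mean()  # average-pool within the cell
    return att

def refine_roi_features(roi_feats, att):
    """Refine (C, k, k) RoI-pooled features by elementwise attention."""
    return roi_feats * att[None]  # broadcast the (k, k) map over channels
```

An aspect-ratio attention map would be pooled from its own bank in the same selective fashion, with the refined features then fed to RoI classification.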
Plug-in, Trainable Gate for Streamlining Arbitrary Neural Networks
Architecture optimization, which is a technique for finding an efficient
neural network that meets certain requirements, generally reduces to a set of
multiple-choice selection problems among alternative sub-structures or
parameters. The discrete nature of the selection problem, however, makes this
optimization difficult. To tackle this problem we introduce a novel concept of
a trainable gate function. The trainable gate function, which confers a
differentiable property to discrete-valued variables, allows us to directly
optimize loss functions that include non-differentiable discrete values such as
0-1 selection. The proposed trainable gate can be applied to pruning. Pruning
can be carried out simply by appending the proposed trainable gate functions
to each intermediate output tensor and then fine-tuning the overall model with
any gradient-based training method. The proposed method thus jointly optimizes
the selection of pruned channels and the weights of the pruned model. Our
experimental results demonstrate that
the proposed method efficiently optimizes arbitrary neural networks in various
tasks such as image classification, style transfer, optical flow estimation,
and neural machine translation.
Comment: Accepted to AAAI 2020 (Poster)
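The gating idea can be illustrated with a toy sketch. Note the surrogate gradient here is a straight-through-style stand-in chosen for brevity; the paper's trainable gate function is its own construction, and every name below is hypothetical.

```python
import numpy as np

def trainable_gate(w, grad_slope=1.0):
    """Toy 0-1 gate on a scalar parameter w (straight-through sketch,
    not the paper's exact gate function).

    Forward: a hard step, so the gate output is a discrete 0-1 selection.
    Backward: treat the gate as (scaled) identity, so a gradient can
    still flow to w despite the non-differentiable forward pass.
    Returns (gate_value, grad_fn), where grad_fn maps the upstream
    gradient to the gradient w.r.t. w.
    """
    g = (w >= 0.0).astype(float)                      # discrete 0/1 output
    grad_fn = lambda upstream: grad_slope * upstream  # surrogate gradient
    return g, grad_fn

def prune_channels(x, gate_params):
    """Append one gate per channel of an intermediate tensor x of shape
    (C, H, W): channel c survives iff its gate opens."""
    gates = np.array([trainable_gate(w)[0] for w in gate_params])
    return x * gates[:, None, None]
```

Because the gate parameters receive gradients through the surrogate, fine-tuning the model end to end moves them toward keeping useful channels open and closing the rest, which is the joint selection-plus-fine-tuning behavior described above.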
- …