Accuracy Booster: Performance Boosting using Feature Map Re-calibration
Convolutional Neural Networks (CNNs) have been extremely successful in solving
intensive computer vision tasks. The convolutional filters used in CNNs have
played a major role in this success by extracting useful features from the
inputs. Recently, researchers have tried to boost the performance of CNNs by
re-calibrating the feature maps produced by these filters, e.g.,
Squeeze-and-Excitation Networks (SENets). These approaches achieve better
performance by exciting the important channels or feature maps while
diminishing the rest. However, in the process, architectural complexity has
increased. We propose an architectural block that introduces much lower
complexity than the existing methods of CNN performance boosting while
performing significantly better than them. We carry out experiments on the
CIFAR, ImageNet and MS-COCO datasets, and show that the proposed block can
challenge the state-of-the-art results. On classification, our method boosts
the ResNet-50 architecture to perform comparably to ResNet-152, a network
three times deeper. We also show experimentally that
our method is not limited to classification but also generalizes well to other
tasks such as object detection.
Comment: IEEE Winter Conference on Applications of Computer Vision (WACV), 202
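The abstract does not spell out the Accuracy Booster block itself, so as a point of reference, here is a minimal PyTorch sketch of the SENet-style channel re-calibration it improves upon. The module structure and the reduction ratio of 16 follow the original SENet design, not this paper's lower-complexity block.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """SENet-style channel re-calibration (baseline only; the paper's
    Accuracy Booster block is a lower-complexity variant whose exact
    form is not given in the abstract)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each feature map to a scalar.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a bottleneck MLP produces one gate per channel.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        # Re-calibrate: scale important channels up, diminish the rest.
        return x * gates
```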
Mobile Video Object Detection with Temporally-Aware Feature Maps
This paper introduces an online model for object detection in videos designed
to run in real-time on low-powered mobile and embedded devices. Our approach
combines fast single-image object detection with convolutional long short-term
memory (LSTM) layers to create an interwoven recurrent-convolutional
architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that
significantly reduces computational cost compared to regular LSTMs. Our network
achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate
feature maps across frames. This approach is substantially faster than existing
detection methods in video, outperforming the fastest single-frame models in
model size and computational cost while attaining accuracy comparable to much
more expensive single-frame models on the ImageNet VID 2015 dataset. Our model
reaches a real-time inference speed of up to 15 FPS on a mobile CPU.
Comment: In CVPR 201
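As a rough illustration of the idea, not the paper's exact layer, here is a minimal PyTorch sketch of a convolutional LSTM cell with a bottleneck projection: the input and previous hidden state are first fused into a narrower bottleneck, and all gates are computed from it. The channel sizes and plain 3x3 convolutions are assumptions; the actual Bottleneck-LSTM uses depthwise-separable convolutions for further savings.

```python
import torch
import torch.nn as nn

class BottleneckConvLSTMCell(nn.Module):
    """Sketch of a bottlenecked convolutional LSTM cell. Gates are computed
    from a reduced-channel bottleneck rather than from the full input, which
    is where the computational savings over a regular ConvLSTM come from."""

    def __init__(self, in_ch: int, hidden_ch: int):
        super().__init__()
        # Bottleneck projection: fuse input + previous hidden state down to
        # hidden_ch channels before computing any gates.
        self.bottleneck = nn.Conv2d(in_ch + hidden_ch, hidden_ch, 3, padding=1)
        # One conv produces all four gates (i, f, o, g) from the bottleneck.
        self.gates = nn.Conv2d(hidden_ch, 4 * hidden_ch, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        b = torch.relu(self.bottleneck(torch.cat([x, h], dim=1)))
        i, f, o, g = torch.chunk(self.gates(b), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        # h doubles as the refined feature map propagated to the next frame.
        return h, (h, c)
```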