Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in
computer vision, has received great attention in recent years. Its development
in the past two decades can be regarded as an epitome of computer vision
history. If we regard today's object detection as a technical art empowered by
deep learning, then turning back the clock 20 years we would witness the wisdom
of the cold-weapon era. This paper extensively reviews 400+
papers of object detection in the light of its technical evolution, spanning
over a quarter-century's time (from the 1990s to 2019). A number of topics have
been covered in this paper, including the milestone detectors in history,
detection datasets, metrics, fundamental building blocks of the detection
system, speed-up techniques, and recent state-of-the-art detection methods.
This paper also reviews some important detection applications, such as
pedestrian detection, face detection, text detection, etc., and makes an
in-depth analysis of their challenges as well as technical improvements in
recent years.
Comment: This work has been submitted to the IEEE TPAMI for possible
publication.
LR-CNN: Local-aware Region CNN for Vehicle Detection in Aerial Imagery
State-of-the-art object detection approaches such as Fast/Faster R-CNN, SSD,
or YOLO have difficulties detecting dense, small targets with arbitrary
orientation in large aerial images. The main reason is that using interpolation
to align RoI features can result in a lack of accuracy or even loss of location
information. We present the Local-aware Region Convolutional Neural Network
(LR-CNN), a novel two-stage approach for vehicle detection in aerial imagery.
We enhance translation invariance to detect dense vehicles and address the
boundary quantization issue amongst dense vehicles by aggregating the
high-precision RoIs' features. Moreover, we resample high-level semantic pooled
features, making them regain location information from the features of a
shallower convolutional block. This strengthens the local feature invariance
for the resampled features and enables detecting vehicles in an arbitrary
orientation. The local feature invariance enhances the learning ability of the
focal loss function, and the focal loss further helps to focus on the hard
examples. Taken together, our method better addresses the challenges of aerial
imagery. We evaluate our approach on several challenging datasets (VEDAI,
DOTA), demonstrating a significant improvement over state-of-the-art methods.
We demonstrate the good generalization ability of our approach on the DLR 3K
dataset.
Comment: 8 pages.
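The focal-loss behavior the abstract relies on (down-weighting easy examples so training concentrates on hard ones) can be sketched in plain Python. This is a minimal illustration of the general binary focal loss; the alpha/gamma defaults follow Lin et al.'s RetinaNet formulation and are assumptions here, since the abstract does not state the values LR-CNN uses:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive class.
    y: ground-truth label (1 or 0).
    alpha, gamma: balancing and focusing parameters (defaults taken
    from the RetinaNet paper; LR-CNN's settings are not given in the
    abstract).
    """
    # p_t is the probability assigned to the true class.
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The (1 - p_t)**gamma factor shrinks the loss of well-classified
    # examples, so hard examples dominate the gradient.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.95, 1)  # confident and correct: tiny loss
hard = focal_loss(0.10, 1)  # confident and wrong: large loss
```

With gamma = 2, the confidently correct example contributes orders of magnitude less loss than the misclassified one, which is the "focus on hard examples" effect described above.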
M2Det: A Single-Shot Object Detector based on Multi-Level Feature Pyramid Network
Feature pyramids are widely exploited by both the state-of-the-art one-stage
object detectors (e.g., DSSD, RetinaNet, RefineDet) and the two-stage object
detectors (e.g., Mask R-CNN, DetNet) to alleviate the problem arising from
scale variation across object instances. Although these object detectors with
feature pyramids achieve encouraging results, they have some limitations
because they simply construct the feature pyramid from the inherent
multi-scale, pyramidal architecture of backbones that are actually designed
for the object classification task. In this work, we present a
method called Multi-Level Feature Pyramid Network (MLFPN) to construct more
effective feature pyramids for detecting objects of different scales. First, we
fuse multi-level features (i.e., multiple layers) extracted by the backbone as
the
base feature. Second, we feed the base feature into a block of alternating
joint Thinned U-shape Modules and Feature Fusion Modules and exploit the
decoder layers of each U-shape module as the features for detecting objects.
Finally, we gather up the decoder layers with equivalent scales (sizes) to
develop a feature pyramid for object detection, in which every feature map
consists of the layers (features) from multiple levels. To evaluate the
effectiveness of the proposed MLFPN, we design and train a powerful end-to-end
one-stage object detector we call M2Det by integrating it into the architecture
of SSD, which achieves better detection performance than state-of-the-art
one-stage detectors. Specifically, on the MS-COCO benchmark, M2Det achieves an
AP of 41.0 at a speed of 11.8 FPS with a single-scale inference strategy and
an AP of 44.2 with a multi-scale inference strategy, which are new
state-of-the-art results among one-stage detectors. The code will be made
available at \url{https://github.com/qijiezhao/M2Det}.
Comment: AAAI19.
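The gather-up step described above (collecting equal-scale decoder outputs from every level into one pyramid level) can be sketched as a toy Python snippet. The function name, the level/scale counts, and the string-tagged "feature maps" are all illustrative stand-ins, not the paper's implementation, which concatenates real tensors along the channel dimension:

```python
def gather_pyramid(num_levels, scales):
    """Toy sketch of MLFPN's gather-up step.

    Each level (one Thinned U-shape Module in the paper) produces a
    decoder output at every scale; the pyramid groups equal-scale
    outputs from all levels into a single multi-level feature map.
    Features are represented as tag strings for illustration only.
    """
    # One decoder output per (level, scale) pair.
    decoder_outputs = [
        [f"feat(level={lvl}, scale={s})" for s in scales]
        for lvl in range(num_levels)
    ]
    # Group equal-scale outputs across levels: each pyramid level is a
    # stand-in for a channel-wise concatenation of num_levels features.
    pyramid = {
        s: [decoder_outputs[lvl][i] for lvl in range(num_levels)]
        for i, s in enumerate(scales)
    }
    return pyramid

# Hypothetical configuration: 3 levels, 3 spatial scales.
pyramid = gather_pyramid(num_levels=3, scales=[40, 20, 10])
```

Each entry of `pyramid` then holds one feature per level, matching the abstract's statement that "every feature map consists of the layers (features) from multiple levels."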