LR-CNN: Local-aware Region CNN for Vehicle Detection in Aerial Imagery
State-of-the-art object detection approaches such as Fast/Faster R-CNN, SSD,
or YOLO have difficulty detecting dense, small targets with arbitrary
orientation in large aerial images. The main reason is that using interpolation
to align RoI features can cause a loss of accuracy or even of location
information. We present the Local-aware Region Convolutional Neural Network
information. We present the Local-aware Region Convolutional Neural Network
(LR-CNN), a novel two-stage approach for vehicle detection in aerial imagery.
We enhance translation invariance for detecting dense vehicles and address the
boundary quantization issue among them by aggregating high-precision RoI
features. Moreover, we resample the high-level semantic pooled features so that
they regain location information from the features of a shallower convolutional
block. This strengthens the local feature invariance of the resampled features
and enables detecting vehicles at arbitrary orientations. The local feature
invariance in turn enhances the learning ability of the focal loss function,
which further helps to focus training on hard examples. Taken together, our
method better addresses the challenges of aerial imagery. We evaluate our
approach on the challenging VEDAI and DOTA datasets, demonstrating a
significant improvement over state-of-the-art methods, and we show that it
generalizes well to the DLR 3K dataset.
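As a concrete illustration of the focal loss this abstract builds on, the following is a minimal sketch of the standard binary focal loss (Lin et al., 2017); the alpha and gamma defaults are the usual illustrative values, not settings taken from LR-CNN.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Standard binary focal loss: scales each example's cross-entropy by
    # (1 - p_t)^gamma so well-classified (easy) examples contribute less,
    # which is how the loss "focuses on the hard examples".
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)              # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class-balance weight
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```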
Joint-SRVDNet: Joint Super Resolution and Vehicle Detection Network
In many domestic and military applications, aerial vehicle detection and
super-resolution algorithms are frequently developed and applied independently.
However, aerial vehicle detection on super-resolved images remains a
challenging task due to the lack of discriminative information in the
super-resolved images. To address this problem, we propose a Joint
Super-Resolution and Vehicle Detection Network (Joint-SRVDNet) that tries to
generate discriminative, high-resolution images of vehicles from low-resolution
aerial images. First, aerial images are up-scaled by a factor of 4x using a
Multi-scale Generative Adversarial Network (MsGAN), which has multiple
intermediate outputs with increasing resolutions. Second, a detector is trained
on the 4x super-resolved images produced by the MsGAN architecture, and
finally, the detection loss is minimized jointly with the super-resolution loss
to encourage the target detector to be sensitive to the subsequent
super-resolution training. The network jointly learns hierarchical and
discriminative features of targets and produces optimal super-resolution
results. We perform both quantitative and qualitative evaluation of our
proposed network on the VEDAI, xView, and DOTA datasets. The experimental
results show that our proposed framework achieves better visual quality than
the state-of-the-art methods for aerial super-resolution with a 4x up-scaling
factor and improves the accuracy of aerial vehicle detection.
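To make the joint-training idea concrete, here is a hedged sketch of minimizing a super-resolution loss and a detection loss together; `sr_net`, `detector`, `det_targets`, the L1 reconstruction term, and the weight `lam` are placeholder assumptions for illustration, not the paper's actual code or loss definitions.

```python
import torch
import torch.nn.functional as F

def joint_srvd_loss(sr_net, detector, lr_img, hr_img, det_targets, lam=1.0):
    # Super-resolve the low-resolution input (4x in the abstract's setting).
    sr_img = sr_net(lr_img)
    # Pixel reconstruction term; L1 is a common choice, assumed here.
    sr_loss = F.l1_loss(sr_img, hr_img)
    # The detector is assumed to return its own training loss on SR images.
    det_loss = detector(sr_img, det_targets)
    # Minimizing both terms jointly couples the SR network to the detector,
    # so the generated images stay discriminative for detection.
    return sr_loss + lam * det_loss
```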
Towards Multi-class Object Detection in Unconstrained Remote Sensing Imagery
Automatic multi-class object detection in remote sensing images in
unconstrained scenarios is of high interest for several applications including
traffic monitoring and disaster management. The huge variation in object scale,
orientation, and category, together with complex backgrounds and differing
camera sensors, poses great challenges for current algorithms. In this work, we
propose
a new method consisting of a novel joint image cascade and feature pyramid
network with multi-size convolution kernels to extract multi-scale strong and
weak semantic features. These features are fed into rotation-based region
proposal and region of interest networks to produce object detections. Finally,
rotational non-maximum suppression is applied to remove redundant detections.
During training, we minimize joint horizontal and oriented bounding box loss
functions, as well as a novel loss that enforces oriented boxes to be
rectangular. Our method achieves 68.16% mAP on horizontal and 72.45% mAP on
oriented bounding box detection tasks on the challenging DOTA dataset,
outperforming all published methods by a large margin (+6% and +12% absolute
improvement, respectively). Furthermore, it generalizes to two other datasets,
NWPU VHR-10 and UCAS-AOD, and achieves competitive results with the baselines
even when trained on DOTA. Our method can be deployed in multi-class object
detection applications, regardless of the image and object scales and
orientations, making it a great choice for unconstrained aerial and satellite
imagery.
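As an illustration of the rotational non-maximum suppression step this abstract mentions, here is a generic sketch that greedily suppresses oriented boxes by polygon IoU using shapely; the `(cx, cy, w, h, angle_deg)` box encoding and the IoU threshold are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np
from shapely.geometry import Polygon
from shapely import affinity

def _to_polygon(box):
    # box = (cx, cy, w, h, angle_deg): a rectangle rotated about its center.
    cx, cy, w, h, angle = box
    rect = Polygon([(cx - w / 2, cy - h / 2), (cx + w / 2, cy - h / 2),
                    (cx + w / 2, cy + h / 2), (cx - w / 2, cy + h / 2)])
    return affinity.rotate(rect, angle)  # rotates about the centroid by default

def rotated_nms(boxes, scores, iou_thr=0.5):
    # Greedy NMS over oriented boxes: keep the highest-scoring box, then
    # discard any remaining box whose polygon IoU with it exceeds iou_thr.
    order = np.argsort(scores)[::-1]
    polys = [_to_polygon(b) for b in boxes]
    keep = []
    while order.size > 0:
        i, order = order[0], order[1:]
        keep.append(int(i))
        survivors = []
        for j in order:
            inter = polys[i].intersection(polys[j]).area
            union = polys[i].area + polys[j].area - inter
            if union == 0 or inter / union <= iou_thr:
                survivors.append(j)
        order = np.array(survivors, dtype=int)
    return keep
```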