YOLO-Drone: Airborne real-time detection of dense small objects from high-altitude perspective
Unmanned Aerial Vehicles (UAVs), specifically drones equipped with remote
sensing object detection technology, have rapidly gained a broad spectrum of
applications and emerged as one of the primary research focuses in the field of
computer vision. Although UAV remote sensing systems have the ability to detect
various objects, small-scale objects can be challenging to detect reliably due
to factors such as object size, image degradation, and real-time limitations.
To tackle these issues, a real-time object detection algorithm (YOLO-Drone) is
proposed and applied to two new UAV platforms as well as a specific light
source (silicon-based golden LED). YOLO-Drone introduces several novelties: 1)
a new backbone, Darknet59; 2) a new feature aggregation module, MSPP-FPN, which
incorporates one spatial pyramid pooling module and three atrous spatial
pyramid pooling modules; and 3) the use of Generalized Intersection over Union
(GIoU) as the loss function. To evaluate performance, two benchmark datasets,
UAVDT and VisDrone, along with one homemade dataset acquired at night under
silicon-based golden LEDs, are utilized. The experimental results show that, in
both UAVDT and VisDrone, the proposed YOLO-Drone outperforms state-of-the-art
(SOTA) object detection methods, improving mAP by 10.13% and 8.59%,
respectively. On UAVDT, YOLO-Drone achieves a real-time inference
speed of 53 FPS alongside a maximum mAP of 34.04%. Notably,
YOLO-Drone achieves high performance under the silicon-based golden LEDs, with
a mAP of up to 87.71%, surpassing the YOLO series under ordinary light
sources. In conclusion, the proposed YOLO-Drone is a highly effective
solution for object detection in UAV applications, particularly for night
detection tasks, where silicon-based golden LED technology shows
significant advantages.
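As a rough illustration of the GIoU loss named above (a minimal sketch, not the authors' code), the quantity for two axis-aligned boxes can be written in plain Python; the `giou_loss` helper name and the (x1, y1, x2, y2) box convention are assumptions for the example:

```python
def giou_loss(box_a, box_b):
    """1 - GIoU for two axis-aligned boxes given as (x1, y1, x2, y2).

    GIoU = IoU minus the fraction of the smallest enclosing box C
    not covered by the union of A and B; the loss lies in [0, 2].
    """
    # Intersection area
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    # Smallest enclosing box C
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    giou = inter / union - (area_c - union) / area_c
    return 1.0 - giou
```

Unlike plain IoU, the enclosing-box penalty yields a nonzero gradient even when predicted and ground-truth boxes do not overlap, which makes GIoU attractive for small, easily missed objects.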
Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network
The detection performance of small objects in remote sensing images is not
satisfactory compared to large objects, especially in low-resolution and noisy
images. A generative adversarial network (GAN)-based model called enhanced
super-resolution GAN (ESRGAN) shows remarkable image enhancement performance,
but reconstructed images miss high-frequency edge information. Therefore,
object detection performance degrades for small objects on recovered noisy and
low-resolution remote sensing images. Inspired by the success of the edge-enhanced
GAN (EEGAN) and ESRGAN, we apply a new edge-enhanced super-resolution GAN
(EESRGAN) to improve the image quality of remote sensing images and use
different detector networks in an end-to-end manner where detector loss is
backpropagated into the EESRGAN to improve the detection performance. We
propose an architecture with three components: ESRGAN, Edge Enhancement Network
(EEN), and a detection network. We use residual-in-residual dense blocks (RRDB)
for both the ESRGAN and the EEN; for the detector network, we use the faster
region-based convolutional network (FRCNN, a two-stage detector) and the
single-shot multi-box detector (SSD, a one-stage detector). Extensive experiments on a
public (car overhead with context) and a self-assembled (oil and gas storage
tank) satellite dataset show superior performance of our method compared to the
standalone state-of-the-art object detectors.

Comment: This paper contains 27 pages and was accepted for publication in the
MDPI journal Remote Sensing. GitHub repository:
https://github.com/Jakaria08/EESRGAN (Implementation
- …
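As a loose sketch of the residual-in-residual dense block (RRDB) structure that underlies both the ESRGAN generator and the EEN (not the repository's implementation), the idea can be mimicked with toy linear maps standing in for convolutions; the function names and the `beta` residual-scaling factor are assumptions for the example:

```python
import numpy as np

def dense_block(x, layer_weights, beta=0.2):
    """Toy residual dense block: each 'layer' (a linear map standing in
    for a conv) sees the concatenation of all earlier features, and the
    final feature is added back to the input with residual scaling."""
    feats = [x]
    for w in layer_weights:
        h = np.maximum(0.0, np.concatenate(feats, axis=-1) @ w)  # ReLU stand-in
        feats.append(h)
    return x + beta * feats[-1]

def rrdb(x, blocks, beta=0.2):
    """Residual-in-residual: chain dense blocks, then add the outer skip."""
    h = x
    for layer_weights in blocks:
        h = dense_block(h, layer_weights, beta)
    return x + beta * h
```

Each weight matrix must map the concatenated feature width back to the channel width of `x`. The nested skip connections keep gradients well behaved through very deep stacks, which is what lets a detector loss propagate all the way back into the super-resolution generator in an end-to-end setup like the one described above.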