Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in
computer vision, has received great attention in recent years. Its development
in the past two decades can be regarded as an epitome of computer vision
history. If we think of today's object detection as a technical aesthetics
under the power of deep learning, then turning back the clock 20 years we would
witness the wisdom of the cold weapon era. This paper extensively reviews 400+
papers of object detection in the light of its technical evolution, spanning
over a quarter-century's time (from the 1990s to 2019). A number of topics have
been covered in this paper, including the milestone detectors in history,
detection datasets, metrics, fundamental building blocks of the detection
system, speed up techniques, and the recent state of the art detection methods.
This paper also reviews some important detection applications, such as
pedestrian detection, face detection, text detection, etc., and makes an in-depth
analysis of their challenges as well as technical improvements in recent years.
Comment: This work has been submitted to the IEEE TPAMI for possible publication
UDP-YOLO: High Efficiency and Real-Time Performance of Autonomous Driving Technology
In recent years, autonomous driving technology has gradually entered our field of vision. It senses the surrounding environment using radar, laser, ultrasound, GPS, computer vision and other technologies, identifies obstacles and traffic signs, and plans a suitable path to control the vehicle. However, problems arise when this technology is applied in foggy environments: the probability of recognizing objects is low, and some objects cannot be recognized at all, because the degree of fog blur can render the planned path wrong. In view of this defect, and considering that autonomous driving must respond quickly to objects while driving, this paper extends the dark channel prior defogging algorithm and proposes the UDP-YOLO network to apply it to autonomous driving. The work is divided into two parts: 1. Image processing: the dataset is first classified as foggy or fog-free, the foggy images are then defogged by the defogging algorithm, and the defogged images finally undergo adaptive brightness enhancement; 2. Object detection: the UDP-YOLO network proposed in this paper is used to detect objects in the defogged dataset. The results show that the performance of the proposed model is greatly improved while balancing speed and accuracy
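The dark channel prior defogging step this abstract builds on can be sketched as follows. This is a minimal NumPy illustration of the classic dark channel prior dehazing idea (dark channel, airlight estimate, transmission, radiance recovery), not the paper's extended algorithm; the patch size and the `omega`/`t0` defaults are illustrative assumptions.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum across RGB, then a local minimum filter over a patch."""
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_atmospheric_light(img, dark, top_frac=0.001):
    """Average the pixels that are brightest in the dark channel (the airlight A)."""
    h, w = dark.shape
    n = max(1, int(h * w * top_frac))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Recover scene radiance J from the haze model I = J*t + A*(1 - t)."""
    A = estimate_atmospheric_light(img, dark_channel(img, patch))
    t = 1.0 - omega * dark_channel(img / A[None, None, :], patch)
    t = np.clip(t, t0, 1.0)                 # floor on transmission avoids blow-up
    return (img - A) / t[..., None] + A
```

In a pipeline like the one described, the defogged output would then be brightness-enhanced and fed to the detector.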
Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel Aggregation Network
Scene text detection, an important step of scene text reading systems, has
witnessed rapid development with convolutional neural networks. Nonetheless,
two main challenges still exist and hamper its deployment to real-world
applications. The first problem is the trade-off between speed and accuracy.
The second one is to model the arbitrary-shaped text instance. Recently, some
methods have been proposed to tackle arbitrary-shaped text detection, but they
rarely take the speed of the entire pipeline into consideration, which may fall
short in practical applications. In this paper, we propose an efficient and
accurate arbitrary-shaped text detector, termed Pixel Aggregation Network
(PAN), which is equipped with a low computational-cost segmentation head and a
learnable post-processing. More specifically, the segmentation head is made up
of Feature Pyramid Enhancement Module (FPEM) and Feature Fusion Module (FFM).
FPEM is a cascadable U-shaped module, which can introduce multi-level
information to guide better segmentation. FFM can gather the features given
by the FPEMs of different depths into a final feature for segmentation. The
learnable post-processing is implemented by Pixel Aggregation (PA), which can
precisely aggregate text pixels by predicted similarity vectors. Experiments on
several standard benchmarks validate the superiority of the proposed PAN. It is
worth noting that our method can achieve a competitive F-measure of 79.9% at
84.2 FPS on CTW1500.
Comment: Accepted by ICCV 2019
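The Pixel Aggregation idea, assigning text pixels to kernels by the distance between their predicted similarity vectors, can be sketched as follows. This is a simplified NumPy illustration under assumed inputs (a binary text mask, integer kernel labels, a per-pixel embedding map, and a distance threshold), not the paper's exact implementation.

```python
import numpy as np

def pixel_aggregation(text_mask, kernel_labels, embeddings, dist_thresh=0.8):
    """Each unassigned text pixel joins the kernel whose mean embedding
    (its "similarity vector") is nearest, if within dist_thresh; otherwise
    it is left as background (label 0)."""
    n_kernels = kernel_labels.max()
    # Mean embedding per kernel acts as that kernel's similarity vector.
    centers = np.stack([embeddings[kernel_labels == k].mean(axis=0)
                        for k in range(1, n_kernels + 1)])
    labels = kernel_labels.copy()
    free = text_mask & (kernel_labels == 0)       # text pixels not yet assigned
    pix = embeddings[free]                        # (n_free, dim)
    d = np.linalg.norm(pix[:, None, :] - centers[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    ok = d[np.arange(len(pix)), nearest] < dist_thresh
    labels[free] = np.where(ok, nearest + 1, 0)
    return labels
```

The thresholded nearest-center assignment is what makes the post-processing cheap: it is a single pass over the text pixels rather than an iterative clustering.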
Synthetic Aperture Radar (SAR) Meets Deep Learning
This reprint focuses on the combination of synthetic aperture radar (SAR) and deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar is an important active microwave imaging sensor whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in computer vision, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to tackle the above challenges and present innovative, cutting-edge results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports
Precise Single-stage Detector
There are still two problems in SSD causing some inaccurate results: (1) In
the process of feature extraction, with the layer-by-layer acquisition of
semantic information, local information is gradually lost, resulting in less
representative feature maps; (2) During the Non-Maximum Suppression (NMS)
algorithm, due to inconsistency between the classification and regression tasks,
the classification confidence and predicted detection position cannot accurately
indicate the quality of the prediction boxes. Methods: In order to address
these aforementioned issues, we propose a new architecture, a modified version
of Single Shot Multibox Detector (SSD), named Precise Single Stage Detector
(PSSD). Firstly, we improve the features by adding extra layers to SSD.
Secondly, we construct a simple and effective feature enhancement module to
expand the receptive field step by step for each layer and enhance its local
and semantic information. Finally, we design a more efficient loss function to
predict the IOU between the prediction boxes and ground truth boxes, and the
threshold IOU guides classification training and attenuates the scores, which
are used by the NMS algorithm. Main Results: Benefiting from the above
optimization, the proposed model PSSD achieves exciting performance in
real-time. Specifically, with the hardware of Titan Xp and the input size of
320 pix, PSSD achieves 33.8 mAP at 45 FPS on the MS COCO benchmark and 81.28
mAP at 66 FPS on Pascal VOC 2007, outperforming state-of-the-art object
detection models. Besides, the proposed model performs significantly well with
larger input size. Under 512 pix, PSSD can obtain 37.2 mAP with 27 FPS on MS
COCO and 82.82 mAP with 40 FPS on Pascal VOC 2007. The experiment results prove
that the proposed model has a better trade-off between speed and accuracy.
Comment: We will submit it soon to an IEEE Transactions journal. Due to character
limitations, we cannot upload the full abstract. Please read the PDF file for
more details.
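The IoU-guided scoring described above can be illustrated with a small sketch: computing the IoU between prediction and ground-truth boxes, and fusing a predicted IoU with the classification confidence so that NMS ranks boxes by localization quality as well as class score. The geometric-mean fusion rule and the `alpha` parameter are illustrative assumptions, not the paper's exact loss formulation.

```python
import numpy as np

def box_iou(a, b):
    """Pairwise IoU between two sets of boxes in (x1, y1, x2, y2) form."""
    ax1, ay1, ax2, ay2 = a.T
    bx1, by1, bx2, by2 = b.T
    ix1 = np.maximum(ax1[:, None], bx1[None, :])
    iy1 = np.maximum(ay1[:, None], by1[None, :])
    ix2 = np.minimum(ax2[:, None], bx2[None, :])
    iy2 = np.minimum(ay2[:, None], by2[None, :])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def iou_aware_score(cls_conf, pred_iou, alpha=0.5):
    """Fuse classification confidence with a predicted IoU before NMS,
    so a well-classified but poorly-localized box is down-weighted."""
    return cls_conf ** alpha * pred_iou ** (1 - alpha)
```

With such a fused score, NMS keeps the box that is both confident and well localized, instead of the one with the highest raw classification confidence.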
Towards Large-Scale Small Object Detection: Survey and Benchmarks
With the rise of deep convolutional neural networks, object detection has
achieved prominent advances in past years. However, such prosperity could not
camouflage the unsatisfactory situation of Small Object Detection (SOD), one of
the notoriously challenging tasks in computer vision, owing to the poor visual
appearance and noisy representation caused by the intrinsic structure of small
targets. In addition, large-scale dataset for benchmarking small object
detection methods remains a bottleneck. In this paper, we first conduct a
thorough review of small object detection. Then, to catalyze the development of
SOD, we construct two large-scale Small Object Detection dAtasets (SODA),
SODA-D and SODA-A, which focus on the Driving and Aerial scenarios
respectively. SODA-D includes 24828 high-quality traffic images and 278433
instances of nine categories. For SODA-A, we harvest 2513 high resolution
aerial images and annotate 872069 instances over nine classes. The proposed
datasets, to our knowledge, are the first attempt at large-scale benchmarks with
a vast collection of exhaustively annotated instances tailored for
multi-category SOD. Finally, we evaluate the performance of mainstream methods
on SODA. We expect the released benchmarks could facilitate the development of
SOD and spawn more breakthroughs in this field. Datasets and codes are
available at: \url{https://shaunyuan22.github.io/SODA}
Deep Learning-Based Object Detection in Maritime Unmanned Aerial Vehicle Imagery: Review and Experimental Comparisons
With the advancement of maritime unmanned aerial vehicles (UAVs) and deep
learning technologies, the application of UAV-based object detection has become
increasingly significant in the fields of maritime industry and ocean
engineering. Endowed with intelligent sensing capabilities, the maritime UAVs
enable effective and efficient maritime surveillance. To further promote the
development of maritime UAV-based object detection, this paper provides a
comprehensive review of challenges, relative methods, and UAV aerial datasets.
Specifically, in this work, we first briefly summarize four challenges for
object detection on maritime UAVs, i.e., object feature diversity, device
limitation, maritime environment variability, and dataset scarcity. We then
focus on computational methods to improve maritime UAV-based object detection
performance in terms of scale-aware, small object detection, view-aware,
rotated object detection, lightweight methods, and others. Next, we review the
UAV aerial image/video datasets and propose a maritime UAV aerial dataset named
MS2ship for ship detection. Furthermore, we conduct a series of experiments to
present the performance evaluation and robustness analysis of object detection
methods on maritime datasets. Eventually, we give the discussion and outlook on
future works for maritime UAV-based object detection. The MS2ship dataset is
available at
\href{https://github.com/zcj234/MS2ship}{https://github.com/zcj234/MS2ship}.
Comment: 32 pages, 18 figures