Unmanned Aerial Systems for Wildland and Forest Fires
Wildfires represent a major natural hazard, causing economic losses, human
deaths and severe environmental damage. In recent years, we have witnessed an
increase in fire intensity and frequency. Research has been conducted towards
the development of dedicated solutions for wildland and forest fire assistance
and fighting. Systems were proposed for the remote detection and tracking of
fires. These systems have shown improvements in the area of efficient data
collection and fire characterization within small scale environments. However,
wildfires cover large areas making some of the proposed ground-based systems
unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial
Systems (UAS) were proposed. UAS have proven to be useful due to their
maneuverability, allowing for the implementation of remote sensing, allocation
strategies and task planning. They can provide a low-cost alternative for the
prevention, detection and real-time support of firefighting. In this paper we
review previous work related to the use of UAS in wildfires. Onboard sensor
instruments, fire perception algorithms and coordination strategies are
considered. In addition, we present some of the recent frameworks proposing the
use of both aerial vehicles and Unmanned Ground Vehicles (UGV) for a more
efficient wildland firefighting strategy at a larger scale.
Comment: A recently published version of this paper is available at:
https://doi.org/10.3390/drones501001
Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets
Autonomous navigation and path-planning around non-cooperative space objects
is an enabling technology for on-orbit servicing and space debris removal
systems. The navigation task includes the determination of target object
motion, the identification of target object features suitable for grasping, and
the identification of collision hazards and other keep-out zones. Given this
knowledge, chaser spacecraft can be guided towards capture locations without
damaging the target object and without unduly disrupting the operations of a
serviced spacecraft by covering up solar arrays or communication antennas. One way to
autonomously achieve target identification, characterization and feature
recognition is by use of artificial intelligence algorithms. This paper
discusses how the combination of cameras and machine learning algorithms can
achieve the relative navigation task. The performance of two deep
learning-based object detection algorithms, Faster Region-based Convolutional
Neural Networks (R-CNN) and You Only Look Once (YOLOv5), is tested using
experimental data obtained in formation flight simulations in the ORION Lab at
Florida Institute of Technology. The simulation scenarios vary the yaw motion
of the target object, the chaser approach trajectory, and the lighting
conditions in order to test the algorithms in a wide range of realistic and
performance limiting situations. The data analyzed include the mean average
precision metrics in order to compare the performance of the object detectors.
The paper discusses the path to implementing the feature recognition algorithms
and towards integrating them into the spacecraft Guidance Navigation and
Control system.
Comment: 12 pages, 10 figures, 9 tables, IEEE Aerospace Conference 202
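Since mean average precision is the yardstick used above to compare Faster R-CNN and YOLOv5, a minimal sketch of how AP and mAP are computed may help; the all-point interpolation shown here is one common convention, not necessarily the exact variant used in the paper:

```python
def average_precision(recalls, precisions):
    """All-point interpolated AP: area under the precision-recall curve,
    with precision made monotonically non-increasing from right to left."""
    # Pad the curve so it spans recall 0..1.
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Enforce the precision envelope (sweep right to left).
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangular areas wherever recall increases.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))

def mean_average_precision(ap_per_class):
    """mAP is the per-class AP averaged over all object classes."""
    return sum(ap_per_class.values()) / len(ap_per_class)
```

A detector whose precision-recall points are (0.5, 1.0) and (1.0, 0.5) gets an AP of 0.75 under this convention; comparing the mAP of two detectors on the same test set is then a single-number comparison.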
Joint-SRVDNet: Joint Super Resolution and Vehicle Detection Network
In many domestic and military applications, aerial vehicle detection and
super-resolution algorithms are frequently developed and applied independently.
However, aerial vehicle detection on super-resolved images remains a
challenging task due to the lack of discriminative information in the
super-resolved images. To address this problem, we propose a Joint
Super-Resolution and Vehicle Detection Network (Joint-SRVDNet) that tries to
generate discriminative, high-resolution images of vehicles from low-resolution
aerial images. First, aerial images are up-scaled by a factor of 4x using a
Multi-scale Generative Adversarial Network (MsGAN), which has multiple
intermediate outputs with increasing resolutions. Second, a detector is trained
on super-resolved images that are upscaled by a factor of 4x using the MsGAN
architecture and, finally, the detection loss is minimized jointly with the
super-resolution loss to encourage the target detector to be sensitive to the
subsequent super-resolution training. The network jointly learns hierarchical
and discriminative features of targets and produces optimal super-resolution
results. We perform both quantitative and qualitative evaluation of our
proposed network on the VEDAI, xView and DOTA datasets. The experimental
results show that our proposed framework achieves better visual quality than
the state-of-the-art methods for aerial super-resolution with 4x up-scaling
factor and improves the accuracy of aerial vehicle detection.
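The joint objective described above, minimizing the detection loss together with the super-resolution loss, can be sketched as a simple weighted sum; the weight `lam` and the function name are illustrative assumptions, not the paper's actual formulation:

```python
def joint_srvd_loss(sr_loss, det_loss, lam=0.1):
    """Joint objective: super-resolution reconstruction loss plus a weighted
    detection loss, so the generator is pushed toward producing images that
    remain discriminative for the downstream vehicle detector."""
    return sr_loss + lam * det_loss
```

Minimizing this sum couples the two tasks: gradients from the detector flow back into the super-resolution network, penalizing upscaled images that look plausible but lose the features the detector relies on.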
Design of a Quadcopter for Search and Rescue Operation in Natural Calamities
In the last few decades there have been many climatic changes caused by human interference with the natural workings of the environment, which has in turn produced significant shifts in the world's climate and weather. Whatever the underlying reasons may be, an immediate solution is highly necessary, and it lies within ourselves. The solution proposed here is to save people's lives and property from being damaged because of the negligence we have shown so far. Unmanned aerial vehicles can be of great help in this regard: they can fly at high altitudes, capture images of areas affected by natural disasters, and help provide immediate assistance as necessary. The copters can send wireless messages to the control sites that handle such delicate issues and can thus deliver help to those most in need.
An Approach Of Features Extraction And Heatmaps Generation Based Upon Cnns And 3D Object Models
The rapid advancements in artificial intelligence have enabled recent progress of self-driving vehicles. However, the dependence on 3D object models and their annotations collected and owned by individual companies has become a major problem for the development of new algorithms. This thesis proposes directly using graphics models created from open-source datasets as the virtual representation of real-world objects. This approach uses Machine Learning techniques to extract 3D feature points and to create annotations from graphics models for the recognition of dynamic objects, such as cars, and for the verification of stationary and variable objects, such as buildings and trees. Moreover, it generates heat maps for the elimination of stationary/variable objects in real-time images before working on the recognition of dynamic objects. The proposed approach helps to bridge the gap between the virtual and physical worlds and to facilitate the development of new algorithms for self-driving vehicles.
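As a toy illustration of the heat-map step described above, suppressing stationary/variable regions before dynamic-object recognition, one might threshold a per-pixel probability map; the nested-list image representation, the 0.5 threshold, and the function name are assumptions made for illustration only:

```python
def mask_stationary(image, heatmap, threshold=0.5):
    """image: H x W x C nested lists; heatmap: H x W per-pixel probability
    that a stationary/variable object (building, tree) occupies the pixel.
    Pixels at or above the threshold are zeroed out, so the dynamic-object
    recognizer only sees candidate regions."""
    return [
        [
            pixel if heatmap[i][j] < threshold else [0] * len(pixel)
            for j, pixel in enumerate(row)
        ]
        for i, row in enumerate(image)
    ]
```

In practice the heat map would come from the graphics-model verification stage; here it is just any array of probabilities aligned with the image grid.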
Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in
computer vision, has received great attention in recent years. Its development
in the past two decades can be regarded as an epitome of computer vision
history. If we think of today's object detection as a technical aesthetics
under the power of deep learning, then turning back the clock 20 years we would
witness the wisdom of cold weapon era. This paper extensively reviews 400+
papers of object detection in the light of its technical evolution, spanning
over a quarter-century's time (from the 1990s to 2019). A number of topics have
been covered in this paper, including the milestone detectors in history,
detection datasets, metrics, fundamental building blocks of the detection
system, speed up techniques, and the recent state of the art detection methods.
This paper also reviews some important detection applications, such as
pedestrian detection, face detection, text detection, etc., and makes an
in-depth analysis of their challenges as well as technical improvements in recent years.
Comment: This work has been submitted to the IEEE TPAMI for possible
publication
Object Tracking Based on Satellite Videos: A Literature Review
Video satellites have recently become an attractive method of Earth observation, providing consecutive images of the Earth’s surface for continuous monitoring of specific events. The development of on-board optical and communication systems has enabled the various applications of satellite image sequences. However, satellite video-based target tracking is a challenging research topic in remote sensing due to its relatively low spatial and temporal resolution. Thus, this survey systematically investigates current satellite video-based tracking approaches and benchmark datasets, focusing on five typical tracking applications: traffic target tracking, ship tracking, typhoon tracking, fire tracking, and ice motion tracking. The essential aspects of each tracking target are summarized, such as the tracking architecture, the fundamental characteristics, primary motivations, and contributions. Furthermore, popular visual tracking benchmarks and their respective properties are discussed. Finally, a revised multi-level dataset based on WPAFB videos is generated and quantitatively evaluated for future development in the satellite video-based tracking area. In addition, 54.3% of the tracklets with lower Difficulty Score (DS) are selected and renamed as the Easy group, while 27.2% and 18.5% of the tracklets are grouped into the Medium-DS group and the Hard-DS group, respectively.
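The Easy/Medium/Hard split described above can be sketched as a sort-and-partition over Difficulty Scores; the fractions come from the survey, while the tracklet representation, the `"ds"` field name, and the rounding rule are assumptions:

```python
def group_by_difficulty(tracklets, easy_frac=0.543, medium_frac=0.272):
    """Sort tracklets by ascending Difficulty Score and split them into the
    Easy / Medium-DS / Hard-DS groups (54.3% / 27.2% / 18.5% in the revised
    WPAFB-based dataset); whatever remains after the first two cuts is Hard."""
    ordered = sorted(tracklets, key=lambda t: t["ds"])
    n_easy = round(len(ordered) * easy_frac)
    n_medium = round(len(ordered) * medium_frac)
    return {
        "Easy": ordered[:n_easy],
        "Medium": ordered[n_easy:n_easy + n_medium],
        "Hard": ordered[n_easy + n_medium:],
    }
```

For 10 tracklets this yields groups of 5, 3, and 2, with every Easy tracklet scoring lower than every Hard one.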