A novel infrared video surveillance system using deep learning based techniques
This is the author accepted manuscript. The final version is available from Springer via the DOI in this record. This paper presents a new, practical infrared video based surveillance
system, consisting of a resolution-enhanced, automatic target detection/recognition
(ATD/R) system that is widely applicable in civilian and military applications. To
deal with the issue of a small number of pixels on target in the developed ATD/R
system, as are encountered in long range imagery, a super-resolution method is
employed to increase target signature resolution and optimise the baseline quality
of inputs for object recognition. To tackle the challenge of detecting extremely
low-resolution targets, we train a convolutional neural
network (CNN) based Faster R-CNN detector using long wave infrared imagery datasets
that were prepared and marked in-house. The system was tested under different
weather conditions, using two datasets featuring target types comprising pedestrians
and six different types of ground vehicles. The developed ATD/R system can
detect extremely low-resolution targets with superior performance by effectively
addressing the small number of pixels on target encountered in long range applications.
A comparison with traditional methods confirms this superiority both
qualitatively and quantitatively.
This work was funded by Thales UK, the Centre of Excellence for
Sensor and Imaging System (CENSIS), and the Scottish Funding Council under the project
“AALART. Thales-Challenge Low-pixel Automatic Target Detection and Recognition (ATD/ATR)”,
ref. CAF-0036. Thanks are also given to the Digital Health and Care Institute (DHI, project
Smartcough-MacMasters), which partially supported Mr. Monge-Alvarez’s contribution, and
to the Royal Society of Edinburgh and the National Science Foundation of China for the funding
associated with the project “Flood Detection and Monitoring using Hyperspectral Remote Sensing
from Unmanned Aerial Vehicles”, which partially covered Dr. Casaseca-de-la-Higuera’s,
Dr. Luo’s, and Prof. Wang’s contributions. Dr. Casaseca-de-la-Higuera would also like to acknowledge
the Royal Society of Edinburgh for the funding associated with the project “HIVE”.
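The two-stage idea in the abstract above, enhancing the resolution of a small target signature before handing it to a detector, might be sketched as follows. This is a minimal illustration only: plain bilinear upsampling stands in for the paper's super-resolution method, the function names are ours, and the detector stage is omitted.

```python
import numpy as np

def bilinear_upsample(img: np.ndarray, scale: int) -> np.ndarray:
    """Upscale a 2-D image chip by an integer factor with bilinear
    interpolation (a hypothetical stand-in for super-resolution)."""
    h, w = img.shape
    H, W = h * scale, w * scale
    # For every output pixel, find its fractional position in the source.
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Blend the four surrounding source pixels.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# A 4x4 low-resolution "target chip" upscaled 4x before detection.
chip = np.arange(16, dtype=float).reshape(4, 4)
enhanced = bilinear_upsample(chip, 4)
print(enhanced.shape)  # (16, 16)
```

In the actual system the enhanced chip would then be passed to the trained detector; here the point is only that resolution enhancement is a preprocessing step that raises the pixel count on target before recognition.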
Box-level Segmentation Supervised Deep Neural Networks for Accurate and Real-time Multispectral Pedestrian Detection
Effective fusion of complementary information captured by multi-modal sensors
(visible and infrared cameras) enables robust pedestrian detection under
various surveillance situations (e.g. daytime and nighttime). In this paper, we
present a novel box-level segmentation supervised learning framework for
accurate and real-time multispectral pedestrian detection by incorporating
features extracted in visible and infrared channels. Specifically, our method
takes pairs of aligned visible and infrared images with easily obtained
bounding box annotations as input and estimates accurate prediction maps to
highlight the existence of pedestrians. It offers two major advantages over the
existing anchor box based multispectral detection methods. Firstly, it
overcomes the hyperparameter setting problem that occurs during the training phase
of anchor box based detectors and can obtain more accurate detection results,
especially for small and occluded pedestrian instances. Secondly, it is capable
of generating accurate detection results using small-size input images, leading
to improvement of computational efficiency for real-time autonomous driving
applications. Experimental results on the KAIST multispectral dataset show that our
proposed method outperforms state-of-the-art approaches in terms of both
accuracy and speed.
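The box-level supervision and two-channel fusion described in the abstract above can be illustrated with a toy sketch. This is a hypothetical simplification: the learned fusion network is replaced by per-channel normalisation and averaging, and all names are illustrative.

```python
import numpy as np

def fuse_prediction_map(vis: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Combine visible and infrared response maps into one pedestrian
    prediction map (illustrative stand-in for the learned fusion)."""
    def norm(x):
        # Min-max normalise so both modalities contribute on the same scale.
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return 0.5 * (norm(vis) + norm(ir))

def boxes_to_mask(boxes, shape):
    """Rasterise bounding-box annotations into the box-level
    segmentation mask used as training supervision."""
    mask = np.zeros(shape, dtype=float)
    for y0, x0, y1, x1 in boxes:
        mask[y0:y1, x0:x1] = 1.0
    return mask

vis = np.random.rand(8, 8)   # toy visible-channel response
ir = np.random.rand(8, 8)    # toy infrared-channel response
pred = fuse_prediction_map(vis, ir)
target = boxes_to_mask([(2, 2, 6, 5)], (8, 8))
print(pred.shape, target.sum())  # (8, 8) 12.0
```

The appeal of box-level masks is visible even in this toy: the supervision signal is dense (every pixel inside a box is positive), yet it costs no more annotation effort than the bounding boxes an anchor-based detector would need anyway.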
Unmanned Aerial Systems for Wildland and Forest Fires
Wildfires represent an important natural risk causing economic losses, human
deaths and significant environmental damage. In recent years, we have witnessed an
increase in fire intensity and frequency. Research has been conducted towards
the development of dedicated solutions for wildland and forest fire assistance
and fighting. Systems were proposed for the remote detection and tracking of
fires. These systems have shown improvements in the area of efficient data
collection and fire characterization within small scale environments. However,
wildfires cover large areas making some of the proposed ground-based systems
unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial
Systems (UAS) were proposed. UAS have proven to be useful due to their
maneuverability, allowing for the implementation of remote sensing, allocation
strategies and task planning. They can provide a low-cost alternative for the
prevention, detection and real-time support of firefighting. In this paper we
review previous work related to the use of UAS in wildfires. Onboard sensor
instruments, fire perception algorithms and coordination strategies are
considered. In addition, we present some of the recent frameworks proposing the
use of both aerial vehicles and Unmanned Ground Vehicles (UGV) for a more
efficient wildland firefighting strategy at a larger scale.
Comment: A recently published version of this paper is available at:
https://doi.org/10.3390/drones501001
Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps
Hyperspectral cameras can provide unique spectral signatures for consistently
distinguishing materials that can be used to solve surveillance tasks. In this
paper, we propose a novel real-time hyperspectral likelihood maps-aided
tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving
object tracking system generally consists of registration, object detection,
and tracking modules. We focus on the target detection part and remove the
necessity to build any offline classifiers and tune a large amount of
hyperparameters, instead learning a generative target model in an online manner
for hyperspectral channels ranging from visible to infrared wavelengths. The
key idea is that our adaptive fusion method can combine likelihood maps from
multiple bands of hyperspectral imagery into a single, more distinctive
representation, increasing the margin between the mean values of foreground and
background pixels in the fused map. Experimental results show that the HLT not
only outperforms all established fusion methods but is on par with the current
state-of-the-art hyperspectral target tracking frameworks.
Comment: Accepted at the International Conference on Computer Vision and
Pattern Recognition Workshops, 201
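The adaptive fusion idea in the abstract above, weighting each spectral band by how well it separates foreground from background means, might be sketched as follows. This is a simplified, hypothetical stand-in for the paper's online method; the weighting rule and function names are ours.

```python
import numpy as np

def adaptive_fuse(likelihoods: np.ndarray, fg_mask: np.ndarray) -> np.ndarray:
    """Fuse per-band likelihood maps (bands, H, W) with weights
    proportional to each band's foreground/background mean margin."""
    bg_mask = ~fg_mask
    # Margin per band: mean likelihood on target minus mean on background.
    margins = np.array([
        band[fg_mask].mean() - band[bg_mask].mean() for band in likelihoods
    ])
    weights = np.clip(margins, 0, None)  # drop bands with inverted contrast
    if weights.sum() == 0:
        weights = np.ones_like(weights)  # fall back to a plain average
    weights /= weights.sum()
    # Weighted sum across bands -> one fused, more discriminative map.
    return np.tensordot(weights, likelihoods, axes=1)

# Three synthetic 8x8 band likelihood maps with a 3x3 target region.
fg_mask = np.zeros((8, 8), dtype=bool)
fg_mask[2:5, 2:5] = True
good = np.where(fg_mask, 0.9, 0.1)       # strong target contrast
flat = np.full((8, 8), 0.5)              # uninformative band
inverted = np.where(fg_mask, 0.2, 0.8)   # contrast in the wrong direction
bands = np.stack([good, flat, inverted])
fused = adaptive_fuse(bands, fg_mask)
print(fused[fg_mask].mean() - fused[~fg_mask].mean())
```

With these toy bands the fusion puts all its weight on the one band with positive margin, so the fused map inherits that band's full foreground/background separation; uninformative and contrast-inverted bands are suppressed rather than averaged in.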