Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps
Hyperspectral cameras can provide unique spectral signatures for consistently
distinguishing materials that can be used to solve surveillance tasks. In this
paper, we propose a novel real-time hyperspectral likelihood maps-aided
tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving
object tracking system generally consists of registration, object detection,
and tracking modules. We focus on the target detection part and remove the
necessity to build any offline classifiers and tune a large amount of
hyperparameters, instead learning a generative target model in an online manner
for hyperspectral channels ranging from visible to infrared wavelengths. The
key idea is that our adaptive fusion method combines likelihood maps from
multiple bands of hyperspectral imagery into a single, more distinctive
representation that increases the margin between the mean values of foreground
and background pixels in the fused map. Experimental results show that the HLT
not only outperforms all established fusion methods but is also on par with
current state-of-the-art hyperspectral target tracking frameworks.
Comment: Accepted at the International Conference on Computer Vision and Pattern Recognition Workshops, 201
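The band-weighted fusion idea can be sketched as follows. This is a minimal illustration, not the paper's exact method: the assumption here is that each band is weighted in proportion to its own foreground/background margin before the maps are combined.

```python
import numpy as np

def fuse_likelihood_maps(maps, fg_mask):
    """Fuse per-band likelihood maps into one map, weighting each band
    by how well it separates foreground from background pixels.

    maps:    (B, H, W) array, one likelihood map per spectral band
    fg_mask: (H, W) boolean mask of the current target estimate
    """
    weights = np.empty(len(maps))
    for b, m in enumerate(maps):
        fg_mean = m[fg_mask].mean()
        bg_mean = m[~fg_mask].mean()
        weights[b] = max(fg_mean - bg_mean, 0.0)  # keep only discriminative bands
    if weights.sum() == 0:
        weights[:] = 1.0  # fall back to uniform weighting
    weights /= weights.sum()
    # The weighted combination widens the foreground/background margin
    return np.tensordot(weights, maps, axes=1)
```

Under this weighting, an uninformative band (identical foreground and background means) receives zero weight and drops out of the fused representation.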
Gaussian mixture model classifiers for detection and tracking in UAV video streams.
Master's degree, University of KwaZulu-Natal, Durban.
Manual visual surveillance systems are subject to a high degree of human error and operator fatigue. The automation of such systems often employs detectors, trackers and classifiers as fundamental building blocks. Detection, tracking and classification are especially useful and challenging in Unmanned Aerial Vehicle (UAV) based surveillance systems. Previous solutions have addressed these challenges via complex classification methods. This dissertation proposes less complex Gaussian Mixture Model (GMM) based classifiers that can simplify the process: data is represented as a reduced set of model parameters, and classification is performed in the low-dimensional parameter space. The specification and adoption of GMM based classifiers on the UAV visual tracking feature space formed the principal contribution of the work. This methodology can be generalised to other feature spaces.
This dissertation presents two main contributions in the form of submissions to ISI accredited journals. In the first paper, the objectives are demonstrated with a vehicle detector incorporating a two-stage GMM classifier, applied to a single feature space, namely Histogram of Oriented Gradients (HoG). The second paper demonstrates the objectives with a vehicle tracker using colour histograms (in RGB and HSV), GMM classifiers and a Kalman filter.
The proposed methods are comparable to related work, with testing performed on benchmark datasets. In the tracking domain for such platforms, tracking alone is insufficient: adaptive detection and classification can assist in search-space reduction, the building of knowledge priors, and improved target representations. Results show that the proposed approach improves performance and robustness. Findings also indicate potential further enhancements, such as a multi-mode tracker with global and local tracking based on a combination of both papers.
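The idea of classifying in a low-dimensional parameter space can be sketched with a per-class Gaussian model. This is a deliberately minimal, single-component stand-in for the dissertation's GMM classifiers: each class is reduced to a mean vector and a diagonal covariance, and classification is a maximum-likelihood comparison of those parameters.

```python
import numpy as np

class GaussianClassifier:
    """Minimal per-class Gaussian (single-component GMM) classifier.
    Each class is summarised by a mean and a diagonal variance, and
    prediction picks the class with the highest log-likelihood."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            var = Xc.var(axis=0) + 1e-6  # regularise to avoid zero variance
            self.params_[c] = (mu, var)
        return self

    def _log_likelihood(self, X, mu, var):
        # Diagonal Gaussian log-density, summed over feature dimensions
        return -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)

    def predict(self, X):
        scores = np.stack([self._log_likelihood(X, *self.params_[c])
                           for c in self.classes_], axis=1)
        return self.classes_[np.argmax(scores, axis=1)]
```

A full GMM would replace each single Gaussian with a weighted mixture fitted by EM; the classification rule (maximum class likelihood over the fitted parameters) stays the same.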
Robust Tracking in Aerial Imagery Based on an Ego-Motion Bayesian Model
A novel strategy for object tracking in aerial imagery is presented, which is able to deal with complex situations where the camera ego-motion cannot be reliably estimated due to the aperture problem (related to low-structured scenes), strong ego-motion, and/or the presence of independently moving objects. The proposed algorithm is based on a rich model of the dynamic information, which simulates both the object and the camera dynamics to predict the putative object locations. In this model, the camera dynamics is probabilistically formulated as a weighted set of affine transformations that represent possible camera ego-motions. This dynamic model is used in a Particle Filter framework to distinguish the actual object location among the multiple candidates that result from complex cluttered backgrounds and the presence of several moving objects. The proposed strategy has been tested on the aerial FLIR AMCOM dataset, and its performance has also been compared with other tracking techniques to demonstrate its efficiency.
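The prediction step described above, in which particles are propagated under a weighted set of affine ego-motion hypotheses plus object dynamics, can be sketched as follows. The function names and the Gaussian object-dynamics model are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def predict_particles(particles, affines, affine_weights, rng, noise=2.0):
    """Propagate tracking particles under a mixture of affine camera
    ego-motion hypotheses, then add object-dynamics noise.

    particles:      (N, 2) object positions in image coordinates
    affines:        list of (A, t) pairs: a 2x2 matrix and a 2-vector,
                    each a candidate camera ego-motion
    affine_weights: probability of each ego-motion hypothesis
    """
    N = len(particles)
    # Draw one ego-motion hypothesis per particle from the weighted set
    idx = rng.choice(len(affines), size=N, p=affine_weights)
    out = np.empty_like(particles, dtype=float)
    for i, k in enumerate(idx):
        A, t = affines[k]
        out[i] = A @ particles[i] + t
    # Object dynamics modelled as additive Gaussian noise
    return out + rng.normal(0.0, noise, size=(N, 2))
```

In the full filter, the predicted particles would then be weighted by an observation likelihood and resampled, so that the correct ego-motion hypothesis is reinforced over time.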
An Appearance-Based Tracking Algorithm for Aerial Search and Rescue Purposes
The automation of the Wilderness Search and Rescue (WiSAR) task requires a high level of understanding of varied scenery. In addition, working in hostile and complex environments may delay the operation and consequently put human lives at risk. To
address this problem, Unmanned Aerial Vehicles (UAVs), which provide potential support to
conventional methods, are used. These vehicles are equipped with reliable human detection and
tracking algorithms, in order to find and track the bodies of victims in complex
environments, and with a robust control system to maintain safe distances from the detected bodies.
In this paper, a human detection method based on the color and depth data captured from onboard sensors
is proposed. Moreover, computing data association from the skeleton pose and a
visual appearance measurement allows multiple people to be tracked with invariance to the scale,
translation, and rotation of the point of view with respect to the target objects. The system has been
validated with real and simulation experiments, and the obtained results show the ability to track
multiple individuals even after long-term disappearances. Furthermore, the simulations present the
robustness of the implemented reactive control system as a promising tool for assisting the pilot to
perform approaching maneuvers in a safe and smooth manner.
This research is supported by the Madrid Community project SEGVAUTO 4.0 (P2018/EMT-4362), by the Spanish Government CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R), and by the Ministerio de Educación, Cultura y Deporte para la Formación de Profesorado Universitario (FPU14/02143). We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of the GPUs used for this research.
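The data-association step described in this abstract, matching existing tracks to new detections by a combined pose and appearance cost, can be sketched with a greedy nearest-cost assignment. The Euclidean descriptor distance and the `max_cost` gate used here are simplifying assumptions standing in for the paper's skeleton-pose and appearance measurement:

```python
import numpy as np

def associate(track_feats, det_feats, max_cost=0.5):
    """Greedy data association between tracks and detections using
    descriptor distances as the matching cost.

    track_feats: (T, D) one descriptor per existing track
    det_feats:   (M, D) one descriptor per new detection
    Returns a list of (track_idx, det_idx) matches.
    """
    # Pairwise Euclidean cost matrix between tracks and detections
    cost = np.linalg.norm(track_feats[:, None, :] - det_feats[None, :, :], axis=2)
    matches, used_t, used_d = [], set(), set()
    # Visit candidate pairs from cheapest to most expensive
    for t, d in sorted(np.ndindex(cost.shape), key=lambda td: cost[td]):
        if t in used_t or d in used_d or cost[t, d] > max_cost:
            continue
        matches.append((t, d))
        used_t.add(t)
        used_d.add(d)
    return matches
```

Unmatched tracks can be kept alive for a fixed number of frames, which is what makes re-association after long-term disappearances possible.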