Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps
Hyperspectral cameras can provide unique spectral signatures for consistently
distinguishing materials that can be used to solve surveillance tasks. In this
paper, we propose a novel real-time hyperspectral likelihood maps-aided
tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving
object tracking system generally consists of registration, object detection,
and tracking modules. We focus on the target detection part and remove the
necessity to build any offline classifiers and tune a large amount of
hyperparameters, instead learning a generative target model in an online manner
for hyperspectral channels ranging from visible to infrared wavelengths. The
key idea is that our adaptive fusion method combines likelihood maps from
multiple bands of hyperspectral imagery into a single, more distinctive
representation, increasing the margin between the mean values of foreground and
background pixels in the fused map. Experimental results show that the HLT not
only outperforms all established fusion methods but is also on par with current
state-of-the-art hyperspectral target tracking frameworks.
Comment: Accepted at the International Conference on Computer Vision and
Pattern Recognition Workshops, 201
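The fusion step described in the abstract can be sketched as a margin-weighted combination of per-band likelihood maps: bands that better separate foreground from background get more weight in the fused map. The weighting rule below is an illustrative assumption, not the paper's exact adaptive scheme.

```python
import numpy as np

def fuse_likelihood_maps(maps, fg_mask):
    """Fuse per-band likelihood maps into a single map, weighting each band
    by how well it separates foreground from background pixels.
    maps:    (B, H, W) array of per-band likelihoods in [0, 1].
    fg_mask: (H, W) boolean mask of the current target estimate.
    (Margin-based weighting is an illustrative stand-in for the paper's
    adaptive fusion rule.)"""
    weights = []
    for m in maps:
        # Foreground/background separation margin for this band.
        margin = m[fg_mask].mean() - m[~fg_mask].mean()
        weights.append(max(margin, 0.0))  # ignore bands that invert the target
    w = np.asarray(weights)
    w = w / w.sum() if w.sum() > 0 else np.full(len(maps), 1.0 / len(maps))
    return np.tensordot(w, maps, axes=1)  # (H, W) fused likelihood map
```

A band with no discriminative power receives zero weight, so the fused map's foreground/background margin is at least as large as that of the best single band under this scheme.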
Using wireless sensors and networks program for chemical particle propagation mapping and chemical source localization
Chemical source localization is a challenge for many researchers. It has extensive applications, such as anti-terrorist operations, the gas and oil industry, and environmental engineering. This dissertation used wireless sensors and sensor networks to build chemical particle propagation maps and localize chemical sources. First, the chemical particle propagation map is built using interpolation and extrapolation methods: the interpolation method recovers the chemical particle path through the sensors, and the extrapolation method estimates the path beyond the sensors. Together they compose the map over the whole considered area. Second, a sensor fusion algorithm is proposed. It smooths the chemical particle paths by integrating values from more sensors and updating the parameters. The updates include fusion between chemical sensors and wind sensors at the same position, and fusion among sensors at different positions. This algorithm improves the accuracy and efficiency of chemical particle mapping. Last, a reasoning system is implemented to detect the chemical source in the considered region once the chemical particle propagation map has been completed. This control scheme dynamically analyzes the data from the sensors and guides the search toward the goal. In this dissertation, the novel algorithm for modelling chemical propagation is programmed in Matlab. Compared with results from the computational fluid dynamics (CFD) software COMSOL, this algorithm achieves the same level of accuracy while requiring less computation time and memory. Simulations and experiments on detecting a chemical source in indoor and outdoor environments are presented in this dissertation --Abstract, page iii
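As a rough illustration of the interpolation step, scattered sensor readings can be spread onto a grid with inverse-distance weighting (IDW). The dissertation's exact interpolation scheme is not given in the abstract, so IDW stands in here as a common choice (the function name and parameters are hypothetical).

```python
import numpy as np

def idw_concentration_map(sensor_xy, readings, grid_x, grid_y, power=2.0):
    """Interpolate scattered chemical-sensor readings onto a regular grid
    using inverse-distance weighting (IDW) -- an illustrative stand-in for
    the dissertation's interpolation method.
    sensor_xy: (N, 2) sensor positions; readings: (N,) concentrations.
    Returns a (len(grid_y), len(grid_x)) concentration map."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)               # (M, 2) grid points
    d = np.linalg.norm(pts[:, None, :] - sensor_xy[None], axis=2)  # (M, N) distances
    d = np.maximum(d, 1e-9)                                        # avoid divide-by-zero
    w = 1.0 / d**power
    est = (w * readings).sum(axis=1) / w.sum(axis=1)               # weighted average
    return est.reshape(gy.shape)
```

A grid point lying on a sensor reproduces that sensor's reading exactly, while points between sensors blend the neighbouring readings, which matches the abstract's idea of recovering the concentration field through and between the sensors.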
Fast, Autonomous Flight in GPS-Denied and Cluttered Environments
One of the most challenging tasks for a flying robot is to autonomously
navigate between target locations quickly and reliably while avoiding obstacles
in its path, and with little to no a-priori knowledge of the operating
environment. This challenge is addressed in the present paper. We describe the
system design and software architecture of our proposed solution, and showcase
how all the distinct components can be integrated to enable smooth robot
operation. We provide critical insight on hardware and software component
selection and development, and present results from extensive experimental
testing in real-world warehouse environments. Experimental testing reveals that
our proposed solution can deliver fast and robust aerial robot autonomous
navigation in cluttered, GPS-denied environments.
Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field
Robotics
Reducing Object Detection Uncertainty from RGB and Thermal Data for UAV Outdoor Surveillance
Recent advances in Unmanned Aerial Vehicles (UAVs) have resulted in their
quick adoption for a wide range of civilian applications, including precision
agriculture, biosecurity, disaster monitoring and surveillance. UAVs offer
low-cost platforms with flexible hardware configurations, as well as an
increasing number of autonomous capabilities, including take-off, landing,
object tracking and obstacle avoidance. However, little attention has been paid
to how UAVs deal with object detection uncertainties caused by false readings
from vision-based detectors, data noise, vibrations, and occlusion. In most
situations, the relevance and understanding of these detections are delegated
to human operators, as many UAVs have limited cognition power to interact
autonomously with the environment. This paper presents a framework for
autonomous navigation under uncertainty in outdoor scenarios for small UAVs
using a probabilistic-based motion planner. The framework is evaluated with
real flight tests using a sub-2 kg quadrotor UAV and illustrated in a
victim-finding Search and Rescue (SAR) case study in forest/bushland. The navigation
problem is modelled using a Partially Observable Markov Decision Process
(POMDP), and solved in real time onboard the small UAV using Augmented Belief
Trees (ABT) and the TAPIR toolkit. Results from experiments using colour and
thermal imagery show that the proposed motion planner provides accurate victim
localisation coordinates, as the UAV has the flexibility to interact with the
environment and obtain clearer visualisations of any potential victims compared
to the baseline motion planner. Incorporating this system enables optimised UAV
surveillance operations by reducing false positive readings from
vision-based object detectors.
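At the core of planning under detection uncertainty is a Bayes update of the belief over the victim's location after each noisy detection. The sketch below is a minimal discrete version of that update, not the ABT/TAPIR solver used in the paper, and the detector rates are illustrative, not from the experiments.

```python
import numpy as np

def update_belief(belief, cell, detected, p_tp=0.8, p_fp=0.1):
    """Bayes update of a grid belief over the victim's location after the
    UAV observes one cell with a noisy detector.
    p_tp: P(detection | victim in cell), p_fp: P(detection | victim absent);
    both rates are illustrative assumptions, not values from the paper."""
    # Likelihood of the observation for every hypothesis about the victim's cell.
    like = np.full_like(belief, p_fp if detected else 1.0 - p_fp)
    like[cell] = p_tp if detected else 1.0 - p_tp
    post = belief * like
    return post / post.sum()  # renormalise to a probability distribution
```

Repeated observations of the same cell concentrate the belief there on detections and drain it on misses, which is what lets the POMDP planner trade off moving closer for a clearer view against covering new cells.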
High-Performance Testbed for Vision-Aided Autonomous Navigation for Quadrotor UAVs in Cluttered Environments
This thesis presents the development of an aerial robotic testbed based on Robot Operating System (ROS). The purpose of this high-performance testbed is to develop a system capable of performing robust navigation tasks using vision tools such as a stereo camera. While ensuring the computation of robot odometry, the system is also capable of sensing the environment using the same stereo camera. Hence, all the navigation tasks are performed using a stereo camera and an inertial measurement unit (IMU) as the main sensor suite. ROS is used as a framework for software integration due to its efficient communication and sensor interfaces. Moreover, it allows us to use C++, which performs efficiently, especially on embedded platforms. Combining ROS and C++ provides the necessary computational efficiency and tools to handle fast, real-time image processing and planning, which are vital parts of navigation and obstacle avoidance at this scale. The main application of this work is a real-time and efficient way to demonstrate vision-based navigation in UAVs. The proposed approach is developed for a quadrotor UAV that is capable of performing defensive maneuvers when obstacles are in its way, while constantly moving towards a user-defined final destination. Stereo depth computation adds a third axis to a two-dimensional image coordinate frame. This can be referred to as the depth image space or depth image coordinate frame. The idea of planning in this frame of reference is utilized along with certain precomputed action primitives. The formulation of these action primitives leads to a hybrid control law for feasible trajectory generation. Further, a proof of stability of this system is also presented.
The proposed approach keeps in view the fact that while performing fast maneuvers and obstacle avoidance simultaneously, many of the standard optimization approaches might not work in real time on board due to time and resource limitations. This leads to a need for the development of real-time techniques for vision-based autonomous navigation.
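The stereo depth computation the thesis builds on (adding a third axis to the image coordinate frame) follows the standard pinhole relation Z = fB/d. A minimal back-projection sketch, with all parameter values illustrative rather than taken from the testbed:

```python
def pixel_to_point(u, v, disparity_px, focal_px, baseline_m, cx, cy):
    """Back-project an image pixel (u, v) with a given disparity into the
    camera frame, giving the 3-D point used in a depth-image-space planner.
    Standard pinhole stereo model: Z = f * B / d; all parameters
    (focal length in pixels, baseline in metres, principal point) are
    illustrative, not the testbed's calibration."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    z = focal_px * baseline_m / disparity_px          # depth along optical axis
    x = (u - cx) * z / focal_px                       # lateral offset
    y = (v - cy) * z / focal_px                       # vertical offset
    return x, y, z
```

Since depth is inversely proportional to disparity, nearby obstacles produce large, well-resolved disparities, which is why planning directly in the depth image space works well for the fast, close-range avoidance maneuvers described above.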