295 research outputs found
FoundLoc: Vision-based Onboard Aerial Localization in the Wild
Robust and accurate localization for Unmanned Aerial Vehicles (UAVs) is an
essential capability to achieve autonomous, long-range flights. Current methods
either rely heavily on GNSS, suffer in visual localization from appearance
variation and stylistic differences between camera and reference imagery, or
operate under the assumption of a known initial pose. In this paper, we
develop a GNSS-denied localization approach for UAVs that
harnesses both Visual-Inertial Odometry (VIO) and Visual Place Recognition
(VPR) using a foundation model. The proposed vision-based pipeline relies
exclusively on a nadir-facing camera, an Inertial Measurement Unit (IMU), and
pre-existing satellite imagery for robust, accurate
localization in varied environments and conditions. Our system demonstrated
average localization accuracy within a -meter range, with a minimum error
below a meter, under real-world conditions marked by drastic changes in
environmental appearance and with no assumption of the vehicle's initial pose.
The method proves effective and robust, addressing the crucial need for
reliable UAV localization in GNSS-denied environments, while remaining
computationally efficient enough for deployment on resource-constrained
platforms.
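The VPR component of such pipelines typically reduces to nearest-neighbor retrieval: embed the current camera frame and the candidate satellite tiles with the same network, then rank tiles by cosine similarity. A minimal sketch of that retrieval step (the descriptors below are random stand-ins for foundation-model embeddings; this is not the paper's implementation):

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Unit-length descriptors make the dot product equal to cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def vpr_retrieve(query_desc, tile_descs, top_k=3):
    """Rank satellite tiles by cosine similarity of their global descriptors
    to the query camera frame's descriptor; return top-k indices and scores."""
    q = l2_normalize(query_desc)
    t = l2_normalize(tile_descs)
    sims = t @ q                       # one cosine score per tile
    return np.argsort(-sims)[:top_k], sims

# Toy example: 5 hypothetical 8-D tile descriptors, query close to tile 2.
rng = np.random.default_rng(0)
tiles = rng.normal(size=(5, 8))
query = tiles[2] + 0.05 * rng.normal(size=8)
idx, sims = vpr_retrieve(query, tiles)
print(idx[0])  # tile 2 should rank first
```

In a real system the retrieved tile's georeference anchors the VIO drift; the retrieval step itself stays this simple.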
Long-range UAV Thermal Geo-localization with Satellite Imagery
Onboard sensors, such as cameras and thermal sensors, have emerged as
effective alternatives to Global Positioning System (GPS) for geo-localization
in Unmanned Aerial Vehicle (UAV) navigation. Since GPS can suffer from signal
loss and spoofing problems, researchers have explored camera-based techniques
such as Visual Geo-localization (VG) using satellite RGB imagery. Additionally,
thermal geo-localization (TG) has become crucial for long-range UAV flights in
low-illumination environments. This paper proposes a novel thermal
geo-localization framework using satellite RGB imagery, which includes multiple
domain adaptation methods to address the limited availability of paired thermal
and satellite images. The experimental results demonstrate the effectiveness of
the proposed approach in achieving reliable thermal geo-localization
performance, even in thermal images with indistinct self-similar features. We
evaluate our approach on real data collected onboard a UAV. We also release the
code and Boson-nighttime, a dataset of paired satellite-thermal and
unpaired satellite images for thermal geo-localization with satellite imagery.
To the best of our knowledge, this work is the first to propose a thermal
geo-localization method using satellite RGB imagery in long-range flights. Comment: 8 pages, 6 figures, IROS 202
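Once the thermal-to-RGB domain gap is reduced, the matching step against the satellite map can be as simple as zero-mean normalized cross-correlation, which is invariant to gain and offset differences between modalities but not to deeper appearance changes (hence the need for domain adaptation). An illustrative sketch on synthetic data, not the paper's method:

```python
import numpy as np

def ncc_match(search, template):
    """Exhaustive zero-mean NCC: slide the template over the search image and
    return the offset (y, x) with the highest correlation score."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_score = (0, 0), -np.inf
    for y in range(search.shape[0] - th + 1):
        for x in range(search.shape[1] - tw + 1):
            w = search[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * tn
            score = float((wz * t).sum() / denom) if denom > 0 else 0.0
            if score > best_score:
                best, best_score = (y, x), score
    return best, best_score

# Synthetic check: the "thermal" patch is a gain/offset-distorted crop of the
# "satellite" image; zero-mean NCC is invariant to that distortion.
rng = np.random.default_rng(1)
sat = rng.random((40, 40))
thermal = 0.6 * sat[10:18, 22:30] + 0.2 + 0.01 * rng.normal(size=(8, 8))
best, score = ncc_match(sat, thermal)
print(best)  # the crop's true offset
```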
Real-Time Implementation of Vision-Aided Monocular Navigation for Small Fixed-Wing Unmanned Aerial Systems
The goal of this project was to develop and implement algorithms to demonstrate real-time positioning of a UAV using a monocular camera combined with previously collected orthorectified imagery. Unlike previous tests, this project did not utilize a full inertial navigation system (INS) for attitude, but instead had to rely on the attitude obtained by inexpensive commercial off-the-shelf (COTS) autopilots. The system consisted primarily of COTS components and open-source software, and was flown over Camp Atterbury, IN for a sequence of flight tests in Fall 2015. The system obtained valid solutions over much of the flight path, identifying features in the flight image, matching those features with a database of features, and then solving both a 6DOF solution and an attitude-aided 3DOF solution. The tests demonstrated that such attitude aiding is beneficial, since the horizontal DRMS of the 6DOF solution was 59 m, whereas the 3DOF solution DRMS was 15 m. Post-processing was done to improve the algorithm and correct for system errors, obtaining a 3DOF solution DRMS of 8.22 m. Overall, this project increased our understanding of the capabilities and limitations of real-time vision-aided navigation, and demonstrated that such navigation is possible on a relatively small platform with limited computational power.
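The attitude-aiding result has a simple geometric reading: once attitude is known, each feature matched against the orthorectified database yields a world-frame bearing toward a known ground point, and position becomes a linear least-squares problem in only three unknowns. A minimal sketch of such a 3DOF step on synthetic data (not the project's solver; landmark values are illustrative):

```python
import numpy as np

def position_from_bearings(landmarks, bearings):
    """Least-squares camera position given known attitude.

    Each known landmark L_i with world-frame unit bearing d_i constrains the
    camera position p to the ray through L_i with direction d_i:
        (I - d_i d_i^T)(L_i - p) = 0
    Stacking these rank-2 constraints gives a 3x3 linear system in p.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for L, d in zip(landmarks, bearings):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ L
    return np.linalg.solve(A, b)

# Synthetic check: recover a known position from three landmark bearings.
p_true = np.array([100.0, 200.0, 50.0])
landmarks = [np.array(v, dtype=float)
             for v in ([150, 260, 0], [40, 180, 0], [120, 120, 0])]
bearings = [(L - p_true) / np.linalg.norm(L - p_true) for L in landmarks]
p_hat = position_from_bearings(landmarks, bearings)
print(np.round(p_hat, 3))
```

With attitude supplied externally, attitude error no longer couples into the position estimate, which is consistent with the 3DOF solution outperforming the 6DOF one above.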
Map-Based Localization for Unmanned Aerial Vehicle Navigation
Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. Many localization solutions exist for GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, such as a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; the amount of data, and the time spent collecting it, is reduced because there is no need to re-observe the same areas multiple times. This dissertation proposes a solution for fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments.
Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture, because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed on tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation increases the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while reducing the number of tracking losses throughout the image sequence.
It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance. Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP. The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result with ViSP to 2 cm positional accuracy in large indoor environments.
The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame. Initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register the reference model views against a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Further, the number of incorrect matches was reduced by 80%.
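Geometric hashing itself is a general model-indexing technique: offline, every ordered basis pair of model features indexes the remaining features by their basis-invariant coordinates; online, one scene basis votes for the model bases consistent with it. A compact 2D point-feature sketch under a similarity transform (illustrative only; the dissertation's variant operates on vertical line features and adds its own enhancement):

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def basis_coords(points, b0, b1):
    """Express points in the similarity-invariant frame defined by basis
    (b0, b1): b0 maps to (0, 0) and b1 to (1, 0)."""
    vx = b1 - b0
    vy = np.array([-vx[1], vx[0]])      # perpendicular axis
    scale = float(vx @ vx)
    rel = points - b0
    return np.stack([rel @ vx, rel @ vy], axis=1) / scale

def build_table(model, step=0.25):
    """Offline: hash quantized invariant coordinates for every basis pair."""
    table = defaultdict(list)
    for i, j in permutations(range(len(model)), 2):
        for k, c in enumerate(basis_coords(model, model[i], model[j])):
            if k not in (i, j):
                table[tuple(np.round(c / step).astype(int))].append((i, j))
    return table

def match(table, scene, b0, b1, step=0.25):
    """Online: vote for model basis pairs consistent with one scene basis."""
    votes = defaultdict(int)
    for k, c in enumerate(basis_coords(scene, scene[b0], scene[b1])):
        if k not in (b0, b1):
            for basis in table[tuple(np.round(c / step).astype(int))]:
                votes[basis] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else (None, 0)

# Toy check: a rotated, scaled, translated copy of the model should vote for
# the corresponding model basis pair.
model = np.array([[0, 0], [4, 0], [4, 3], [1, 2]], dtype=float)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scene = (model @ R.T) * 2.5 + np.array([10.0, -5.0])
best_basis, n_votes = match(build_table(model), scene, 0, 1)
print(best_basis)  # expect basis (0, 1)
```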
Vision-based localization methods under GPS-denied conditions
This paper reviews vision-based localization methods in GPS-denied
environments and classifies the mainstream methods into Relative Vision
Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss
the broad application of optical flow in feature extraction-based Visual
Odometry (VO) solutions and introduce advanced optical flow estimation methods.
For AVL, we review recent advances in Visual Simultaneous Localization and
Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman
Filter (EKF) based methods. We also introduce the application of offline map
registration and lane vision detection schemes to achieve Absolute Vision
Localization. This paper compares the performance and applications of
mainstream methods for visual localization and provides suggestions for future
studies. Comment: 32 pages, 15 figures
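As a concrete reference point for the optical-flow building block behind many of the VO methods surveyed, the classic Lucas-Kanade estimate solves a small least-squares system on image gradients within a window. A single-window numpy sketch (illustrative; practical VO uses pyramidal, iterative variants):

```python
import numpy as np

def lk_flow(I0, I1, y, x, win=15):
    """Estimate the (dy, dx) translation of a window centered at (y, x)
    between frames I0 and I1 via the Lucas-Kanade normal equations."""
    h = win // 2
    p0 = I0[y - h:y + h + 1, x - h:x + h + 1]
    p1 = I1[y - h:y + h + 1, x - h:x + h + 1]
    gy, gx = np.gradient(p0)            # spatial gradients of the first frame
    gt = p1 - p0                         # temporal difference
    A = np.stack([gy.ravel(), gx.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -gt.ravel(), rcond=None)
    return d                             # (dy, dx)

# Synthetic check: a smooth blob shifted one pixel to the right.
yy, xx = np.mgrid[0:64, 0:64]
def blob(cy, cx):
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 4.0 ** 2))
I0 = blob(32, 32)
I1 = blob(32, 33)
dy, dx = lk_flow(I0, I1, 32, 32)
print(round(float(dx), 2), round(float(dy), 2))  # roughly (1, 0)
```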
UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments
The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.
UAV-Enabled Surface and Subsurface Characterization for Post-Earthquake Geotechnical Reconnaissance
Major earthquakes continue to cause significant damage to infrastructure systems and the loss of life (e.g., 2016 Kaikoura, New Zealand; 2016 Muisne, Ecuador; 2015 Gorkha, Nepal). Following an earthquake, costly human-led reconnaissance studies are conducted to document structural or geotechnical damage and to collect perishable field data. Such efforts are faced with many daunting challenges including safety, resource limitations, and inaccessibility of sites. Unmanned Aerial Vehicles (UAVs) represent a transformative tool for mitigating the effects of these challenges and generating spatially distributed and overall higher quality data compared to current manual approaches. UAVs enable multi-sensor data collection and offer a computational decision-making platform that could significantly influence post-earthquake reconnaissance approaches. As demonstrated in this research, UAVs can be used to document earthquake-affected geosystems by creating 3D geometric models of target sites, generate 2D and 3D imagery outputs to perform geomechanical assessments of exposed rock masses, and characterize subsurface field conditions using techniques such as in situ seismic surface wave testing. UAV-camera systems were used to collect images of geotechnical sites to model their 3D geometry using Structure-from-Motion (SfM). Key examples of lessons learned from applying UAV-based SfM to reconnaissance of earthquake-affected sites are presented. The results of 3D modeling and the input imagery were used to assess the mechanical properties of landslides and rock masses. An automatic and semi-automatic 2D fracture detection method was developed and integrated with a 3D SfM imaging framework. A UAV was then integrated with seismic surface wave testing to estimate the shear wave velocity of the subsurface materials, which is a critical input parameter in seismic response of geosystems.
The UAV was outfitted with a payload release system to autonomously deliver an impulsive seismic source to the ground surface for multichannel analysis of surface waves (MASW) tests. The UAV was found to offer a mobile but higher-energy source than conventional seismic surface wave techniques and is the foundational component for developing the framework for fully-autonomous in situ shear wave velocity profiling.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/145793/1/wwgreen_1.pd