485 research outputs found
Review and classification of vision-based localisation techniques in unknown environments
This study presents a review of the state of the art and a novel classification of current vision-based localisation techniques in unknown environments. Because of the progress made in computer vision, it is now possible to consider vision-based systems as promising navigation means that can complement traditional navigation sensors such as global navigation satellite systems (GNSSs) and inertial navigation systems. This study aims to review techniques employing a camera as a localisation sensor, provide a classification of these techniques, and introduce schemes that exploit video information within a multi-sensor system. A general model is needed to better compare existing techniques, in order to decide which approach is appropriate and where the axes of innovation lie. In addition, existing classifications only consider vision as a standalone tool and do not treat video as one sensor among others. The focus is on scenarios where no a priori knowledge of the environment is provided; these scenarios are the most challenging, since the system has to cope with objects as they appear in the scene without any prior information about their expected position.
Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition
The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. In total, 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable, light-weight sensors, without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
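As a concrete illustration of the scoring described above, the third-quartile metric can be sketched in a few lines. The per-floor penalty that IPIN applies is not stated in this summary, so the 15 m value below is an assumption purely for illustration:

```python
import numpy as np

def ipin_accuracy_score(horizontal_errors_m, floor_errors, floor_penalty_m=15.0):
    """Third quartile (75th percentile) of a combined error metric.

    horizontal_errors_m : per-landmark 2D positioning errors in metres
    floor_errors        : per-landmark absolute floor-detection errors (0 if correct)
    floor_penalty_m     : assumed penalty per wrongly detected floor (illustrative)
    """
    combined = np.asarray(horizontal_errors_m) + floor_penalty_m * np.asarray(floor_errors)
    return float(np.percentile(combined, 75))

# Example: 5 surveyed landmarks, one reported on the wrong floor
score = ipin_accuracy_score([0.8, 2.1, 1.5, 3.0, 0.9], [0, 0, 1, 0, 0])
```

Because the score is a quartile rather than a mean, a single gross floor error does not dominate the result, which matches the robustness intent of the competition metric.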
UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments
The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity for ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.
A LiDAR-Inertial SLAM Tightly-Coupled with Dropout-Tolerant GNSS Fusion for Autonomous Mine Service Vehicles
Multi-modal sensor integration has become a crucial prerequisite for real-world navigation systems. Recent studies have reported successful deployments of such systems in many fields. However, navigation tasks in mine scenes remain challenging due to satellite signal dropouts, degraded perception, and observation degeneracy. To solve this problem, we propose a LiDAR-inertial odometry method that utilizes both a Kalman filter and graph optimization. The front-end consists of multiple LiDAR-inertial odometries running in parallel, in which laser points, IMU, and wheel odometer information are tightly fused in an error-state Kalman filter. Instead of the commonly used feature points, we employ surface elements (surfels) for registration. The back-end constructs a pose graph and jointly optimizes the pose estimates from the inertial sensors, LiDAR odometry, and the global navigation satellite system (GNSS). Since the vehicle has a long operation time inside the tunnel, the large accumulated drift may not be fully corrected by the GNSS measurements. We therefore leverage a loop-closure-based re-initialization process to achieve full alignment. In addition, system robustness is improved through the handling of data loss, stream consistency, and estimation error. The experimental results show that our system tolerates long-period degeneracy through the cooperation of different LiDARs and surfel registration, achieving meter-level accuracy even over tens of minutes of operation during GNSS dropouts.
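The error-state Kalman fusion in the front-end can be sketched in a highly simplified 1D form: a nominal (position, velocity) state is propagated, a wheel-odometer speed measurement updates the error state, and the error estimate is injected back and reset. The dynamics, noise values, and measurement model below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # error-state transition (pos, vel)
Q = np.diag([1e-4, 1e-3])               # assumed process noise
H = np.array([[0.0, 1.0]])              # wheel odometer observes velocity
R = np.array([[0.05 ** 2]])             # assumed odometer noise variance

x_nom = np.array([0.0, 0.8])            # nominal state: wrong initial speed
P = np.eye(2) * 0.1                     # error-state covariance
true_speed = 1.0                        # wheel odometer reading (noise-free here)

for _ in range(50):
    # propagate the nominal state and error covariance
    x_nom = F @ x_nom
    P = F @ P @ F.T + Q
    # error-state update with the wheel-odometer speed measurement
    y = np.array([true_speed]) - H @ x_nom          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    dx = (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    # inject the error estimate into the nominal state and reset it (ESKF step)
    x_nom = x_nom + dx
```

The injection-and-reset step at the end of each iteration is what distinguishes an error-state filter from a plain Kalman filter: the error state stays small, which keeps the linearization valid for the attitude states in the full 3D case.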
Towards secure & robust PNT for automated systems
This dissertation makes four contributions in support of secure and robust position, navigation, and timing (PNT) for automated systems. The first two relate to PNT security while the latter two address robust positioning for automated ground vehicles.
The first contribution is a fundamental theory for provably-secure clock synchronization between two agents in a distributed automated system. All one-way synchronization protocols, such as those based on the Global Positioning System (GPS) and other Global Navigation Satellite Systems (GNSS), are shown to be vulnerable to man-in-the-middle delay attacks. This contribution is the first to identify the necessary and sufficient conditions for provably secure clock synchronization.
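The vulnerability of one-way protocols, and why a two-way exchange helps, can be illustrated numerically. All timing values below are hypothetical; the computation mirrors the standard NTP-style offset estimate:

```python
# One-way vs two-way clock synchronization under a delay attack.
true_offset = 0.004      # receiver clock is 4 ms ahead of sender (assumed)
path_delay = 0.010       # one-way propagation delay (assumed)
attack_delay = 0.050     # delay injected by a man-in-the-middle

# One-way protocol: the receiver sees t_rx = t_tx + delay + offset and must
# assume a nominal delay, so the injected delay is silently absorbed into
# the offset estimate -- nothing for the receiver to detect.
t_tx = 100.0
t_rx = t_tx + path_delay + attack_delay + true_offset
est_offset_oneway = t_rx - t_tx - path_delay

# Two-way exchange: offset = ((t2 - t1) + (t3 - t4)) / 2.
# Symmetric path delays cancel; an asymmetric delay attack instead inflates
# the measured round-trip time, which is at least observable.
t1 = 100.0                                  # sender transmit
t2 = t1 + path_delay + true_offset          # receiver receive
t3 = t2 + 0.001                             # receiver transmit
t4 = t3 - true_offset + path_delay          # sender receive
est_offset_twoway = ((t2 - t1) + (t3 - t4)) / 2.0
```

Here the one-way estimate is shifted by the full attack delay, while the two-way estimate recovers the true offset; the dissertation's contribution is to characterize exactly when such guarantees can be made provable.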
The second contribution, also related to PNT security, is a three-year study of the world-wide GPS interference landscape based on data from a dual-frequency GNSS receiver operating continuously on the International Space Station (ISS). This work is the first publicly-reported space-based survey of GNSS interference, and unveils previously-unreported GNSS interference activity.
The third contribution is a novel ground vehicle positioning technique that is robust to GNSS signal blockage, poor lighting conditions, and adverse weather events such as heavy rain and dense fog. The technique relies on sensors that are commonly available on automated vehicles and are insensitive to lighting and inclement weather: automotive radar, low-cost inertial measurement units (IMUs), and GNSS. Remarkably, it is shown that, given a prior radar map, the proposed technique operating on data from off-the-shelf all-weather automotive sensors can maintain sub-50-cm horizontal position accuracy during 60 min of GNSS-denied driving in downtown Austin, TX.
This dissertation’s final contribution is an analysis and demonstration of the feasibility of crowd-sourced digital mapping for automated vehicles. Localization techniques, such as the one described in the previous contribution, rely on such digital maps for accuracy and robustness. A key enabler for large-scale, up-to-date maps is enlisting the help of the very consumer vehicles that need the map to build and update it. A method for fusing multi-session vision data into a unified digital map is developed. The asymptotic limit of such a map’s globally-referenced position accuracy is explored for the case in which the mapping agents rely on low-cost GNSS receivers performing standard code-phase-based navigation. Experimental validation along a semi-urban route shows that low-cost consumer vehicles incrementally tighten the accuracy of the jointly-optimized digital map over time, enough to support sub-lane-level positioning in a global frame of reference.
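The intuition behind the asymptotic tightening can be checked with a toy simulation, under the simplifying assumption that each mapping session contributes an independent, zero-mean error to a globally-referenced map point. The averaged position then tightens roughly as 1/sqrt(N) in the number of sessions; real code-phase errors are partly correlated (multipath, atmosphere), so this is an optimistic bound, not the dissertation's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
true_pos = np.array([10.0, -3.0])        # metres, hypothetical map point
sigma = 2.0                              # assumed per-session error, metres

errors = []
for n_sessions in (1, 16, 256):
    # each session contributes one noisy globally-referenced fix
    fixes = true_pos + rng.normal(0.0, sigma, size=(n_sessions, 2))
    # the "map" estimate is the joint (here: averaged) position
    errors.append(float(np.linalg.norm(fixes.mean(axis=0) - true_pos)))
```

With 256 sessions, metre-level per-session noise shrinks to decimetre-level map error, which is the regime needed for sub-lane-level positioning.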
3D LiDAR Based SLAM System Evaluation with Low-Cost Real-Time Kinematics GPS Solution
Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in industrial environments and in field robotics. This paper describes the setup of a robotic platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. A configuration using the mobile robot Husky A200 and a LiDAR (light detection and ranging) sensor was used to implement the setup. To verify the proposed setup, different scan matching methods for odometry determination in indoor and outdoor environments were tested. An assessment of the accuracy of the baseline 3D-SLAM system and the selected evaluation system is presented by comparing different scenarios and test situations. It was shown that hdl_graph_slam, in combination with the OS1 LiDAR and the scan matching algorithms FAST_GICP and FAST_VGICP, achieves good mapping results with accuracies of up to 2 cm.
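A common metric for such an evaluation against an RTK-GPS reference is the absolute trajectory error (ATE). The sketch below shows its RMSE form under the assumption that the estimated and reference trajectories are already time-associated and expressed in a common frame (the paper's exact scoring procedure may differ):

```python
import numpy as np

def ate_rmse(estimated, reference):
    """RMSE of per-pose position differences between two aligned trajectories."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diffs = np.linalg.norm(estimated - reference, axis=1)  # per-pose error
    return float(np.sqrt(np.mean(diffs ** 2)))

# Toy 2D trajectories: one pose is off by 0.1 m laterally
rmse = ate_rmse([[0.0, 0.0], [1.0, 0.1], [2.0, 0.0]],
                [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
```

In practice the two trajectories are first rigidly aligned (e.g. with the Umeyama method) before computing the RMSE, so that the score measures map consistency rather than the choice of origin.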
GNSS/Multi-Sensor Fusion Using Continuous-Time Factor Graph Optimization for Robust Localization
Accurate and robust vehicle localization in highly urbanized areas is challenging, as sensors are often corrupted in these complicated, large-scale environments. This paper introduces GNSS-FGO, an online global trajectory estimator that fuses GNSS observations with multiple other sensor measurements for robust vehicle localization. In GNSS-FGO, we fuse asynchronous sensor measurements into the graph using a continuous-time trajectory representation based on Gaussian process regression. This enables querying states at arbitrary timestamps, so sensor observations are fused without requiring strict state and measurement synchronization; the proposed method thus presents a generalized factor graph for multi-sensor fusion. To evaluate and study different GNSS fusion strategies, we fuse GNSS measurements in loose and tight coupling with a speed sensor, an IMU, and LiDAR odometry. We employed datasets from measurement campaigns in Aachen, Duesseldorf, and Cologne in experimental studies and present comprehensive discussions on sensor observations, smoother types, and hyperparameter tuning. Our results show that the proposed approach enables robust trajectory estimation in dense urban areas, where classic multi-sensor fusion methods fail due to sensor degradation. In a test sequence containing a 17 km route through Aachen, the proposed method achieves a mean 2D positioning error of 0.19 m for loosely coupled GNSS fusion and 0.48 m when fusing raw GNSS observations with LiDAR odometry in tight coupling.
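The continuous-time idea can be sketched in one dimension: given state estimates at discrete timestamps, Gaussian-process regression lets the trajectory be queried at any timestamp, so an asynchronous measurement needs no explicit interpolation bookkeeping. An RBF kernel is assumed below purely for illustration; GNSS-FGO uses a motion-model-driven GP prior, not this kernel:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two sets of timestamps."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

t_knots = np.array([0.0, 1.0, 2.0, 3.0])   # timestamps of estimated states
x_knots = np.array([0.0, 1.0, 4.0, 9.0])   # 1D positions at those states

def query(t_query, noise=1e-6):
    """GP posterior mean of the trajectory at an arbitrary timestamp."""
    K = rbf(t_knots, t_knots) + noise * np.eye(len(t_knots))
    k_star = rbf(np.atleast_1d(float(t_query)), t_knots)
    return (k_star @ np.linalg.solve(K, x_knots)).item()

x_mid = query(1.5)   # state at a timestamp with no direct estimate
```

A factor for a measurement arriving at t = 1.5 can then be attached to the interpolated state, which is exactly what makes strict measurement synchronization unnecessary.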
Map-Based Localization for Unmanned Aerial Vehicle Navigation
Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. Many localization solutions exist for GNSS-denied environments, drawing on a variety of technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. Such solutions were avoided here because cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; this reduces both the amount of data and the time spent collecting it, since the same areas need not be re-observed multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for UAV navigation in structured environments.
Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture, because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed in tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation is to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while the number of tracking losses throughout the image sequence was reduced.
It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance. Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases the accuracy of pose estimates relative to ViSP. The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm in large indoor environments.
The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame; initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method benefits from vertical line matching to accomplish a registration procedure of the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%, while the number of incorrect matches was reduced by 80%.
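Geometric hashing, the matching scheme named above, can be sketched on 2D point features (the dissertation matches vertical lines; points are used here only to keep the example short). Offline, every ordered basis pair of model points indexes the remaining points by their basis-invariant coordinates; online, a basis chosen from the scene votes for consistent model bases:

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def basis_coords(p, b0, b1):
    """Coordinates of p in the frame defined by basis points b0, b1.

    Invariant under rotation and translation of the whole point set.
    """
    u = b1 - b0
    v = np.array([-u[1], u[0]])             # perpendicular axis
    d = u @ u
    return ((p - b0) @ u / d, (p - b0) @ v / d)

def build_table(model, digits=3):
    """Offline stage: hash every non-basis point under every ordered basis."""
    table = defaultdict(list)
    for i, j in permutations(range(len(model)), 2):
        for k, p in enumerate(model):
            if k in (i, j):
                continue
            key = tuple(np.round(basis_coords(p, model[i], model[j]), digits))
            table[key].append((i, j))
    return table

model = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [0.5, 2.0]])
table = build_table(model)

# Online stage: the same points under an unknown rotation and translation.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
scene = model @ R.T + np.array([5.0, -2.0])

votes = defaultdict(int)
b0, b1 = scene[0], scene[1]                 # one candidate scene basis
for k in range(2, len(scene)):
    key = tuple(np.round(basis_coords(scene[k], b0, b1), 3))
    for basis in table.get(key, []):
        votes[basis] += 1                   # vote for each consistent model basis
best = max(votes, key=votes.get)
```

The winning model basis, together with the scene basis, fixes the registration; in practice quantized hash bins and multiple trial bases make the voting robust to clutter and missing features.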