
    Survey on Recent Advances in Integrated GNSSs Towards Seamless Navigation Using Multi-Sensor Fusion Technology

    During the past few decades, the presence of global navigation satellite systems (GNSSs) such as GPS, GLONASS, Beidou and Galileo has facilitated positioning, navigation and timing (PNT) for various outdoor applications. With the rapid increase in the number of orbiting satellites per GNSS, enhancements in satellite-based augmentation systems (SBASs) such as EGNOS and WAAS, as well as the commissioning of new GNSS constellations, PNT capabilities are being pushed to new frontiers. Additionally, recent developments in precise point positioning (PPP) and real-time kinematic (RTK) algorithms have made carrier-phase precision positioning feasible up to three-dimensional localization. With the rapid growth of Internet of Things (IoT) applications, seamless navigation has become crucial for numerous PNT-dependent applications, especially in sensitive fields such as safety and industrial applications. Throughout the years, GNSSs have maintained acceptable PNT performance in RTK and PPP applications; however, GNSS experiences major challenges in complicated signal environments. In many scenarios, the GNSS signal suffers deterioration due to multipath fading and attenuation in densely obscured environments containing massive obstructions. Recently, there has been a growing demand, e.g. in the autonomous-things domain, for reliable systems that accurately estimate position, velocity and time (PVT) observables. In many applications, such demand also extends to retrieving the six degrees of freedom (6-DOF: x, y, z, roll, pitch, and heading) movements of the target anchors. Numerous modern applications benefit from precise PNT solutions, such as unmanned aerial vehicles (UAV), automatic guided vehicles (AGV) and intelligent transportation systems (ITS). Hence, multi-sensor fusion technology has become vital in seamless navigation systems owing to its complementary capabilities to GNSSs. Fusion-based positioning in multi-sensor technology uses measurements from multiple sensors, in addition to the primary GNSS, for further refinement, which results in high-precision, less erroneous localization. Inertial navigation systems (INSs) and their inertial measurement units (IMUs) are the most commonly used technologies for augmenting GNSS in multi-sensor integrated systems. In this article, we survey the most recent literature on multi-sensor GNSS technology for seamless navigation. We provide an overall perspective on the advantages, challenges and recent developments of the fusion-based GNSS navigation realm, and analyze the gap between scientific advances and commercial offerings. INS/GNSS and IMU/GNSS systems have proven to be very reliable in GNSS-denied environments where satellite signal degradation is at its peak, which is why both integrated systems are abundant in the relevant literature. In addition, light detection and ranging (LiDAR) systems are widely adopted in the literature for their capability to provide 6-DOF estimates to mobile vehicles and autonomous robots. LiDARs are very accurate systems; however, their high initial cost makes them unsuitable for low-cost positioning. Moreover, several other techniques from the radio frequency (RF) spectrum are utilized in multi-sensor systems, such as cellular networks, WiFi, ultra-wideband (UWB) and Bluetooth.
Cellular-based systems are well suited to outdoor navigation applications, while WiFi-based, UWB-based and Bluetooth-based systems are efficient in indoor positioning systems (IPS). However, to achieve reliable PVT estimation in multi-sensor GNSS navigation, optimal algorithms should be developed to mitigate the estimation errors resulting from non-line-of-sight (NLOS) GNSS situations. Examples of the most commonly used algorithms for trilateration-based positioning are Kalman filters, weighted least squares (WLS), particle filters (PF) and many hybrid algorithms that combine two or more of these. In this paper, the reviewed articles under study and comparison are presented by highlighting their motivation, implementation methodology, modelling and experiments. They are then assessed with respect to the published results, focusing on achieved accuracy, robustness and overall implementation cost-benefit as performance metrics. Our survey assesses the most promising, highly ranked and recent articles that offer insights into the future of GNSS technology combined with multi-sensor fusion. © 2021 The Authors. Published by ION.
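    To make the trilateration idea concrete, the sketch below shows a generic iterative weighted least squares (WLS) position fix from range measurements to beacons at known positions. It is an illustrative example only; the anchor coordinates, noise level and weights are hypothetical and it is not an implementation from any of the surveyed papers.

```python
import numpy as np

def wls_trilateration(anchors, ranges, weights, x0=None, iters=10):
    """Iterative weighted least squares (WLS) position fix from range
    measurements to anchors/beacons at known positions."""
    x = anchors.mean(axis=0) if x0 is None else np.asarray(x0, float)
    W = np.diag(weights)
    for _ in range(iters):
        diffs = x - anchors                    # (N, 3)
        pred = np.linalg.norm(diffs, axis=1)   # predicted ranges
        H = diffs / pred[:, None]              # Jacobian of range w.r.t. position
        r = ranges - pred                      # range residuals
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-6:
            break
    return x

# Hypothetical geometry: four anchors at known positions, noisy ranges to a target.
anchors = np.array([[0.0, 0.0, 2.0], [50.0, 0.0, 10.0],
                    [0.0, 50.0, 10.0], [50.0, 50.0, 20.0]])
truth = np.array([20.0, 30.0, 1.5])
ranges = np.linalg.norm(anchors - truth, axis=1) + np.random.normal(0.0, 0.1, 4)
print(wls_trilateration(anchors, ranges, weights=np.ones(4)))
```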

    A Study on UWB-Aided Localization for Multi-UAV Systems in GNSS-Denied Environments

    Unmanned Aerial Vehicles (UAVs) have seen increased penetration in industrial applications in recent years. Some of those applications have to be carried out in GNSS-denied environments. For this reason, several localization systems have emerged as alternatives to GNSS-based systems, such as Lidar and Visual Odometry, Inertial Measurement Units (IMUs), and, over the past years, also UWB-based systems. UWB technology has increased its popularity in the robotics field due to its highly accurate distance estimation from ranging measurements of wireless signals, even in non-line-of-sight conditions. However, the applicability of most UWB-based localization systems is limited because they rely on a fixed set of nodes, named anchors, which require prior calibration. In this thesis, we present a localization system based on UWB technology with a built-in collaborative algorithm for the online autocalibration of the anchors. This autocalibration method enables the anchors to be movable and thus to be used in ad-hoc and dynamic deployments. The system is based on Decawave's DWM1001 UWB transceivers. Compared to Decawave's autopositioning algorithm, we drastically reduce the calibration time while increasing accuracy. We provide both experimental measurements and simulation results to demonstrate the usability of this algorithm. We also present a comparison between our UWB-based system and other non-GNSS localization systems for UAV positioning in indoor environments.
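    As a rough illustration of how relative anchor positions can be recovered from inter-anchor ranges (the thesis's collaborative autocalibration algorithm is more involved than this), the sketch below applies classical multidimensional scaling to a hypothetical pairwise range matrix; the room geometry and noise level are assumptions.

```python
import numpy as np

def mds_anchor_positions(pairwise_dists, dim=2):
    """Recover relative anchor coordinates (up to rotation/translation/reflection)
    from a matrix of inter-anchor range measurements using classical MDS."""
    D2 = np.asarray(pairwise_dists, float) ** 2
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ D2 @ J                        # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:dim]        # keep the largest eigenvalues
    scale = np.sqrt(np.maximum(eigvals[idx], 0.0))
    return eigvecs[:, idx] * scale               # (n, dim) relative coordinates

# Hypothetical deployment: four anchors in a 10 m x 8 m room with noisy ranging.
true_anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])
d = np.linalg.norm(true_anchors[:, None, :] - true_anchors[None, :, :], axis=2)
d += np.random.normal(0.0, 0.05, d.shape)
d = (d + d.T) / 2.0
np.fill_diagonal(d, 0.0)
print(mds_anchor_positions(d))
```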

    Introducing autonomous aerial robots in industrial manufacturing

    Although ground robots have been successfully used for many years in manufacturing, the capability of aerial robots to agilely navigate the often sparse and static upper part of factories makes them suitable for performing tasks of interest in many industrial sectors. This paper presents the design, development, and validation of a fully autonomous aerial robotic system for manufacturing industries. It includes modules for accurate pose estimation without using a Global Navigation Satellite System (GNSS), autonomous navigation, radio-based localization, and obstacle avoidance, among others, providing a fully onboard solution capable of autonomously performing complex tasks in dynamic indoor environments, in which all necessary sensors, electronics, and processing are on the robot. It was developed to fulfill two use cases relevant to many industries: light object logistics and missing tool search. The presented robotic system, functionalities, and use cases have been extensively validated at Technology Readiness Level 7 (TRL-7) in the Centro Bahía de Cádiz (CBC) Airbus D&S factory under fully working conditions. Funding: Comisión Europea 60884; Horizonte 2020 (Unión Europea) 871479; Plan Nacional de I+D+I DPI2017-8979-

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; thus, the amount of data and the time spent collecting data are reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed of tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation is to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance.
Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP. The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm positional accuracies in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame. Initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method benefits from vertical line matching to accomplish a registration procedure of the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Further, the number of incorrect matches was reduced by 80%.
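    For readers unfamiliar with map-based camera pose estimation, the snippet below shows a generic perspective-n-point (PnP) solve with OpenCV that recovers a camera pose from 2D-3D correspondences against a known model. The model points, intrinsics and ground-truth pose are hypothetical, and this point-based sketch is not the dissertation's line-matching tracker.

```python
import numpy as np
import cv2

# Hypothetical 3D model points (e.g. corners on a building facade, in metres).
model_pts = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 0],
                      [1, 1, 0.5], [3, 1, 0.5], [3, 2, 0.5], [1, 2, 0.5]], float)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])               # assumed camera intrinsics

# Simulate the image observations from a known ground-truth camera pose.
rvec_true = np.array([[0.05], [0.10], [0.0]])
tvec_true = np.array([[-2.0], [-1.5], [8.0]])
img_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, None)

# Recover the camera pose from the 2D-3D correspondences (PnP).
ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())         # should be close to the ground truth
```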

    Comparative Study of Indoor Navigation Systems for Autonomous Flight

    Recently, Unmanned Aerial Vehicles (UAVs) have attracted the attention of society and researchers due to their capability to perform in economic, scientific and emergency scenarios, and they are being employed in a large number of applications, especially in hostile environments. They can operate autonomously for both indoor and outdoor applications, mainly including search and rescue, manufacturing, forest fire tracking, remote sensing, etc. In both environments, precise localization plays a critical role in achieving high-performance flight and interacting with surrounding objects. However, in indoor areas where the Global Navigation Satellite System (GNSS) signal is degraded or denied, it becomes challenging to control a UAV autonomously, especially where obstacles are unidentified. A large number of techniques using various technologies have been proposed to overcome these limitations. This paper provides a comparison of such existing solutions and technologies, with their strengths and limitations. Further, a summary of the current research status, with unresolved issues and opportunities, is provided to give research directions to researchers with similar interests.

    The simultaneous localization and mapping (SLAM): An overview

    Positioning is a need for many applications related to mapping and navigation, in both civilian and military domains. Significant developments in satellite-based techniques, sensors, telecommunications, computer hardware and software, image processing, etc. have contributed to solving the positioning problem efficiently and instantaneously. Accordingly, these developments have empowered the applications and advancement of autonomous navigation. One of the most interesting positioning techniques developed is what is called in robotics the Simultaneous Localization and Mapping (SLAM). The solution of the SLAM problem has witnessed rapid improvement in the last decades, using either active sensors like RAdio Detection And Ranging (Radar) and Light Detection and Ranging (LiDAR) or passive sensors like cameras. Positioning and mapping is certainly one of the main tasks of Geomatics engineers, and it is therefore of high importance for them to understand the SLAM topic, which is not easy because of the huge body of documentation and algorithms available and the variety of SLAM solutions in terms of mathematical models, complexity, sensors used, and types of applications. In this paper, a clear and simplified explanation of SLAM is introduced from a Geomatics viewpoint, avoiding the complicated algorithmic details behind the presented techniques. In this way, a general overview of SLAM is presented, showing the relationship between its different components and stages, such as the core front-end and back-end parts and their relation to the SLAM paradigm. Furthermore, we explain the major mathematical techniques of filtering and pose graph optimization for both visual and LiDAR SLAM, and introduce a summary of the efficient contribution of deep learning to the SLAM problem. Finally, we address examples of some existing practical applications of SLAM in the real world.
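    To illustrate the back-end (pose graph optimization) stage mentioned above, the following minimal sketch solves a translation-only 2D pose graph with odometry edges and one loop-closure edge by linear least squares. The poses and measurements are made up for the example; real SLAM back-ends additionally estimate orientation and use robust, sparse solvers.

```python
import numpy as np

# Minimal linear pose-graph back-end: 2D positions only (no orientation).
# Each edge j -> k with relative measurement z encodes: x_k - x_j ~ z.
num_poses = 5
edges = [  # (from, to, measured relative displacement [dx, dy])
    (0, 1, [1.0, 0.0]), (1, 2, [1.1, 0.0]), (2, 3, [0.0, 1.0]),
    (3, 4, [-2.0, 0.05]),
    (4, 0, [0.0, -1.05]),   # loop closure back to the start
]

# Build the linear system A x = b, anchoring pose 0 at the origin with a prior.
A = np.zeros((2 * (len(edges) + 1), 2 * num_poses))
b = np.zeros(2 * (len(edges) + 1))
A[0:2, 0:2] = np.eye(2)                       # prior fixing pose 0 at (0, 0)
for i, (j, k, z) in enumerate(edges):
    r = 2 * (i + 1)
    A[r:r + 2, 2 * k:2 * k + 2] = np.eye(2)
    A[r:r + 2, 2 * j:2 * j + 2] = -np.eye(2)
    b[r:r + 2] = z

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x.reshape(num_poses, 2))                # optimized 2D positions
```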

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control to meet key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality analysis methods to evaluate the characteristics and performance of SLAM systems and to monitor the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS, and that an online localization solution with 4-5 cm accuracy can be achieved based on this pre-generated map and online Lidar scan matching with a tightly fused inertial system.
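    As a simplified illustration of the scan-matching step referred to above, the sketch below implements a basic 2D point-to-point ICP alignment between two hypothetical LiDAR scans; production pipelines use far richer matching and tightly couple the result with GNSS/INS, which this toy example does not attempt.

```python
import numpy as np

def icp_2d(source, target, iters=20):
    """Minimal point-to-point ICP: align 'source' scan to 'target' scan.
    Returns rotation R and translation t such that R @ p + t maps source to target."""
    R, t = np.eye(2), np.zeros(2)
    src = source.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force, fine for tiny scans).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]
        # Closed-form rigid alignment of the matched pairs (Kabsch / SVD).
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        R_step = Vt.T @ np.diag([1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
        t_step = mu_m - R_step @ mu_s
        src = (R_step @ src.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Hypothetical scans: 'target' is a wall corner, 'source' is the same shape seen
# after the vehicle rotated 5 degrees and moved about 0.3 m.
target = np.vstack([np.c_[np.linspace(0, 5, 30), np.zeros(30)],
                    np.c_[np.zeros(30), np.linspace(0, 5, 30)]])
ang = np.deg2rad(5.0)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
source = (R_true.T @ (target - [0.3, 0.1]).T).T
R_est, t_est = icp_2d(source, target)
print(R_est, t_est)   # should approximate the true rotation and translation
```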

    Information Aided Navigation: A Review

    The performance of inertial navigation systems is largely dependent on a stable flow of external measurements and information to guarantee continuous filter updates and bound the drift of the inertial solution. Platforms in different operational environments may at some point be prevented from receiving external measurements, thus exposing their navigation solution to drift. Over the years, a wide variety of works have been proposed to overcome this shortcoming by exploiting knowledge of the system's current conditions and turning it into an applicable source of information to update the navigation filter. This paper aims to provide an extensive survey of information-aided navigation, broadly classified into direct, indirect, and model aiding. Each approach is described by the notable works that implemented its concept, use cases, relevant state updates, and their corresponding measurement models. By matching the appropriate constraint to a given scenario, one is able to improve the navigation solution accuracy, compensate for the lost information, and uncover certain internal states that would otherwise remain unobservable.
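    As one concrete example of turning knowledge of the platform's condition into a filter update, the sketch below applies a generic Kalman measurement update with a zero-velocity pseudo-measurement (ZUPT). The state layout, covariances and noise values are assumptions chosen purely for illustration, not taken from the survey.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Generic Kalman filter measurement update (used here as an aiding update)."""
    y = z - H @ x                                   # innovation
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Hypothetical 6-state filter: [position (3), velocity (3)] with a drifted velocity.
x = np.array([0.0, 0.0, 0.0, 0.4, -0.2, 0.1])
P = np.diag([4.0, 4.0, 9.0, 0.5, 0.5, 0.5])

# Zero-velocity update (ZUPT): when the platform is known to be stationary,
# the "information" is a pseudo-measurement that velocity equals zero.
H = np.hstack([np.zeros((3, 3)), np.eye(3)])        # observe velocity states only
R = np.eye(3) * 0.01                                # pseudo-measurement noise
z = np.zeros(3)
x, P = kf_update(x, P, z, H, R)
print(x)   # velocity components are pulled back towards zero
```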

    A Survey on Global LiDAR Localization

    Knowledge of its own pose is key for all mobile robot applications; thus, pose estimation is part of the core functionality of mobile robots. In the last two decades, LiDAR scanners have become a standard sensor for robot localization and mapping. This article surveys recent progress and advances in LiDAR-based global localization. We start with the problem formulation and explore the application scope. We then present a methodology review covering various global localization topics, such as maps, descriptor extraction, and consistency checks. The contents are organized under three themes. The first is the combination of global place retrieval and local pose estimation. The second theme is upgrading single-shot measurements to sequential ones for sequential global localization. The third theme is extending single-robot global localization to cross-robot localization in multi-robot systems. We end this survey with a discussion of open challenges and promising directions for global LiDAR localization.
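    To give a flavour of descriptor-based global place retrieval, the sketch below builds a deliberately simple rotation-invariant range histogram per scan and retrieves the most similar map scan. Published descriptors (e.g. ring/sector encodings) are considerably richer; the scan data here are synthetic and the whole example is only an assumption-laden illustration.

```python
import numpy as np

def range_histogram_descriptor(scan_xy, num_bins=32, max_range=30.0):
    """Very simple global descriptor for a 2D LiDAR scan: a normalized
    histogram of point ranges (rotation invariant, but coarse)."""
    r = np.linalg.norm(scan_xy, axis=1)
    hist, _ = np.histogram(r, bins=num_bins, range=(0.0, max_range))
    return hist / max(hist.sum(), 1)

def retrieve(query_desc, database_descs):
    """Return the index of the most similar map scan (smallest L1 distance)."""
    dists = [np.abs(query_desc - d).sum() for d in database_descs]
    return int(np.argmin(dists))

# Hypothetical usage: 'map_scans' are (N, 2) point arrays from the map,
# 'query_scan' is the current scan; retrieval yields a loop-closure candidate.
rng = np.random.default_rng(0)
map_scans = [rng.uniform(-20, 20, size=(500, 2)) for _ in range(10)]
db = [range_histogram_descriptor(s) for s in map_scans]
query_scan = map_scans[3] + rng.normal(0.0, 0.05, size=(500, 2))
print(retrieve(range_histogram_descriptor(query_scan), db))   # likely prints 3
```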

    Vision-based localization methods under GPS-denied conditions

    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature-extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve absolute visual localization. This paper compares the performance and applications of mainstream methods for visual localization and provides suggestions for future studies.
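    As a minimal illustration of the optical-flow front end used in feature-extraction-based VO, the snippet below detects corners and tracks them with pyramidal Lucas-Kanade optical flow between two synthetic frames; the image content and motion are fabricated for the example, and a full VO pipeline would go on to estimate relative camera motion from the correspondences.

```python
import numpy as np
import cv2

# Two synthetic grayscale frames: the second is the first shifted a few pixels,
# standing in for consecutive images from an onboard camera.
rng = np.random.default_rng(1)
frame1 = rng.uniform(0, 255, (480, 640)).astype(np.uint8)
frame1 = cv2.GaussianBlur(frame1, (7, 7), 0)       # add structure so corners exist
M = np.float32([[1, 0, 5], [0, 1, 3]])             # assumed 5 px right, 3 px down motion
frame2 = cv2.warpAffine(frame1, M, (640, 480))

# Feature-extraction-based VO front end: detect corners, then track them
# with pyramidal Lucas-Kanade optical flow.
pts1 = cv2.goodFeaturesToTrack(frame1, maxCorners=200, qualityLevel=0.01, minDistance=10)
pts2, status, _ = cv2.calcOpticalFlowPyrLK(frame1, frame2, pts1, None)

good1 = pts1[status.ravel() == 1].reshape(-1, 2)
good2 = pts2[status.ravel() == 1].reshape(-1, 2)
print("median image flow (px):", np.median(good2 - good1, axis=0))
# In a full VO pipeline these correspondences would feed an essential-matrix
# estimate (e.g. cv2.findEssentialMat / cv2.recoverPose) to obtain camera motion.
```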