
    Inertial Navigation Meets Deep Learning: A Survey of Current Trends and Future Directions

    Inertial sensing is used in many applications and platforms, ranging from everyday devices such as smartphones to highly complex ones such as autonomous vehicles. In recent years, the use of machine learning and deep learning techniques in inertial sensing and sensor fusion has grown significantly, driven by efficient computing hardware and the accessibility of publicly available sensor data. These data-driven approaches mainly aim to enhance model-based inertial sensing algorithms. To encourage further research in integrating deep learning with inertial navigation and fusion and to leverage their capabilities, this paper provides an in-depth review of deep learning methods for inertial sensing and sensor fusion. We discuss learning methods for calibration and denoising as well as approaches for improving pure inertial navigation and sensor fusion, the latter by learning some of the fusion filter parameters. The reviewed approaches are classified by the environment in which the vehicles operate: land, air, and sea. In addition, we analyze trends and future directions in deep learning-based navigation and provide statistical data on commonly used approaches.
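
    A minimal sketch of one family of approaches mentioned in this abstract, learning-based denoising of inertial data. It is not any specific method from the survey; the network size, window length, and synthetic training data are illustrative assumptions.

```python
# Hypothetical example: a small 1D CNN that learns to denoise windows of
# gyroscope samples. Sizes and training data are illustrative assumptions.
import torch
import torch.nn as nn

class GyroDenoiser(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, channels, kernel_size=7, padding=3),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3 gyro axes, window length); output has the same shape
        return self.net(x)

model = GyroDenoiser()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in for (noisy, clean) gyroscope windows
clean = torch.sin(torch.linspace(0, 6.28, 200)).repeat(16, 3, 1)
noisy = clean + 0.05 * torch.randn_like(clean)

for _ in range(100):
    optim.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optim.step()
```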

    Vision-based localization methods under GPS-denied conditions

    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve Absolute Visual Localization. This paper compares the performance and applications of mainstream methods for visual localization and provides suggestions for future studies. Comment: 32 pages, 15 figures.
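
    A minimal sketch of the optical-flow front end of a feature-based VO pipeline of the kind this survey discusses, assuming two grayscale frames from a calibrated camera. Parameter values are illustrative, not taken from the paper.

```python
# Sparse feature tracking with pyramidal Lucas-Kanade optical flow (OpenCV).
import numpy as np
import cv2

def track_features(prev: np.ndarray, curr: np.ndarray):
    """Detect corners in the previous frame and track them into the current
    frame; returns the matched point sets (Nx2 arrays)."""
    pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None,
                                                   winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return pts_prev[ok].reshape(-1, 2), pts_curr[ok].reshape(-1, 2)

# The matched points would then feed a relative-pose back end
# (e.g., essential-matrix estimation with RANSAC).
```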

    Self-Describing Fiducials for GPS-Denied Navigation of Unmanned Aerial Vehicles

    Accurate estimation of an Unmanned Aerial Vehicle’s (UAV’s) location is critical for the operation of the UAV when it is controlled completely by its onboard processor. This can be particularly challenging in environments in which GPS is not available (GPS-denied). Many of the options previously explored for estimating a UAV’s location without the use of GPS require more sophisticated processors than can feasibly be mounted on a UAV because of weight, size, and power restrictions. Many options are also aimed at indoor operation and lack the range capabilities to scale to outdoor operations. This research explores an alternative method of GPS-denied navigation that utilizes line-of-sight measurements to self-describing fiducials to aid in position determination. Each self-describing fiducial is an easily identifiable object fixed at a specific location, and each fiducial relays data containing its location to the observing UAV. The UAV can measure its position relative to the fiducial using camera images. This measurement can be combined with measurements from an Inertial Measurement Unit (IMU) to obtain a more accurate estimate of the UAV’s location. In this research, a simulation is used to validate and assess the performance of the algorithms used to estimate the UAV’s position from these measurements. This research analyzes the effectiveness of the estimation algorithm when used with various IMUs and fiducial spacings. The effect of how quickly camera images of fiducials can be captured and processed is also analyzed. Preparations for demonstrating this system with hardware are then presented and discussed, including options for fiducial type and a way to measure the true position of the UAV. The results from the simulated scenarios and the hardware demonstration preparation are analyzed, and future work is discussed.
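
    A minimal sketch, not the paper's implementation, of how a camera-derived measurement of the UAV's position relative to a fiducial of known location could update an IMU-propagated state estimate. The state layout, measurement model, and function name are assumptions for illustration.

```python
# Hypothetical Kalman measurement update using a self-describing fiducial.
import numpy as np

def fiducial_update(x, P, rel_meas, fiducial_pos, R):
    """x: state [px, py, pz, vx, vy, vz] from IMU mechanization
    P: 6x6 state covariance
    rel_meas: camera-derived UAV position relative to the fiducial
    fiducial_pos: the fiducial's self-described absolute position
    R: 3x3 measurement noise covariance"""
    z = fiducial_pos + rel_meas                    # implied absolute UAV position
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # measurement maps to position states
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new
```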

    Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with a 10 cm accuracy to serve as ground truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. 34 teams effectively competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection error. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable, light-weight sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
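
    A minimal sketch of the accuracy score described above: the 75th percentile of an error metric combining horizontal error and floor detection. The 15 m per-floor penalty used here is an assumption for illustration; the competition's exact penalty is defined in its evaluation rules.

```python
# Hypothetical illustration of a third-quartile accuracy score.
import numpy as np

def accuracy_score(horiz_err_m, est_floor, true_floor, floor_penalty_m=15.0):
    horiz_err_m = np.asarray(horiz_err_m, dtype=float)
    floor_err = np.abs(np.asarray(est_floor) - np.asarray(true_floor))
    combined = horiz_err_m + floor_penalty_m * floor_err   # per-point combined error
    return np.percentile(combined, 75)                     # third quartile over all ground-truth points

# Example: three evaluation points, one with a wrong floor estimate
print(accuracy_score([1.2, 3.4, 2.0], [0, 1, 2], [0, 1, 1]))
```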

    On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    The ubiquitous nature of GPS has fostered the widespread integration of navigation into a variety of applications, both civilian and military. One alternative to ensure continued flight operations in GPS-denied environments is vision-aided navigation, an approach that combines visual cues from a camera with an inertial measurement unit (IMU) to estimate the navigation states of a moving body. The majority of vision-based navigation research has been conducted in the electro-optical (EO) spectrum, which experiences limited operation in certain environments. The aim of this work is to explore how such approaches extend to infrared imaging sensors. In particular, it examines the ability of medium-wave infrared (MWIR) imagery, which is capable of operating at night and with increased vision through smoke, to expand the breadth of operations that can be supported by vision-aided navigation. The experiments presented here are based on the Minor Area Motion Imagery (MAMI) dataset, which recorded GPS data, inertial measurements, EO imagery, and MWIR imagery captured during flights over Wright-Patterson Air Force Base. The approach applied here combines inertial measurements with EO position estimates from the structure from motion (SfM) algorithm. Although precision timing was not available for the MWIR imagery, the EO-based results of the scene demonstrate that trajectory estimates from SfM offer a significant increase in navigation accuracy when combined with inertial data compared with using an IMU alone. Results also demonstrated that MWIR-based position solutions provide a similar trajectory reconstruction to EO-based solutions for the same scenes. While the MWIR imagery and the IMU could not be combined directly, comparison with the combined EO solution supports the conclusion that MWIR imagery (with its unique phenomenologies) is capable of expanding the operating envelope of vision-aided navigation.
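
    A minimal sketch of the two-view core of an SfM pipeline of the kind used above for EO/MWIR trajectory estimation: relative camera pose from matched image points. The intrinsics matrix K, the point arrays, and the RANSAC parameters are illustrative placeholders, not values from the paper.

```python
# Hypothetical two-view relative pose estimation (OpenCV).
import numpy as np
import cv2

def relative_pose(pts1, pts2, K):
    """Estimate rotation R and unit-scale translation t between two frames
    from matched image points (Nx2 arrays) and camera intrinsics K."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # translation is recovered only up to scale; IMU/GPS data can fix it
```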

    Collaborative navigation as a solution for PNT applications in GNSS challenged environments: report on field trials of a joint FIG / IAG working group

    PNT stands for Positioning, Navigation, and Timing. Space-based PNT refers to the capabilities enabled by GNSS, enhanced by Ground- and Space-Based Augmentation Systems (GBAS and SBAS), which provide position, velocity, and timing information to an unlimited number of users around the world, allowing every user to operate in the same reference system and timing standard. Such information has become increasingly critical to the security, safety, prosperity, and overall quality of life of many citizens. As a result, space-based PNT is now widely recognized as an essential element of the global information infrastructure. This paper discusses the importance of the availability and continuity of PNT information, whose application, scope, and significance have exploded in the past 10–15 years. A paradigm shift in the navigation solution has been observed in recent years, manifested by an evolution from traditional single-sensor solutions, to multi-sensor solutions, and ultimately to collaborative navigation and layered sensing using non-traditional sensors and techniques, the so-called signals of opportunity. A joint working group under the auspices of the International Federation of Surveyors (FIG) and the International Association of Geodesy (IAG), entitled ‘Ubiquitous Positioning Systems’, investigated the use of Collaborative Positioning (CP) through several field trials over the past four years. In this paper, the concept of CP is discussed in detail and selected results of these experiments are presented. It is demonstrated that CP is a viable solution when a ‘network’ or ‘neighbourhood’ of users is to be positioned and navigated together, as it increases the accuracy, integrity, availability, and continuity of the PNT information for all users.
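
    A minimal sketch of one ingredient of collaborative positioning: estimating a user's 2D position from range measurements to neighbouring users whose positions are approximately known. The neighbour coordinates and ranges are made-up numbers; a real CP solution would fuse such ranges with GNSS, inertial, and other sensors.

```python
# Hypothetical range-based collaborative position fix via least squares.
import numpy as np
from scipy.optimize import least_squares

neighbours = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])  # collaborators' positions (m)
ranges = np.array([30.1, 41.9, 36.0])                          # measured inter-user ranges (m)

def residuals(p):
    # difference between predicted and measured ranges for candidate position p
    return np.linalg.norm(neighbours - p, axis=1) - ranges

sol = least_squares(residuals, x0=np.array([10.0, 10.0]))
print("estimated position:", sol.x)
```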

    Performance Assessment of an Ultra Low-Cost Inertial Measurement Unit for Ground Vehicle Navigation

    Nowadays, navigation systems are becoming common in the automotive industry due to advanced driver assistance systems and the development of autonomous vehicles. The MPU-6000 is a popular ultra low-cost Microelectromechanical Systems (MEMS) inertial measurement unit (IMU) used in several applications. Although this mass-market sensor is used extensively in a variety of fields, it has not caught the attention of the automotive industry. Moreover, a detailed performance analysis of this inertial sensor for ground navigation systems is not available in the previous literature. In this work, a thorough examination of one MPU-6000 IMU as part of a low-cost navigation system for ground vehicles is provided. The steps to characterize the performance of the MPU-6000 are divided into two phases: static and kinematic analyses. In addition, a MEMS IMU of superior quality is included in all experiments for comparison. After the static analysis, a kinematic test is conducted by driving a real urban trajectory while recording data from the MPU-6000 IMU, the higher-grade MEMS IMU, and two GNSS receivers. The kinematic trajectory is divided into two parts: a normal trajectory with good satellite visibility, and a second part in which the Global Navigation Satellite System (GNSS) signal is forced to be lost. Evaluating the attitude and position inaccuracies from these two scenarios, it is concluded in this preliminary work that this mass-market IMU can be considered a convenient inertial sensor for low-cost integrated navigation systems in applications that can tolerate a 3D position error of about 2 m and a heading angle error of about 3°.
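
    A minimal sketch of a static-phase noise characterization using the Allan deviation, a common way to carry out the kind of static analysis described above; the paper's exact procedure may differ. The sample rate, cluster sizes, and synthetic input are assumptions for illustration.

```python
# Hypothetical Allan deviation computation for a static IMU recording.
import numpy as np

def allan_deviation(rate, fs, m_list=(1, 2, 4, 8, 16, 32, 64, 128)):
    """rate: long static gyro/accelerometer signal; fs: sample rate in Hz.
    Returns averaging times tau and the corresponding Allan deviations."""
    taus, adevs = [], []
    for m in m_list:
        n_bins = len(rate) // m
        if n_bins < 3:
            break
        means = rate[: n_bins * m].reshape(n_bins, m).mean(axis=1)  # cluster averages
        avar = 0.5 * np.mean(np.diff(means) ** 2)                   # Allan variance
        taus.append(m / fs)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# Example with synthetic white noise standing in for a static recording at 100 Hz
taus, adevs = allan_deviation(0.01 * np.random.randn(100_000), fs=100.0)
```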