17 research outputs found

    LIDAR-Aided Inertial Navigation with Extended Kalman Filtering for Pinpoint Landing

    In support of NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project, an extended Kalman filter routine has been developed for estimating the position, velocity, and attitude of a spacecraft during the landing phase of a planetary mission. The proposed filter combines measurements of acceleration and angular velocity from an inertial measurement unit (IMU) with range and Doppler velocity observations from an onboard light detection and ranging (LIDAR) system. These high-precision LIDAR measurements of distance to the ground and approach velocity will enable both robotic and manned vehicles to land safely and precisely at scientifically interesting sites. The filter has been extensively tested in a lunar landing simulation and shown to improve navigation over both flat surfaces and rough terrain. Experimental results from a helicopter flight test performed at NASA Dryden in August 2008 demonstrate that LIDAR can be employed to significantly improve navigation relative to IMU integration alone.
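
    The following is a minimal 1-D sketch of the predict/update structure such a filter uses: IMU acceleration drives the prediction step, and a LIDAR range/Doppler pair drives the update step. It is not the ALHAT filter itself, which estimates full 3-D position, velocity, and attitude; with the linear measurement model below, the EKF reduces to a standard Kalman filter (the EKF form replaces F and H with Jacobians). All noise values are illustrative assumptions.

```python
# Sketch: IMU-propagated, LIDAR-corrected Kalman filter for descent navigation.
import numpy as np

dt = 0.01                          # IMU sample period [s] (assumed)
F = np.array([[1.0, dt],           # transition for x = [altitude, vertical velocity]
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],       # maps measured acceleration into the state
              [dt]])
H = np.eye(2)                      # LIDAR observes range and Doppler velocity directly
Q = np.diag([1e-4, 1e-3])          # process noise from accelerometer error (assumed)
R = np.diag([0.05**2, 0.02**2])    # LIDAR range / Doppler noise (assumed)

x = np.array([[1000.0], [-30.0]])  # initial altitude [m] and descent rate [m/s]
P = np.diag([10.0, 1.0])           # initial covariance

def predict(x, P, accel):
    """Propagate the state with the IMU specific-force measurement."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the state with a LIDAR [range, Doppler velocity] measurement."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = predict(x, P, accel=-1.62)       # lunar gravity, no thrust (illustrative)
x, P = update(x, P, z=np.array([[999.7], [-30.1]]))
```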

    Accurate IMU Preintegration Using Switched Linear Systems For Autonomous Systems

    Employing an inertial measurement unit (IMU) as an additional sensor can dramatically improve both the reliability and the accuracy of visual/Lidar odometry (VO/LO). Different IMU integration models have been introduced, each resting on different assumptions about the linear acceleration reported by the IMU. In this paper, a novel IMU integration model based on switched linear systems is proposed. The proposed approach assumes that both the linear acceleration and the angular velocity in the body frame are constant between two consecutive IMU measurements. This is more realistic than existing approaches, which assume that the linear acceleration is constant in the world frame while the angular velocity is constant in the body frame between two successive measurements. Experimental results show that the proposed approach outperforms the state-of-the-art IMU integration model, making it valuable for localizing high-speed autonomous vehicles in GPS-denied environments. (19 pages, 2 figures; accepted for publication at the IEEE Intelligent Transportation Systems Conference, ITSC 2019, with supplementary derivations.)
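
    The sketch below contrasts the two assumptions for a single IMU step: under the proposed model, the body-frame acceleration a_b and angular velocity w are held constant over [0, dt], so the attitude evolves as R(t) = R_k expm(skew(w) t) and the specific force must be integrated through the rotation. The paper derives closed-form integrals; here they are approximated with fine sub-steps for clarity, and all numbers are illustrative.

```python
# Sketch: one IMU integration step under the constant body-frame assumption
# (proposed) versus the conventional constant world-frame assumption.
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def integrate_switched(R, v, p, a_b, w, dt, g, n_sub=100):
    """Proposed model: a_b and w constant in the body frame over dt."""
    h = dt / n_sub
    for i in range(n_sub):
        R_t = R @ expm(skew(w) * (i + 0.5) * h)  # attitude at mid sub-step
        a_world = R_t @ a_b + g                  # rotate specific force, add gravity
        p = p + v * h + 0.5 * a_world * h**2
        v = v + a_world * h
    R = R @ expm(skew(w) * dt)
    return R, v, p

def integrate_conventional(R, v, p, a_b, w, dt, g):
    """Standard model: acceleration constant in the world frame over dt."""
    a_world = R @ a_b + g
    p = p + v * dt + 0.5 * a_world * dt**2
    v = v + a_world * dt
    R = R @ expm(skew(w) * dt)
    return R, v, p

R0, v0, p0 = np.eye(3), np.zeros(3), np.zeros(3)
a_b = np.array([1.0, 0.0, 9.81])   # body-frame specific force [m/s^2] (illustrative)
w = np.array([0.0, 0.0, 2.0])      # a fast yaw rate makes the two models diverge
g = np.array([0.0, 0.0, -9.81])
print(integrate_switched(R0.copy(), v0, p0, a_b, w, 0.01, g)[2])
print(integrate_conventional(R0, v0, p0, a_b, w, 0.01, g)[2])
```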

    A Novel Fusion Scheme for Vision Aided Inertial Navigation of Aerial Vehicles

    Vision-aided inertial navigation is an important and practical mode of integrated navigation for aerial vehicles. In this paper, a novel fusion scheme is proposed that combines information from an inertial navigation system (INS) and a vision matching subsystem. Unlike the conventional Kalman filter (CKF), which treats these two information sources equally even though vision-aided navigation is subject to uncertainty and inaccuracy, the proposed scheme concentrates on the reliability of vision matching: not only the matching positions are used, but also their reliability measures. Moreover, a fusion algorithm is designed and proved to be optimal in the sense of minimum mean-square-error estimation. Simulations are carried out to validate the effectiveness of this fusion scheme. Results show that it outperforms both the CKF and the adaptive Kalman filter (AKF) in vision/INS estimation under the given scenarios and specifications.
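
    A minimal sketch of the general idea follows: the vision match is down-weighted when its reliability is low instead of being trusted equally with the INS. The inverse-covariance combination below is the classic minimum-variance fusion of two unbiased estimates; the paper's specific reliability measure and filter structure are more elaborate, and the inflation rule here is an assumption for illustration.

```python
# Sketch: reliability-weighted fusion of INS and vision position estimates.
import numpy as np

def fuse(x_ins, P_ins, x_vis, P_vis_nominal, reliability):
    """Fuse two estimates; reliability in (0, 1] scales trust in vision.

    reliability = 1 means a fully trusted match; lower values inflate the
    vision covariance so the INS dominates the fused estimate.
    """
    P_vis = P_vis_nominal / reliability      # inflate covariance for weak matches
    W_ins = np.linalg.inv(P_ins)             # information (inverse-covariance) matrices
    W_vis = np.linalg.inv(P_vis)
    P_fused = np.linalg.inv(W_ins + W_vis)
    x_fused = P_fused @ (W_ins @ x_ins + W_vis @ x_vis)
    return x_fused, P_fused

x_ins = np.array([100.0, 200.0])             # drifted INS position [m] (illustrative)
P_ins = np.diag([25.0, 25.0])
x_vis = np.array([95.0, 204.0])              # vision matching position
P_vis = np.diag([4.0, 4.0])                  # nominal matching covariance
print(fuse(x_ins, P_ins, x_vis, P_vis, reliability=0.3)[0])
```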

    Kernelized Locality-Sensitive Hashing for Fast Image Landmark Association

    As the concept of war has evolved, navigation in urban environments where GPS may be degraded is becoming increasingly important. Two existing solutions are vision-aided navigation and vision-based Simultaneous Localization and Mapping (SLAM). Vision-based navigation techniques, however, can require excessive amounts of memory and computation, resulting in a decrease in speed. This research focuses on speeding up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the methods current algorithms use to associate the current robot pose with a previously seen one, and introduces another method to the image mapping arena for comparison. The prevailing method, kd-trees, is efficient in lower dimensions but does not narrow the search space enough in higher-dimensional datasets. In this research, Kernelized Locality-Sensitive Hashing (KLSH) is implemented to conduct these pose associations. Results show that KLSH requires fewer image comparisons for location identification than the other methods. This work can then be extended into a vision-SLAM implementation to produce a map.
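
    The sketch below illustrates the bucketing idea with the plain random-hyperplane LSH variant for cosine similarity; KLSH additionally computes the hash functions in a kernel-induced feature space, but the association logic is the same: only descriptors that collide in a hash bucket are compared, which keeps the number of image comparisons small in high-dimensional spaces where kd-trees degrade. Descriptor size, hash length, and data are illustrative assumptions.

```python
# Sketch: locality-sensitive hashing for fast image landmark association
# (simplified, non-kernelized random-hyperplane variant).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
DIM, N_BITS = 128, 16                             # descriptor size, hash length (assumed)
hyperplanes = rng.standard_normal((N_BITS, DIM))  # one random hyperplane per hash bit

def lsh_key(descriptor):
    """Hash a descriptor to an N_BITS-bit bucket key via hyperplane signs."""
    bits = hyperplanes @ descriptor > 0
    return bits.tobytes()

# Index the descriptors of previously seen poses.
database = rng.standard_normal((10000, DIM))
buckets = defaultdict(list)
for idx, d in enumerate(database):
    buckets[lsh_key(d)].append(idx)

# Query: only candidates in the colliding bucket are compared exhaustively.
# (Practical LSH keeps several independent hash tables to guarantee recall.)
query = database[42] + 0.01 * rng.standard_normal(DIM)    # noisy revisit
candidates = buckets[lsh_key(query)]
if candidates:
    best = max(candidates,
               key=lambda i: database[i] @ query /
                             (np.linalg.norm(database[i]) * np.linalg.norm(query)))
    print(len(candidates), "comparisons instead of", len(database), "-> match", best)
```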

    GPS-denied multi-agent localization and terrain classification for autonomous parafoil systems

    Guided airdrop parafoil systems depend on GPS for localization and landing. In some scenarios, GPS may be unreliable (jammed, spoofed, or disabled) or unavailable (indoors, or in extraterrestrial environments). In the context of guided parafoils, landing locations for each system must be pre-programmed manually with global coordinates, which may be inaccurate or outdated and offer no in-flight adaptability. Parafoil systems in particular have constrained motion, communication, and on-board computation and storage capabilities, and must operate in harsh conditions. These constraints necessitate a comprehensive approach to the fundamental limitations of these systems when GPS cannot be used reliably. A novel, minimalist approach to visual navigation and multi-agent communication using semantic machine-learning classification and geometric constraints is introduced. This approach enables localization and landing-site identification for multiple communicating parafoil systems deployed in GPS-denied environments.

    Real-Time GPS-Alternative Navigation Using Commodity Hardware

    Modern navigation systems can use the Global Positioning System (GPS) to determine position accurately, in some cases with precision bordering on millimeters. Unfortunately, GPS is susceptible to jamming and interception and is unavailable indoors or underground. Several navigation techniques can be used during periods of GPS unavailability, but very few achieve GPS-level precision. One method of achieving high-precision navigation without GPS is to fuse data obtained from multiple sensors. This thesis explores the fusion of imaging and inertial sensors and implements it in a real-time system that mimics human navigation. In addition, programmable graphics processing unit technology is leveraged to perform stream-based image processing using a computer's video card, allowing complex mathematical computations to run in a fraction of the time the same operations would take on a CPU-based platform. The result is an adaptable, portable, inexpensive, and self-contained software and hardware platform, which paves the way for advances in autonomous navigation, mobile cartography, and artificial intelligence.
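
    The sketch below illustrates the offloading pattern, not the thesis's implementation: the original work used programmable graphics-pipeline shaders on the video card, whereas this sketch uses CuPy (a NumPy-compatible GPU array library) as a modern stand-in. The pattern is the same: upload a frame once, run the per-pixel arithmetic on the GPU, and download only the result. The synthetic frame and gradient computation are illustrative assumptions.

```python
# Sketch: stream-style image processing offloaded to the GPU with CuPy.
import numpy as np
import cupy as cp

image = np.random.rand(1080, 1920).astype(np.float32)  # synthetic camera frame

d_img = cp.asarray(image)                              # host -> device upload
# Central-difference gradients, computed entirely on the GPU.
gx = (d_img[:, 2:] - d_img[:, :-2]) * 0.5
gy = (d_img[2:, :] - d_img[:-2, :]) * 0.5
mag = cp.sqrt(gx[1:-1, :] ** 2 + gy[:, 1:-1] ** 2)     # gradient magnitude
result = cp.asnumpy(mag)                               # device -> host download
print(result.shape)
```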

    Calibration of Linear Imager Camera for Relative Pose Estimation

    Camera calibration is of paramount importance for employing any vision-based sensor for relative navigation. Understanding and quantifying the physical process that converts an external electromagnetic stimulus into an image inside a camera is key to relating the position of a body in an image to its pose in the real world. Both camera calibration and relative navigation are extensively explored topics, and various calibration algorithms have been proposed that model the image formation process in different ways. This research utilizes the homography approach proposed by Zhang [1] along with two distortion models, Brown's nonlinear distortion model and the geometric distortion model, to model the intrinsic distortion and the discrete image formation process. The idea is to use the intrinsic parameters estimated via the homography optimization to estimate the relative pose of an object in the camera's field of view, for which a nonlinear optimization-based approach is presented. The camera used here is the Phasespace Motion Capture camera [2], which utilizes linear imagers to form a fictitious image plane, so the applicability of the two distortion models is tested on multiple datasets. Testing with three datasets shows that neither distortion model adequately describes the distortion and image formation process in the Phasespace camera. A further test is conducted to validate the efficacy of the optimization-based approach for relative pose estimation.
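
    For reference, the two-step pipeline (intrinsics via Zhang's homography method, then relative pose) can be sketched with OpenCV, whose calibrateCamera implements Zhang's method with Brown-style radial/tangential distortion coefficients. This is the standard pinhole version only; the Phasespace linear-imager camera in the thesis required custom handling. The checkerboard size and image filenames below are hypothetical.

```python
# Sketch: Zhang-style calibration followed by relative pose estimation.
import numpy as np
import cv2

# Checkerboard model points (Z = 0 plane), one view per calibration image.
pattern = (9, 6)                                       # inner corner grid (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["view0.png", "view1.png", "view2.png"]:  # hypothetical images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Step 1: intrinsics K and Brown distortion coefficients (Zhang's method).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Step 2: relative pose of a known object from its 2-D projections.
rvec, tvec = cv2.solvePnP(objp, img_points[0], K, dist)[1:3]
R, _ = cv2.Rodrigues(rvec)                             # rotation matrix of the pose
print("reprojection RMS:", rms)
```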