8 research outputs found

    Aerial Map-Based Navigation Using Semantic Segmentation and Pattern Matching

    This paper proposes a novel approach to a map-based navigation system for unmanned aircraft. The proposed system performs label-to-label matching between aerial images and a map database, rather than image-to-image matching. Semantic segmentation is used to label the ground objects, and the spatial configuration of the labelled objects is used to find the corresponding location in the map database. Using deep learning as a tool for extracting high-level features reduces the image-based localization problem to a pattern matching problem. The paper proposes a pattern matching algorithm that estimates the absolute horizontal position without requiring altitude information or a camera model. A feasibility analysis with simulated images shows that the proposed map-based navigation can be realized with the proposed pattern matching algorithm, which provides position estimates from the labelled objects. Comment: 6 pages, 4 figures
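    The sketch below illustrates the general idea of label-to-label matching described in the abstract, not the paper's actual algorithm: detected ground objects are reduced to (class label, centroid) pairs, and a candidate 2-D translation is scored by how many labelled objects align with same-label objects in the map database. All names, tolerances, and the grid search are illustrative assumptions.

```python
# Minimal sketch of label-to-label pattern matching for map-based localization.
# Assumptions (not from the paper): objects are reduced to (class_label, x, y)
# centroids in a common metric frame, and only a 2-D translation is searched.
import numpy as np

def match_score(obs, db, shift, tol=5.0):
    """Count observed labelled objects that align with a same-label map object
    after translating the observation by `shift` (metres)."""
    score = 0
    for label, x, y in obs:
        p = np.array([x, y]) + shift
        same = [(mx, my) for mlabel, mx, my in db if mlabel == label]
        if same and min(np.hypot(p[0] - mx, p[1] - my) for mx, my in same) < tol:
            score += 1
    return score

def estimate_position(obs, db, search_grid):
    """Exhaustively search candidate shifts and return the best-scoring one."""
    best_shift, best_score = None, -1
    for shift in search_grid:
        s = match_score(obs, db, np.asarray(shift, dtype=float))
        if s > best_score:
            best_shift, best_score = shift, s
    return best_shift, best_score

# Toy usage: observed objects (label, x, y) versus a small map database.
observation = [("building", 0.0, 0.0), ("pond", 12.0, 3.0), ("road", -8.0, 6.0)]
map_db = [("building", 100.0, 200.0), ("pond", 112.0, 203.0), ("road", 92.0, 206.0)]
grid = [(dx, dy) for dx in range(80, 121, 5) for dy in range(180, 221, 5)]
print(estimate_position(observation, map_db, grid))  # expect a shift near (100, 200)
```

    Because only label identities and relative geometry are used, the search needs neither altitude nor a camera model, which is the property the abstract highlights.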

    GNSS/INS/VO fusion using gated recurrent unit in GNSS denied environments

    Urban air mobility is a growing market that will bring new ways to travel and to deliver items across urban and suburban areas at relatively low altitudes. To guarantee safe and robust navigation, Unmanned Aerial Vehicles should be able to overcome all navigational constraints. This paper analyzes a novel sensor fusion framework aimed at achieving stable flight in a degraded GNSS environment. The framework combines data from a GNSS receiver, an IMU and an optical camera under a loosely coupled scheme. A Federated Filter approach is implemented with the integration of two GRU blocks. The first GRU increases the accuracy of the INS over time, outputting a more reliable position that is fused with the position information coming from the GNSS receiver and the developed Visual Odometry algorithm. A master GRU block is then used to select the best position information. The data is collected using a hardware-in-the-loop setup based on AirSim, Pixhawk and Spirent GSS7000 hardware. For validation, the framework is tested on a virtual UAV performing a delivery mission over the Cranfield University campus. Results show that the developed fusion framework can be used to bridge short GNSS outages.
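    A minimal sketch of the "first GRU" idea from the abstract, not the authors' implementation: a GRU that maps a window of IMU measurements and raw INS increments to a corrected position increment, reducing drift before the loosely coupled fusion with the GNSS and Visual Odometry positions. The layer sizes, input features, and tensor shapes are illustrative assumptions.

```python
# Sketch of an INS-correction GRU block (assumed architecture, PyTorch).
import torch
import torch.nn as nn

class INSCorrectionGRU(nn.Module):
    def __init__(self, in_dim=9, hidden=64, out_dim=3):
        super().__init__()
        # in_dim: e.g. specific force (3) + angular rate (3) + raw INS delta-position (3)
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)  # corrected 3-D position increment

    def forward(self, imu_window):
        # imu_window: (batch, time_steps, in_dim)
        out, _ = self.gru(imu_window)
        return self.head(out[:, -1])  # regress from the last hidden state

model = INSCorrectionGRU()
batch = torch.randn(8, 50, 9)        # 8 samples, 50-step IMU window (synthetic)
corrected_delta_pos = model(batch)   # shape (8, 3)
print(corrected_delta_pos.shape)
```

    In the federated scheme described in the abstract, the output of such a block would feed one local filter, while a master GRU arbitrates between the GNSS, VO, and INS-derived position estimates.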

    Vision-based localization methods under GPS-denied conditions

    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature-extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve Absolute Visual Localization. The paper compares the performance and applications of mainstream visual localization methods and provides suggestions for future studies. Comment: 32 pages, 15 figures
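    As a concrete illustration of the RVL category surveyed here (not taken from the review itself), the sketch below uses Lucas-Kanade optical flow to track features between consecutive frames and recovers the relative camera rotation and scale-free translation from the essential matrix. The camera intrinsics K are an assumed placeholder.

```python
# Minimal optical-flow-based visual odometry step (OpenCV, illustrative only).
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed pinhole intrinsics

def relative_pose(prev_gray, curr_gray):
    # Detect corners in the previous frame and track them with pyramidal LK flow.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]
    # Essential matrix + cheirality check give R and a unit-norm t (scale is unobservable).
    E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
    return R, t

# Usage: feed consecutive grayscale frames from a video or image sequence.
# R, t = relative_pose(frame_k_gray, frame_k1_gray)
```

    Chaining such relative poses yields a drifting trajectory, which is why the review pairs RVL with AVL techniques such as VSLAM and offline map registration to anchor the estimate to an absolute frame.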