
    Mitigation of odometry drift with a single ranging link in GNSS-limited environments

    Vision-based systems can estimate a vehicle's position and attitude at low cost and with a simple implementation, but their performance is highly sensitive to environmental conditions. Moreover, because visual odometry is a dead-reckoning process, estimation errors accumulate without bound. To improve robustness to environmental conditions, vision-based systems can be augmented with inertial sensors, and loop closing can be applied to reduce drift. However, with on-board sensors alone, the vehicle's poses can only be estimated in a local navigation frame that is defined arbitrarily for each mission. To obtain globally referenced poses, absolute position estimates from GNSS can be fused with on-board measurements (from either vision-only or visual-inertial odometry). In many cases, however (e.g. urban canyons, indoor environments), GNSS-based positioning is unreliable or entirely unavailable due to signal blockage and interruptions, while ranging links can still be obtained from various sources, such as signals of opportunity or low-cost radio-based ranging modules. We propose a graph-based method that fuses on-board odometry data with ranging measurements to mitigate pose drift in environments where GNSS-based positioning is unavailable. The proposed algorithm is evaluated with both synthetic and real data.
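
    As a concrete illustration of the idea (a minimal sketch, not the authors' implementation), the snippet below fuses noisy 2D odometry increments with range measurements to a single beacon at an assumed known position, and solves the resulting factor-graph least-squares problem with SciPy. The trajectory, beacon location, and noise levels are all invented for the example.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        beacon = np.array([10.0, 5.0])                 # assumed known beacon position
        true = np.cumsum([[1.0, 0.2]] * 21, axis=0)    # ground-truth 2D positions
        odom = np.diff(true, axis=0) + rng.normal(0, 0.05, (20, 2))  # noisy odometry
        ranges = np.linalg.norm(true - beacon, axis=1) + rng.normal(0, 0.10, 21)

        def residuals(x):
            p = x.reshape(-1, 2)
            r_odom = (np.diff(p, axis=0) - odom).ravel()          # odometry factors
            r_rng = np.linalg.norm(p - beacon, axis=1) - ranges   # ranging factors
            return np.concatenate([r_odom / 0.05, r_rng / 0.10])  # whiten by sigma

        x0 = (true[0] + np.vstack([[0.0, 0.0], np.cumsum(odom, axis=0)])).ravel()
        est = least_squares(residuals, x0).x.reshape(-1, 2)       # solve the graph
        print("endpoint error:", np.linalg.norm(est[-1] - true[-1]))

    Pure dead reckoning (the initialization x0) drifts steadily, while the optimized trajectory stays tied to the beacon through the range factors, which is the drift-mitigation effect the abstract describes.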

    Homography-Based State Estimation for Autonomous Exploration in Unknown Environments

    This thesis presents the development of vision-based state estimation algorithms that enable a quadcopter UAV to navigate and explore a previously unknown, GPS-denied environment. The algorithms are based on tracked Speeded-Up Robust Features (SURF) points and the homography relationship that relates camera motion to the locations of tracked planar feature points in the image plane. An extended Kalman filter is developed to fuse measurements from an onboard inertial measurement unit (accelerometers and rate gyros) with vision-based measurements derived from the homography relationship. The measurement update in the filter therefore requires processing images from a monocular camera to detect and track planar feature points, followed by computation of the homography parameters. The algorithms are designed to be independent of GPS, since GPS can be unreliable or unavailable in many operational environments of interest, such as urban environments. They are first evaluated using simulated data from a quadcopter UAV and then tested using post-processed video and IMU data from flights of an autonomous quadcopter. The homography-based state estimator was effective, but it accumulates drift errors over time because the homography yields only relative position measurements.
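
    For reference, the planar homography relation this work builds on can be written H = K (R + t n^T / d) K^-1 for a camera with intrinsics K observing a plane with normal n at distance d. The sketch below, with made-up intrinsics and motion, constructs H from a hypothesized camera displacement and predicts where a planar feature point reappears in the second view.

        import numpy as np

        K = np.array([[500.0, 0.0, 320.0],
                      [0.0, 500.0, 240.0],
                      [0.0, 0.0, 1.0]])               # assumed camera intrinsics
        theta = np.deg2rad(5.0)                       # hypothesized yaw between views
        R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0, 0.0, 1.0]])
        t = np.array([0.2, 0.0, 0.1])                 # hypothesized translation (m)
        n, d = np.array([0.0, 0.0, 1.0]), 5.0         # plane normal and distance

        H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)   # planar homography

        p1 = np.array([300.0, 250.0, 1.0])            # feature pixel in view 1
        p2 = H @ p1
        p2 /= p2[2]                                   # predicted pixel in view 2
        print(p2[:2])

    In the filter described above, this relation runs in the opposite direction: H is estimated from tracked feature correspondences, and its parameters serve as the vision-based measurement of the camera motion.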

    Visual-UWB Navigation System for Unknown Environments

    Navigation applications relying on the Global Navigation Satellite System (GNSS) are limited in indoor environments and in GNSS-denied outdoor terrain such as dense urban areas or forests. In this paper, we present a novel accurate, robust, and low-cost GNSS-independent navigation system composed of a monocular camera and ultra-wideband (UWB) transceivers. Visual techniques achieve excellent results when computing the incremental motion of the sensor, and UWB methods provide promising localization accuracy thanks to the high time resolution of UWB ranging signals. However, monocular visual techniques suffer from scale ambiguity and are thus unsuitable for applications requiring metric results, while UWB methods assume that the positions of the UWB anchors are pre-calibrated and known, precluding their use in unknown and challenging environments. To this end, we leverage the monocular camera and UWB together to create a map of visual features and UWB anchors. We propose a visual-UWB Simultaneous Localization and Mapping (SLAM) algorithm that tightly combines visual and UWB measurements into a joint non-linear optimization problem on a Lie manifold. The 6 degrees-of-freedom (DoF) state of the vehicles and the map are estimated by minimizing UWB ranging errors and landmark reprojection errors. Our navigation system starts with an exploratory task that performs real-time visual-UWB SLAM to obtain a global map, followed by a navigation task that reuses this map. The tasks can be performed by different vehicles of a heterogeneous team, varying in equipped sensors and payload capability. We validate our system on public datasets, achieving typical centimeter-level accuracy and 0.1% scale error. (Proceedings of the 31st International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2018.)
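
    The joint optimization has a simple residual structure: pixel reprojection terms for visual landmarks plus range terms for UWB anchors, each whitened by its noise level. The sketch below shows only that structure; the pose layout, helper names, and sigmas are illustrative assumptions, and the paper itself optimizes on a manifold with a full solver.

        import numpy as np

        def project(R, t, landmark, K):
            """Pinhole projection of a world point into a camera with x_c = R x_w + t."""
            p = K @ (R @ landmark + t)
            return p[:2] / p[2]

        def joint_residuals(poses, landmarks, anchors, vis_obs, uwb_obs, K,
                            sigma_px=1.0, sigma_rng=0.1):
            res = []
            for i, j, uv in vis_obs:                  # (pose idx, landmark idx, pixel)
                R, t = poses[i]
                res.extend((project(R, t, landmarks[j], K) - uv) / sigma_px)
            for i, k, r in uwb_obs:                   # (pose idx, anchor idx, range)
                R, t = poses[i]
                center = -R.T @ t                     # vehicle position in the world
                res.append((np.linalg.norm(center - anchors[k]) - r) / sigma_rng)
            return np.array(res)

    Because the anchor positions appear as variables alongside the poses and landmarks, the same cost both localizes the vehicle and calibrates the UWB anchors, which is what lets the system operate in unknown environments.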

    Cooperative swarm localization and mapping with inter-agent ranging

    Compared to a single robot, a swarm can complete a given task in less time and is more robust to failures of individual agents. To successfully execute cooperative missions with multiple agents, accurate relative positioning is important. If global positioning (e.g. GNSS-based positioning) is available, relative positions can be computed easily. In environments where global positioning is unreliable or unavailable, visual odometry with onboard cameras can be applied to estimate each agent's ego-motion. From these self-localization results, relative positions between agents can be estimated once the relative geometry between agents is initialized. However, since visual odometry is a dead-reckoning process, its estimation errors inherently accumulate without bound. We propose a cooperative localization method that combines visual odometry with inter-agent range measurements. The proposed method reduces drift in position estimates with very modest requirements on the communication channel between agents.
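
    A minimal two-agent sketch of this idea, assuming invented trajectories and noise levels: each agent dead-reckons from its own odometry, inter-agent ranges tie the two trajectories together, and tight priors on the first poses stand in for the initialized relative geometry mentioned above.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(1)
        T = 30                                         # poses per agent
        true_a = np.cumsum([[1.0, 0.0]] * T, axis=0)   # agent A ground truth
        true_b = np.cumsum([[1.0, 0.5]] * T, axis=0) + np.array([0.0, 5.0])
        odom_a = np.diff(true_a, axis=0) + rng.normal(0, 0.05, (T - 1, 2))
        odom_b = np.diff(true_b, axis=0) + rng.normal(0, 0.05, (T - 1, 2))
        r_ab = np.linalg.norm(true_a - true_b, axis=1) + rng.normal(0, 0.10, T)

        def residuals(x):
            pa, pb = x[:2 * T].reshape(T, 2), x[2 * T:].reshape(T, 2)
            return np.concatenate([
                (np.diff(pa, axis=0) - odom_a).ravel() / 0.05,    # A's odometry
                (np.diff(pb, axis=0) - odom_b).ravel() / 0.05,    # B's odometry
                (np.linalg.norm(pa - pb, axis=1) - r_ab) / 0.10,  # inter-agent ranges
                (pa[0] - true_a[0]) / 1e-3,                       # priors standing in
                (pb[0] - true_b[0]) / 1e-3,                       # for initialization
            ])

        x0 = np.concatenate([
            (true_a[0] + np.vstack([[0.0, 0.0], np.cumsum(odom_a, axis=0)])).ravel(),
            (true_b[0] + np.vstack([[0.0, 0.0], np.cumsum(odom_b, axis=0)])).ravel(),
        ])
        est = least_squares(residuals, x0).x           # jointly corrected trajectories

    Note that only the scalar ranges r_ab would need to cross the communication channel, which is why the bandwidth requirements stay modest.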

    Robust Indoor Localization with Ranging-IMU Fusion

    Indoor wireless ranging is a promising approach to low-power, high-accuracy localization of wearable devices. A primary challenge in this domain stems from non-line-of-sight propagation of radio waves. This study tackles a fundamental issue in wireless ranging: the difficulty of determining multipath conditions in real time, especially in challenging settings such as when there is no direct line of sight. We address it by fusing range measurements with inertial measurements from a low-cost inertial measurement unit (IMU). For this purpose, we introduce a novel asymmetric noise model crafted specifically for non-Gaussian multipath disturbances. We also present a novel Levenberg-Marquardt (LM)-family trust-region adaptation of the iSAM2 fusion algorithm, optimized for robust performance on our ranging-IMU fusion problem. We evaluate our solution in a densely occupied real office environment. It achieves temporally consistent localization with an average absolute accuracy of approximately 0.3 m in real-world settings. Furthermore, our results indicate that comparable accuracy can be achieved even with infrequent (1 Hz) range measurements.
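
    The paper's noise model is not reproduced here, but its motivation can be illustrated: multipath and non-line-of-sight propagation can only lengthen a measured range, so a plausible asymmetric cost keeps a quadratic penalty on the short side while down-weighting large positive residuals with a one-sided Huber-style branch. The function below is an illustrative stand-in, not the paper's actual model.

        import numpy as np

        def asymmetric_range_cost(e, sigma=0.1, delta=0.2):
            """One-sided robust cost: quadratic for e <= 0, Huber-like for e > 0.

            e is the ranging residual (measured minus predicted range, meters);
            multipath only inflates measured ranges, so outliers are positive.
            """
            z = np.asarray(e, dtype=float) / sigma
            d = delta / sigma
            quad = 0.5 * z ** 2                        # Gaussian branch
            lin = d * (np.abs(z) - 0.5 * d)            # linear branch (robust tail)
            return np.where((z <= 0) | (np.abs(z) <= d), quad, lin)

        # A -0.3 m residual is penalized fully; a +2.0 m multipath outlier is not.
        print(asymmetric_range_cost([-0.3, 0.05, 0.3, 2.0]))

    In a factor-graph setting like the one described above, such a cost would wrap each ranging factor so that inflated non-line-of-sight ranges cannot drag the solution.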

    Towards Collaborative Simultaneous Localization and Mapping: a Survey of the Current Research Landscape

    Motivated by the tremendous progress witnessed in recent years, this paper presents a survey of the scientific literature on Collaborative Simultaneous Localization and Mapping (C-SLAM), also known as multi-robot SLAM. With fleets of self-driving cars on the horizon and the rise of multi-robot systems in industrial applications, we believe that collaborative SLAM will soon become a cornerstone of future robotic applications. In this survey, we introduce the basic concepts of C-SLAM and present a thorough literature review. We also outline the major challenges and limitations of C-SLAM in terms of robustness, communication, and resource management. We conclude by exploring the area's current trends and promising research avenues.

    Location-Enabled IoT (LE-IoT): A Survey of Positioning Techniques, Error Sources, and Mitigation

    The Internet of Things (IoT) has started to empower the future of many industrial and mass-market applications. Localization techniques are becoming key to adding location context to IoT data without human perception or intervention. Meanwhile, newly emerged Low-Power Wide-Area Network (LPWAN) technologies offer advantages such as long range, low power consumption, low cost, massive connectivity, and the capability to communicate both indoors and outdoors. These features make LPWAN signals strong candidates for mass-market localization applications. However, various error sources limit the localization performance achievable with such IoT signals. This paper reviews IoT localization in the following sequence: localization system review -- localization data sources -- localization algorithms -- localization error sources and mitigation -- localization performance evaluation. Compared to related surveys, this paper offers a more comprehensive and state-of-the-art review of IoT localization methods, an original review of IoT localization error sources and mitigation, an original review of IoT localization performance evaluation, and a more comprehensive review of IoT localization applications, opportunities, and challenges. The survey thus provides guidance for peers interested in enabling localization in existing IoT systems, using IoT systems for localization, or integrating IoT signals with existing localization sensors.
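
    To make one of the surveyed data sources concrete: LPWAN received signal strength can be converted to a coarse range with the standard log-distance path-loss model, d = d0 * 10^((P0 - RSSI) / (10 n)). The reference power P0 and path-loss exponent n below are made-up values that would need per-site calibration in practice.

        import numpy as np

        def rssi_to_range(rssi_dbm, p0_dbm=-40.0, n=2.7, d0=1.0):
            """Estimate distance (m) from RSSI via the log-distance path-loss model."""
            return d0 * 10 ** ((p0_dbm - rssi_dbm) / (10 * n))

        print(rssi_to_range(-75.0))   # an RSSI of -75 dBm maps to roughly 20 m

    The sensitivity of this mapping to the exponent n is one reason RSSI-based ranging is among the coarser data sources the survey discusses, compared with time-based techniques.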