18 research outputs found

    Multi-Antenna Vision-and-Inertial-Aided CDGNSS for Micro Aerial Vehicle Pose Estimation

    A system is presented for multi-antenna carrier phase differential GNSS (CDGNSS)-based pose (position and orientation) estimation aided by monocular visual measurements and a smartphone-grade inertial sensor. The system is designed for micro aerial vehicles, but can be applied generally for low-cost, lightweight, high-accuracy, geo-referenced pose estimation. Visual and inertial measurements enable robust operation despite GNSS degradation by constraining uncertainty in the dynamics propagation, which improves fixed-integer CDGNSS availability and reliability in areas with limited sky visibility. No prior work has demonstrated an increased CDGNSS integer fixing rate when incorporating visual measurements with smartphone-grade inertial sensing. A central pose estimation filter receives measurements from separate CDGNSS position and attitude estimators, visual feature measurements based on the ROVIO measurement model, and inertial measurements. The filter's pose estimates are fed back as a prior for CDGNSS integer fixing. A performance analysis under both simulated and real-world GNSS degradation shows that visual measurements greatly increase the availability and accuracy of low-cost inertial-aided CDGNSS pose estimation.
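    As a toy illustration of the central-filter idea (assumed values only, not the paper's actual filter, which also fuses ROVIO visual features and a full inertial strapdown model), the sketch below fuses a CDGNSS position fix and a multi-antenna CDGNSS yaw estimate into one pose state; the resulting posterior is what would be fed back as a prior for integer fixing.

        import numpy as np

        def kalman_update(x, P, z, H, R):
            # Standard Kalman measurement update.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        # Prior from IMU-driven propagation (hypothetical values): state = [x, y, z, yaw]
        x = np.array([1.00, 2.00, 10.00, 0.50])
        P = np.diag([0.5, 0.5, 0.8, 0.05]) ** 2

        # CDGNSS position update (centimeter-to-decimeter-level fix)
        H_pos = np.hstack([np.eye(3), np.zeros((3, 1))])
        x, P = kalman_update(x, P, np.array([1.02, 1.95, 10.10]), H_pos,
                             np.diag([0.02, 0.02, 0.04]) ** 2)

        # Multi-antenna CDGNSS attitude (yaw) update
        H_yaw = np.array([[0.0, 0.0, 0.0, 1.0]])
        x, P = kalman_update(x, P, np.array([0.48]), H_yaw, np.array([[0.01 ** 2]]))

        # The updated pose and covariance would be fed back as the prior that
        # constrains the next epoch's integer ambiguity search.
        print(x)
        print(np.sqrt(np.diag(P)))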

    GNSS-stereo-inertial SLAM for arable farming

    The accelerating pace in the automation of agricultural tasks demands highly accurate and robust localization systems for field robots. Simultaneous Localization and Mapping (SLAM) methods inevitably accumulate drift on exploratory trajectories and primarily rely on place revisiting and loop closing to keep a bounded global localization error. Loop closure techniques are significantly challenging in agricultural fields, as the local visual appearance of different views is very similar and might change easily due to weather effects. A suitable alternative in practice is to employ global sensor positioning systems jointly with the rest of the robot sensors. In this paper we propose and implement the fusion of global navigation satellite system (GNSS), stereo views, and inertial measurements for localization purposes. Specifically, we incorporate, in a tightly coupled manner, GNSS measurements into the stereo-inertial ORB-SLAM3 pipeline. We thoroughly evaluate our implementation in the sequences of the Rosario data set, recorded by an autonomous robot in soybean fields, and our own in-house data. Our data includes measurements from a conventional GNSS, rarely included in evaluations of state-of-the-art approaches. We characterize the performance of GNSS-stereo-inertial SLAM in this application case, reporting pose error reductions between 10% and 30% compared to visual-inertial and loosely coupled GNSS-stereo-inertial baselines. In addition to such analysis, we also release the code of our implementation as open source.
    Comment: This paper has been accepted for publication in Journal of Field Robotics, 202
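    A minimal sketch of what "tightly coupled" can mean here (hypothetical names and values, not the authors' ORB-SLAM3 code): each keyframe with a GNSS fix contributes a residual between the antenna position predicted from the estimated body pose plus a lever arm, and the measured fix expressed in the local world/ENU frame.

        import numpy as np

        def gnss_residual(R_wb, p_wb, t_ba, z_enu, sqrt_info):
            # R_wb, p_wb : keyframe body-to-world rotation and translation
            # t_ba       : antenna lever arm expressed in the body frame
            # z_enu      : GNSS fix converted to the local ENU/world frame
            # sqrt_info  : square-root information matrix of the fix covariance
            p_antenna = R_wb @ t_ba + p_wb          # predicted antenna position
            return sqrt_info @ (p_antenna - z_enu)  # whitened 3x1 residual for the optimizer

        # Example with hypothetical values
        R_wb = np.eye(3)
        p_wb = np.array([10.0, 5.0, 1.2])
        t_ba = np.array([0.0, 0.0, 0.3])
        z_enu = np.array([10.05, 4.98, 1.52])
        sqrt_info = np.linalg.cholesky(np.linalg.inv(np.diag([0.05, 0.05, 0.10]) ** 2)).T
        print(gnss_residual(R_wb, p_wb, t_ba, z_enu, sqrt_info))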

    GICI-LIB: A GNSS/INS/Camera Integrated Navigation Library

    Accurate navigation is essential for autonomous robots and vehicles. In recent years, the integration of the Global Navigation Satellite System (GNSS), Inertial Navigation System (INS), and camera has garnered considerable attention due to its robustness and high accuracy in diverse environments. In such systems, fully exploiting GNSS is cumbersome because of the diverse choices of formulations, error models, satellite constellations, signal frequencies, and service types, which lead to different precision, robustness, and usage dependencies. To clarify the capabilities of GNSS algorithms and to accelerate the development of GNSS-based multi-sensor fusion algorithms, we open source the GNSS/INS/Camera Integration Library (GICI-LIB), together with detailed documentation and a comprehensive land vehicle dataset. A factor graph optimization-based multi-sensor fusion framework is established, which combines almost all GNSS measurement error sources by fully considering temporal and spatial correlations between measurements. The graph structure is designed for flexibility, making it easy to form any kind of integration algorithm. For illustration, four Real-Time Kinematic (RTK)-based algorithms from GICI-LIB are evaluated using our dataset. Results confirm the potential of the GICI system to provide continuous, precise navigation solutions in a wide spectrum of urban environments.
    Comment: Open-source: https://github.com/chichengcn/gici-open. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
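    As a rough illustration of how individual GNSS error sources enter such a factor graph (a sketch with hypothetical values, not GICI-LIB's actual interface), a single pseudorange residual can be written as measured minus predicted range, with the receiver position and clock bias as optimized states and the satellite clock, tropospheric, and ionospheric terms as modeled corrections. One such residual would be added per satellite and epoch.

        import numpy as np

        C = 299_792_458.0  # speed of light [m/s]

        def pseudorange_residual(p_rcv, dt_rcv, p_sat, dt_sat, trop, iono, measured):
            # p_rcv, dt_rcv : receiver ECEF position [m] and clock bias [s] (states)
            # p_sat, dt_sat : satellite ECEF position [m] and clock bias [s]
            # trop, iono    : tropospheric and ionospheric delays [m]
            geometric = np.linalg.norm(p_sat - p_rcv)
            predicted = geometric + C * (dt_rcv - dt_sat) + trop + iono
            return measured - predicted  # [m]

        # Hypothetical example values
        p_rcv = np.array([-2_850_000.0, 4_650_000.0, 3_280_000.0])
        p_sat = np.array([-15_210_000.0, 18_930_000.0, 12_560_000.0])
        meas = np.linalg.norm(p_sat - p_rcv) + 35.0
        print(pseudorange_residual(p_rcv, 1.2e-7, p_sat, -3.4e-5, 2.4, 3.1, meas))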

    3D LiDAR Aided GNSS NLOS Mitigation for Reliable GNSS-RTK Positioning in Urban Canyons

    GNSS and LiDAR odometry are complementary, as they provide absolute and relative positioning, respectively. Their integration in a loosely coupled manner is straightforward, but is challenged in urban canyons by GNSS signal reflections. Recently proposed 3D LiDAR-aided (3DLA) GNSS methods employ the point cloud map to identify non-line-of-sight (NLOS) reception of GNSS signals. This helps the GNSS receiver obtain improved urban positioning, but not at the sub-meter level. GNSS real-time kinematic (RTK) positioning uses carrier phase measurements to obtain decimeter-level accuracy. In urban areas, GNSS RTK is not only challenged by multipath and NLOS-affected measurements but also suffers from signal blockage by buildings. The latter makes it harder to resolve the ambiguities within the carrier phase measurements; in other words, the observability of the ambiguity resolution (AR) model is greatly decreased. This paper proposes to generate virtual satellite (VS) measurements using selected LiDAR landmarks from accumulated 3D point cloud maps (PCM). These LiDAR-PCM-derived VS measurements are tightly coupled with GNSS pseudorange and carrier phase measurements. The VS measurements thus provide complementary constraints, namely low-elevation-angle measurements in the across-street direction. The implementation uses factor graph optimization to obtain an accurate float solution of the ambiguities before it is fed into LAMBDA. The effectiveness of the proposed method has been validated on our recently open-sourced challenging dataset, UrbanNav. The results show that the fix rate of the proposed 3DLA GNSS RTK is about 30%, while conventional GNSS RTK only achieves about 14%. In addition, the proposed method achieves sub-meter positioning accuracy in most of the data collected in challenging urban areas.
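    The following self-contained sketch illustrates the float part of the RTK problem described above: a single-epoch double-difference (DD) least-squares solution for the baseline and float ambiguities, which LAMBDA would then fix to integers. The satellite geometry, noise levels, and ambiguities are hypothetical; a LiDAR-derived virtual satellite would simply contribute additional rows with a low-elevation, across-street line-of-sight vector.

        import numpy as np

        rng = np.random.default_rng(1)
        lam = 0.1903                              # L1 wavelength [m]
        b_true = np.array([1.5, -0.8, 0.3])       # rover-minus-base baseline [m]

        # Unit line-of-sight vectors (receiver -> satellite); row 0 is the reference satellite.
        e = np.array([[0.0, 0.3, 0.95], [0.7, 0.1, 0.7], [-0.6, 0.4, 0.7],
                      [0.2, -0.8, 0.55], [0.5, 0.6, 0.62]])
        e /= np.linalg.norm(e, axis=1, keepdims=True)
        n_true = np.array([3.0, -5.0, 7.0, 2.0])  # true DD integer ambiguities [cycles]

        G = -(e[1:] - e[0])                       # DD geometry matrix, (m-1) x 3
        rho = G @ b_true                          # DD geometric ranges [m]
        code = rho + rng.normal(0, 0.30, rho.shape)                   # DD pseudorange [m]
        phase = rho / lam + n_true + rng.normal(0, 0.01, rho.shape)   # DD carrier phase [cycles]

        # Unknowns: [baseline (3), float ambiguities (m-1)]; weight rows by 1/sigma.
        m1 = len(rho)
        A = np.block([[G, np.zeros((m1, m1))], [G / lam, np.eye(m1)]])
        y = np.concatenate([code, phase])
        w = np.concatenate([np.full(m1, 1 / 0.30), np.full(m1, 1 / 0.01)])
        x_float, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)

        # A LAMBDA-style search would now fix the integers from x_float[3:] and their
        # covariance; the paper obtains the float solution via factor graph optimization.
        print("float baseline   :", x_float[:3], "(true:", b_true, ")")
        print("float ambiguities:", x_float[3:], "(true:", n_true, ")")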

    Cooperation and Autonomy for UAV Swarms

    In the last few years, the level of autonomy of mini- and micro-Unmanned Aerial Vehicles (UAVs) has increased thanks to the miniaturization of flight control systems and payloads, and to the availability of computationally affordable algorithms for autonomous Guidance, Navigation and Control (GNC). Despite this technological evolution, however, operations conducted by a single micro-UAV still present limits in terms of performance, coverage and reliability. The scope of this thesis is to overcome single-UAV limits by developing new distributed GNC architectures and technologies in which the cooperative nature of a UAV formation is exploited to obtain navigation information. Moreover, this thesis aims at increasing UAV autonomy by developing a take-off and landing technique that permits fully autonomous operations, while also taking into account regulations and the required level of safety. In addition to the typical performance limitations of micro-UAVs, this thesis also considers applications where a multi-vehicle architecture can improve coverage and reliability and allow real-time data fusion. Furthermore, given the low cost of micro-UAV systems with consumer-grade avionics, several UAVs can be more cost effective than a single vehicle equipped with high-performance sensors. Among the research challenges involved in designing and operating a distributed system of vehicles working together for real-time applications, this thesis focuses on the following topics regarding cooperation and autonomy.
    Improvement of UAV navigation performance: this topic aims at improving the navigation performance of a UAV flying cooperatively with one or more UAVs, considering that integrating only low-cost inertial measurement units (IMUs), Global Navigation Satellite Systems (GNSS) and magnetometers allows real-time stabilization and flight control but may not be adequate for applications requiring fine sensor pointing. The focus is on outdoor environments, and all vehicles of the formation are assumed to fly under nominal Global Positioning System (GPS) coverage; hence, the main navigation improvement concerns attitude estimation. The key concept is to exploit Differential GPS (DGPS) among vehicles and vision-based tracking to build a virtual additional navigation sensor, whose information is then integrated within a sensor fusion algorithm based on an Extended Kalman Filter (EKF). Both numerical simulations and flight results show the potential for sub-degree angular accuracy. In particular, proper formation geometries, even with relatively small baselines, allow a heading uncertainty approaching 0.1°, a very significant result given the typical performance of IMUs on board small UAVs.
    UAV navigation in GPS-challenging environments: this topic aims at developing algorithms for improving the navigation performance of UAVs flying in GPS-challenging environments (e.g. natural or urban canyons, or mixed outdoor-indoor settings), where GPS measurements can be unavailable and/or unreliable. These algorithms exploit aiding measurements from one or more cooperative UAVs flying under nominal GPS coverage and are based on the concepts of relative sensing and information sharing. The developed sensor fusion architecture is based on a tightly coupled EKF that integrates measurements from onboard inertial sensors and magnetometers, the available GPS pseudoranges, position information from cooperative UAVs, and line-of-sight information derived from visual sensors. In addition, if available, measurements from a monocular pose estimation algorithm can be integrated within the EKF to counteract position error drift. Results show that aiding measurements from a single cooperative UAV do not eliminate position error drift. However, combining this approach with standalone visual SLAM, integrating valid pseudoranges in the tightly coupled filter, or exploiting ad hoc commanded motion of the cooperative vehicle under GPS coverage drastically reduces the drift, maintaining meter-level positioning accuracy even in the absence of reliable GPS observables.
    Autonomous take-off and landing: this activity, conducted during a six-month Academic Guest period at ETH Zürich, focuses on increasing the reliability, versatility and flight time of UAVs by developing an autonomous take-off and landing technique. The landing phase is often the most critical, as it involves delicate maneuvers such as landing on a station for recharging or on a ground carrier for transportation. These procedures are subject to constraints on time and space and must be robust to changes in environmental conditions. To address them, a guidance approach based on intrinsic Tau guidance theory is integrated within the end-to-end software developed at ETH Zürich. The method has been validated both in simulation and in real-platform experiments, using rotary-wing UAVs landing on static platforms. Results show that it achieves smooth landings within 10 cm accuracy, with easily adjustable trajectory parameters.
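    For context on the landing results, the sketch below illustrates only the basic one-dimensional constant tau-dot strategy that underlies tau guidance (the thesis uses the full intrinsic Tau guidance law within the ETH Zürich software stack; values here are hypothetical): keeping d(tau)/dt constant at 0 < k < 0.5, with tau the gap divided by the gap rate, closes the gap with vanishing velocity, which is what makes the touchdown smooth.

        # Constant tau-dot gap closure: tau = x / v, enforce d(tau)/dt = k.
        k = 0.3              # tau-dot coupling constant (0 < k < 0.5 for soft contact)
        x, v = 10.0, -1.0    # vertical gap [m] and closure rate [m/s]
        dt, t = 0.01, 0.0

        while x > 0.01:
            # d(tau)/dt = 1 - x*a/v**2 = k  =>  a = (1 - k) * v**2 / x
            a = (1.0 - k) * v * v / x
            v += a * dt
            x += v * dt
            t += dt

        print(f"touchdown after {t:.1f} s with residual speed {abs(v):.3f} m/s")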

    Optical Navigation Sensor for Runway Relative Positioning of Aircraft during Final Approach

    Precise navigation is often performed by fusing different sensors. Among these, optical sensors use image features to obtain the position and attitude of the camera. Runway-relative navigation during final approach is a special case in which robust and continuous detection of the runway is required. This paper presents a robust threshold marker detection method for monocular cameras and introduces an on-board real-time implementation with flight test results. Results with narrow and wide field-of-view optics are compared. The image processing approach is also evaluated on image data captured by a different on-board system. The purely optical approach of this paper increases sensor redundancy because, unlike most robust runway detectors, it does not require input from an inertial sensor.
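    A minimal monocular sketch in this spirit, using OpenCV (hypothetical file name and geometric gates, not the paper's actual detector): bright threshold-marking stripes are segmented by thresholding and kept if they form long, thin blobs.

        import cv2
        import numpy as np

        img = cv2.imread("approach_frame.png")   # hypothetical approach image
        if img is None:
            raise SystemExit("provide an approach image")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Bright paint against darker asphalt: Otsu threshold, then clean up.
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for c in contours:
            (cx, cy), (w, h), angle = cv2.minAreaRect(c)
            if min(w, h) < 1:
                continue
            elongation = max(w, h) / min(w, h)
            if cv2.contourArea(c) > 200 and elongation > 3:   # long thin stripes
                candidates.append(((cx, cy), (w, h), angle))

        print(f"{len(candidates)} threshold-bar candidates")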

    Improving Navigation in GNSS-challenging Environments: Multi-UAS Cooperation and Generalized Dilution of Precision

    This paper presents an approach to tackle navigation challenges for Unmanned Aircraft Systems flying under non-nominal GNSS coverage. The concept used to improve navigation performance in these environments consists in using one or more cooperative platforms and relative sensing measurements (based on vision and/or ranging) as navigation aids. The paper details the cooperative navigation filter, which can exploit multiple cooperative platforms and multiple relative measurements while also using partial GNSS information. The achievable navigation accuracy can be predicted using the concept of "generalized dilution of precision", which derives from applying the idea of dilution of precision to the mathematical structure of the cooperative navigation filter. Values and trends of the generalized dilution of precision are discussed as a function of the relative geometry in common GNSS-challenging scenarios. Finally, navigation performance is assessed based on simulations and on multi-drone flight tests.
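    A crude numerical illustration of the idea (not the paper's formulation, which is derived from the full cooperative filter; the geometry below is hypothetical): with satellites visible only along the street axis the classical DOP is poor, and appending a row for a relative-sensing measurement towards a cooperative UAV off to the side of the street improves it.

        import numpy as np

        def dop(H):
            # Dilution of precision from a geometry matrix H (one row per measurement).
            return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

        # Urban-canyon-like geometry: satellites visible only along the street (x)
        # axis, so the across-street (y) direction is poorly observed.
        e_sats = np.array([[0.5, 0.05, 0.86],
                           [-0.5, 0.02, 0.87],
                           [0.7, -0.04, 0.71],
                           [-0.3, 0.06, 0.95]])
        e_sats /= np.linalg.norm(e_sats, axis=1, keepdims=True)
        H_gnss = np.hstack([e_sats, np.ones((4, 1))])   # position columns + clock column

        # Relative measurement towards a cooperative UAV across the street: a nearly
        # horizontal unit vector and no clock term.
        e_coop = np.array([0.2, 0.95, 0.24])
        e_coop = e_coop / np.linalg.norm(e_coop)
        H_aug = np.vstack([H_gnss, np.append(e_coop, 0.0)])

        print("GNSS-only DOP        :", round(dop(H_gnss), 1))
        print("with cooperative link:", round(dop(H_aug), 1))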

    Advanced Location-Based Technologies and Services

    Since the publication of the first edition in 2004, advances in mobile devices, positioning sensors, WiFi fingerprinting, and wireless communications, among others, have paved the way for developing new and advanced location-based services (LBSs). This second edition provides up-to-date information on LBSs, including WiFi fingerprinting, mobile computing, geospatial clouds, geospatial data mining, location privacy, and location-based social networking. It also includes new chapters on application areas such as LBSs for public health, indoor navigation, and advertising. In addition, the chapter on remote sensing has been revised to address recent advancements.