442 research outputs found

    3D LiDAR Aided GNSS NLOS Mitigation for Reliable GNSS-RTK Positioning in Urban Canyons

    GNSS and LiDAR odometry are complementary, as they provide absolute and relative positioning, respectively. Their integration in a loosely-coupled manner is straightforward but is challenged in urban canyons by GNSS signal reflections. Recently proposed 3D LiDAR-aided (3DLA) GNSS methods employ a point cloud map to identify non-line-of-sight (NLOS) reception of GNSS signals. This helps the GNSS receiver obtain improved urban positioning, but not at the sub-meter level. GNSS real-time kinematic (RTK) positioning uses carrier phase measurements to obtain decimeter-level accuracy. In urban areas, GNSS RTK is not only challenged by multipath- and NLOS-affected measurements but also suffers from signal blockage by buildings. The latter makes it harder to resolve the ambiguities within the carrier phase measurements; in other words, the model observability of the ambiguity resolution (AR) is greatly decreased. This paper proposes to generate virtual satellite (VS) measurements using selected LiDAR landmarks from accumulated 3D point cloud maps (PCM). These LiDAR-PCM-derived VS measurements are tightly coupled with the GNSS pseudorange and carrier phase measurements. The VS measurements thus provide complementary constraints, namely low-elevation-angle measurements in the across-street direction. The implementation uses factor graph optimization to solve for an accurate float solution of the ambiguities before they are fed into LAMBDA. The effectiveness of the proposed method has been validated on our recently open-sourced challenging dataset, UrbanNav. The results show that the fix rate of the proposed 3DLA GNSS RTK is about 30%, while conventional GNSS RTK only achieves about 14%. In addition, the proposed method achieves sub-meter positioning accuracy on most of the data collected in challenging urban areas.
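
    As a rough illustration of the geometric idea (not the paper's factor-graph/LAMBDA pipeline), the sketch below treats ranges to LiDAR map landmarks as low-elevation virtual satellites in a plain Gauss-Newton position fix and compares the dilution of precision with and without them; the satellite and landmark coordinates are invented, and receiver clock bias and carrier-phase ambiguities are omitted for brevity.

        import numpy as np

        def unit(v):
            return v / np.linalg.norm(v)

        def solve_position(anchors, ranges, x0, iters=10):
            # Gauss-Newton on range measurements rho_i = ||a_i - x||
            # (receiver clock bias and carrier-phase ambiguities deliberately omitted).
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                H = np.array([-unit(a - x) for a in anchors])                    # Jacobian rows
                r = ranges - np.array([np.linalg.norm(a - x) for a in anchors])  # residuals
                dx, *_ = np.linalg.lstsq(H, r, rcond=None)
                x = x + dx
            return x, H

        truth = np.array([0.0, 0.0, 0.0])
        # High-elevation GNSS satellites only: the across-street (y) components of the
        # line-of-sight vectors are all small and similar, so y is weakly observable.
        sats = [np.array([1.0e7, 1.0e6, 2.0e7]), np.array([-1.0e7, 2.0e6, 2.2e7]),
                np.array([2.0e6, 1.0e6, 2.5e7]), np.array([-3.0e6, 1.5e6, 2.1e7])]
        rho = [np.linalg.norm(s - truth) for s in sats]
        # LiDAR landmarks from the point cloud map act as low-elevation "virtual satellites"
        # with strong across-street components.
        landmarks = [np.array([30.0, 80.0, 5.0]), np.array([-25.0, -60.0, 8.0])]
        vs_rho = [np.linalg.norm(l - truth) for l in landmarks]

        for label, anchors, ranges in [("GNSS only", sats, rho),
                                       ("GNSS + LiDAR virtual satellites", sats + landmarks, rho + vs_rho)]:
            x, H = solve_position(anchors, np.array(ranges), x0=[10.0, 10.0, 10.0])
            gdop = np.sqrt(np.trace(np.linalg.inv(H.T @ H)))    # geometric dilution of precision
            print(f"{label}: estimate {x.round(3)}, GDOP {gdop:.2f}")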

    Localization and Mapping for Autonomous Driving: Fault Detection and Reliability Analysis

    Autonomous driving has advanced rapidly over the past decades and has expanded its applications to multiple fields, both indoor and outdoor. One of the significant issues for a highly automated vehicle (HAV) is how to increase its safety level. A key requirement for safe automated driving is reliable localization and navigation, with which an intelligent vehicle or robot can make reliable decisions about its driving path or react to sudden events along that path. A map with rich environment information is essential to support an autonomous driving system in meeting these high requirements. Therefore, multi-sensor-based localization and mapping methods are studied in this thesis. Although some studies have been conducted in this area, a full quality control scheme that guarantees reliability and detects outliers in localization and mapping systems is still lacking, and the quality of such integrated systems has not been sufficiently evaluated. In this research, an extended Kalman filter and smoother based quality control (EKF/KS QC) scheme is investigated and successfully applied to different localization and mapping scenarios. An EKF/KS QC toolbox is developed in MATLAB, which can be easily embedded in different localization and mapping applications. The major contributions of this research are: a) The equivalence between least squares and smoothing is discussed, and an extended Kalman filter-smoother quality control method is developed based on this equivalence; it can not only detect and identify outliers in the system model but also analyse, control and improve system quality. The relevant mathematical models of this quality control method are developed to handle issues such as singular measurement covariance matrices and the numerical instability of smoothing. b) Quality control analysis is conducted for different positioning systems, including multi-constellation Global Navigation Satellite System (GNSS) integration for both Real Time Kinematic (RTK) and Post Processing Kinematic (PPK) processing, and the integration of GNSS with an Inertial Navigation System (INS). The results indicate that PPK can provide more reliable positioning results than RTK. With the proposed quality control method, the influence of a detected outlier can be mitigated either by directly correcting the input measurement with the estimated outlier value or by adapting the final estimation results by the estimated influence of the outlier. c) Mathematical modelling and quality control aspects of online simultaneous localization and mapping (SLAM) are examined, and a smoother-based offline SLAM method with quality control is investigated. Both outdoor and indoor datasets have been tested with these SLAM methods. Geometry analysis of the SLAM system is carried out based on the quality control results; such system reliability analysis is valuable for the SLAM designer because it can be conducted at an early stage without real-world measurements. d) A least squares based localization method is proposed that treats the High-Definition (HD) map as a sensor source. This map-based sensor information is integrated with other perception sensors, which significantly improves localization efficiency and accuracy. Geometry analysis is undertaken with the quality measures to assess the influence of the geometry on the estimation solution and the system quality, providing hints for the future design of localization systems. e) A GNSS/INS aided LiDAR mapping and localization procedure is developed: a high-density map is generated offline, and LiDAR-based localization is then performed online within this pre-generated map. Quality control is conducted for this system. The results demonstrate that LiDAR-based localization within the map can effectively improve accuracy and reliability compared to a GNSS/INS-only system, especially during periods when the GNSS signal is lost.
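
    The following is a minimal sketch of the innovation-based outlier test that underlies this kind of quality control, using a toy linear Kalman filter with a normalized innovation squared (NIS) chi-square check; it is not the thesis's EKF/KS QC toolbox, and the model, noise values and injected fault are invented.

        import numpy as np

        # 1D constant-velocity model: state [position, velocity], position-only measurements.
        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q = np.diag([0.01, 0.01])
        R = np.array([[0.25]])                 # measurement noise variance (std 0.5)
        chi2_threshold = 3.84                  # 95% point of chi-square with 1 degree of freedom

        rng = np.random.default_rng(0)
        x_true = np.array([0.0, 1.0])
        x_est = np.array([0.0, 1.0])
        P = np.eye(2)

        for k in range(20):
            x_true = F @ x_true
            z = H @ x_true + rng.normal(0.0, 0.5, size=1)
            if k == 10:
                z = z + 8.0                    # injected measurement fault

            # Prediction
            x_est = F @ x_est
            P = F @ P @ F.T + Q

            # Innovation (NIS) test: the core of an innovation-based quality-control step
            v = z - H @ x_est
            S = H @ P @ H.T + R
            nis = float(v @ np.linalg.inv(S) @ v)
            if nis > chi2_threshold:
                print(f"k={k}: outlier flagged (NIS = {nis:.1f}); measurement rejected")
                continue                       # a QC scheme could instead correct or down-weight it

            # Standard Kalman update
            K = P @ H.T @ np.linalg.inv(S)
            x_est = x_est + K @ v
            P = (np.eye(2) - K @ H) @ P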

    Autonomous Navigation in Complex Indoor and Outdoor Environments with Micro Aerial Vehicles

    Micro aerial vehicles (MAVs) are ideal platforms for surveillance and search and rescue in confined indoor and outdoor environments due to their small size, superior mobility, and hover capability. In such missions, it is essential that the MAV is capable of autonomous flight to minimize operator workload. Despite recent successes in the commercialization of GPS-based autonomous MAVs, autonomous navigation in complex and possibly GPS-denied environments gives rise to challenging engineering problems that require an integrated approach to perception, estimation, planning, control, and high-level situational awareness. Among these, state estimation is the first and most critical component for autonomous flight, especially because of the inherently fast dynamics of MAVs and the possibly unknown environmental conditions. In this thesis, we present methodologies and system designs, with a focus on state estimation, that enable a lightweight off-the-shelf quadrotor MAV to autonomously navigate complex unknown indoor and outdoor environments using only onboard sensing and computation. We start by developing laser- and vision-based state estimation methodologies for indoor autonomous flight. We then investigate fusion of heterogeneous sensors to improve robustness and enable operation in complex indoor and outdoor environments. We further propose estimation algorithms for on-the-fly initialization and online failure recovery. Finally, we present planning, control, and environment coverage strategies for integrated high-level autonomy behaviors. Extensive online experimental results are presented throughout the thesis. We conclude by proposing future research opportunities.

    Federated Meta Learning for Visual Navigation in GPS-denied Urban Airspace

    Urban air mobility (UAM) is one of the most critical research areas, combining vehicle technology, infrastructure, communication, and air traffic management topics within its own novel set of requirements. Navigation system requirements have become much more important for performing safe operations in urban environments, where these systems are vulnerable to cyber-attacks. Although the global navigation satellite system (GNSS) is the state-of-the-art solution for obtaining position, navigation, and timing (PNT) information, it is necessary to design a redundant, GNSS-independent navigation system to support the localization process under GNSS-denied conditions. Recently, artificial intelligence (AI)-based visual navigation solutions have been widely used because of their robustness against challenging conditions such as low-texture and low-illumination scenes. However, they adapt poorly to new environments if the dataset is not large enough to train and validate the system. To address these problems, federated meta learning can enable fast adaptation to new operating conditions with small datasets, but differing visual sensor characteristics and adversarial attacks add considerable complexity to using federated meta learning for navigation. We therefore propose a robust-by-design federated meta learning based visual odometry algorithm that improves pose estimation accuracy, dynamically adapts to various environments by using differentiable meta models, and tunes its architecture to defend against cyber-attacks on the image data. In the proposed method, multiple learning loops (inner loop and outer loop) are generated dynamically. In the inner loops, each vehicle uses the visual data it collects under different flight conditions to locally train its own neural network for a particular condition. The vehicles then collaboratively train a global model in the outer loop, which generalizes across heterogeneous vehicles and enables lifelong learning. The inner loop trains a task-specific model on local data, while the outer loop extracts common features from similar tasks and optimizes the adaptability of the meta model across similar navigation tasks. Moreover, a detection model is designed that uses key characteristics of the trained neural network model parameters to identify attacks.
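
    A minimal, hypothetical sketch of the inner-loop/outer-loop structure described above: each vehicle runs a few local gradient steps on a toy linear-regression task, and the vehicles then aggregate their adapted weights with a Reptile/FedAvg-style outer update. The deep visual odometry networks, attack detection, and real flight data of the proposed method are not represented; all data sizes and learning rates below are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def local_adapt(w, X, y, lr=0.05, steps=5):
            # Inner loop: a few gradient steps on one vehicle's local data (linear model, MSE loss).
            w = w.copy()
            for _ in range(steps):
                grad = 2.0 * X.T @ (X @ w - y) / len(y)
                w = w - lr * grad
            return w

        # Each "client" (vehicle) observes a different flight condition, modelled here as a
        # different underlying linear map from features to pose increments.
        clients = []
        for _ in range(4):
            true_w = rng.normal(size=3)
            X = rng.normal(size=(32, 3))
            y = X @ true_w + 0.1 * rng.normal(size=32)
            clients.append((X, y))

        meta_w = np.zeros(3)
        meta_lr = 0.5
        for _ in range(50):                                    # outer loop: federated meta-update
            adapted = [local_adapt(meta_w, X, y) for X, y in clients]
            # Reptile/FedAvg-style move of the global model toward the clients' adapted weights.
            meta_w = meta_w + meta_lr * (np.mean(adapted, axis=0) - meta_w)

        # A new vehicle adapts from the shared meta-initialization with only a few local steps.
        X_new = rng.normal(size=(8, 3))
        y_new = X_new @ rng.normal(size=3)
        w_fast = local_adapt(meta_w, X_new, y_new, steps=5)
        print("meta init:", meta_w.round(2), "fast-adapted:", w_fast.round(2))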

    Federated meta learning for visual navigation in GPS-denied urban airspace

    In this paper, we have proposed a novel FLVO framework which can improve pose estimation accuracy in terms of translational and rotational RMSE drift while reducing security and privacy risks. It also enables fast adaptation to new conditions thanks to the aggregation of local agents operating in different environments. In addition, we have shown that it is possible to transfer an end-to-end visual odometry agent trained on a ground vehicle dataset (i.e., the KITTI dataset) to an aerial vehicle pose estimation problem for low-altitude, low-speed operating conditions. Dataset size is an important topic that should be considered in both AI-based end-to-end visual odometry applications and federated learning approaches. Although it has been demonstrated that federated learning can be applied to visual odometry to aggregate agents trained in different environments, more data should be collected to improve the translational and rotational pose estimation performance of the aggregated agents. In our future work, we will evaluate the cyber-attack detection performance of the proposed FLVO framework by utilizing multiple learning loops. In addition, the dataset will be expanded with real flight tests to broaden the range of the training data and to improve the robustness of the proposed federated learning based end-to-end visual odometry algorithm.

    Perception of Unstructured Environments for Autonomous Off-Road Vehicles

    Autonomous vehicles require perception as a necessary prerequisite for controllable and safe interaction, in order to sense and understand their environment. Perception for structured indoor and outdoor environments covers economically lucrative areas such as autonomous passenger transport and industrial robotics, whereas the perception of unstructured environments is strongly underrepresented in the field of environment perception research. The unstructured environments analyzed here pose a particular challenge, since the natural, grown geometries they contain usually lack homogeneous structure and are dominated by similar textures and objects that are difficult to separate. This complicates both sensing and interpreting these environments, so perception methods must be designed and optimized specifically for this application domain. This dissertation proposes novel and optimized perception methods for unstructured environments and combines them in a holistic, three-stage pipeline for autonomous off-road vehicles: low-level, mid-level, and high-level perception. The proposed classical and machine learning (ML) perception methods complement each other. In addition, combining perception and validation methods at each level enables reliable perception of the possibly unknown environment, where loosely and tightly coupled validation methods are combined to ensure a sufficient yet flexible assessment of the proposed perception methods. All methods were developed as individual modules within the perception and validation pipeline proposed in this work, and their flexible combination enables different pipeline designs for a wide range of off-road vehicles and use cases as required. Low-level perception provides a tightly coupled confidence assessment for raw 2D and 3D sensor data in order to detect sensor failures and ensure sufficient sensor data accuracy. Furthermore, novel calibration and registration approaches for multi-sensor perception systems are presented that use only the structure of the environment to register the captured sensor data: a semi-automatic approach for registering multiple 3D Light Detection and Ranging (LiDAR) sensors, and a confidence-based framework that combines different registration methods and enables the registration of different sensors with different measurement principles. Here, the combination of several registration methods validates the registration results in a tightly coupled manner. Mid-level perception enables the 3D reconstruction of unstructured environments with two methods for estimating the disparity of stereo images: a classical, correlation-based method for hyperspectral images that requires only a limited amount of test and validation data, and a second method that estimates disparity from grayscale images using convolutional neural networks (CNNs). Novel disparity error metrics and an evaluation toolbox for 3D reconstruction from stereo images complement the proposed disparity estimation methods and enable their loosely coupled validation. High-level perception focuses on the interpretation of individual 3D point clouds for traversability analysis, object detection, and obstacle avoidance. A domain transfer analysis for state-of-the-art 3D semantic segmentation methods provides recommendations for segmentation in new target domains that is as accurate as possible without generating new training data. The presented training approach for CNN-based 3D segmentation methods can further reduce the amount of training data required. Explainable artificial intelligence methods applied before and after modeling enable a loosely coupled validation of the proposed high-level methods through dataset assessment and model-agnostic explanations of CNN predictions. Remediation of contaminated sites and military logistics are the two main use cases in unstructured environments addressed in this work. These application scenarios also show how the gap between the development of individual methods and their integration into the processing chain of autonomous off-road vehicles, with localization, mapping, planning, and control, can be closed. In summary, the proposed pipeline offers flexible perception solutions for autonomous off-road vehicles, and the accompanying validation ensures accurate and trustworthy perception of unstructured environments.
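
    As a concrete, greatly simplified example of the classical correlation-based disparity estimation mentioned under mid-level perception, the sketch below performs plain SAD block matching on a synthetic grayscale stereo pair; it is a generic textbook baseline rather than the dissertation's hyperspectral or CNN-based methods, and all image sizes and parameters are illustrative.

        import numpy as np

        def block_matching_disparity(left, right, max_disp=16, window=5):
            # For every pixel, pick the disparity that minimises the sum of absolute
            # differences (SAD) between a window in the left image and a shifted
            # window in the right image.
            h, w = left.shape
            half = window // 2
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
                    best_cost, best_d = np.inf, 0
                    for d in range(max_disp):
                        patch_r = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                        cost = np.abs(patch_l - patch_r).sum()
                        if cost < best_cost:
                            best_cost, best_d = cost, d
                    disp[y, x] = best_d
            return disp

        # Tiny synthetic stereo pair: the right image is the left image shifted by 4 pixels,
        # so the correct disparity is 4 everywhere (away from the wrap-around border).
        rng = np.random.default_rng(0)
        left = rng.random((40, 80)).astype(np.float32)
        right = np.roll(left, -4, axis=1)
        d = block_matching_disparity(left, right, max_disp=8, window=5)
        print("median estimated disparity:", np.median(d[5:-5, 20:-10]))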

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications, including spatial ecology, pest detection, reef, forestry, volcanology, precision agriculture, wildlife species tracking, search and rescue, target tracking, atmosphere monitoring, chemical, biological, and natural disaster phenomena, fire prevention, flood prevention, volcanic monitoring, pollution monitoring, microclimates, and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.

    Vision Based Collaborative Localization and Path Planning for Micro Aerial Vehicles

    Autonomous micro aerial vehicles (MAVs) have gained immense popularity in both the commercial and research worlds over the last few years. Due to their small size and agility, MAVs are considered to have great potential for civil and industrial tasks such as photography, search and rescue, exploration, inspection and surveillance. Autonomy on MAVs usually involves solving the major problems of localization and path planning. While GPS is a popular localization choice for many MAV platforms today, it suffers from issues such as inaccurate estimation around large structures and complete unavailability in remote areas or indoor scenarios. Among the alternative sensing mechanisms, cameras are an attractive choice for an onboard sensor due to the richness of the information they capture, along with their small size and low cost. Another consideration for micro aerial vehicles is that these small platforms cannot fly for long periods or carry heavy payloads, limitations that can be addressed by allocating a group, or swarm, of MAVs to perform a task rather than just one. Collaboration between multiple vehicles allows for better estimation accuracy, task distribution and mission efficiency. Combining these rationales, this dissertation presents collaborative vision based localization and path planning frameworks. Although these were created as two separate steps, the ideal application would contain both of them as a loosely coupled localization and planning algorithm. A forward-facing monocular camera onboard each MAV is considered as the sole sensor for computing pose estimates. With this minimal setup, this dissertation first investigates methods to perform feature-based localization, with the possibility of fusing two types of localization data: one computed onboard each MAV, and the other coming from relative measurements between the vehicles. Feature-based methods were preferred over direct methods because tangible data packets can be transferred between vehicles with relative ease, and because feature data requires minimal data transfer compared to full images. Inspired by techniques from multiple view geometry and structure from motion, this localization algorithm provides a decentralized, full 6-degree-of-freedom pose estimation method complete with a consistent fusion methodology that obtains robust estimates only at discrete instants, thus not requiring constant communication between vehicles. The method was validated on image data obtained from high-fidelity simulations as well as real-life MAV tests. These vision based collaborative constraints were also applied to the problem of path planning, with a focus on uncertainty-aware planning, where the algorithm is responsible not only for generating a valid, collision-free path but also for making sure that this path allows for successful localization throughout. As joint multi-robot planning can be computationally intractable, planning was divided into two steps from a vision-aware perspective. Since the first step toward improving localization performance is having access to a better map of features, a next-best-multi-view algorithm was developed that computes the best viewpoints for multiple vehicles to improve an existing sparse reconstruction. This algorithm uses a cost function built on vision-based heuristics that scores the quality of the images expected from any set of viewpoints; the cost is minimized through an efficient evolutionary strategy known as Covariance Matrix Adaptation (CMA-ES), which can handle very high dimensional sample spaces. In the second step, a sampling based planner called Vision-Aware RRT* (VA-RRT*) was developed, which incorporates similar vision heuristics into an information gain based framework in order to drive individual vehicles towards areas that benefit feature tracking and thus localization. Both steps of the planning framework were tested and validated using simulation results.
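
    The following is a minimal, hypothetical sketch of the viewpoint-selection step: a toy vision-style cost over the positions of two MAVs is minimized with a simple (mu, lambda) evolution strategy standing in for CMA-ES; the landmark map, cost terms and weights are invented and do not reproduce the dissertation's heuristics.

        import numpy as np

        rng = np.random.default_rng(2)
        landmarks = rng.uniform(-10.0, 10.0, size=(50, 3))     # hypothetical sparse map points

        def viewpoint_cost(flat_viewpoints):
            # Toy heuristic: stay close to the mapped features (visibility term) while
            # keeping the two cameras spread apart (triangulation baseline term).
            vp = flat_viewpoints.reshape(2, 3)                 # two MAVs, 3D positions
            dists = np.linalg.norm(vp[:, None, :] - landmarks[None, :, :], axis=2)
            visibility = dists.min(axis=1).mean()              # small = near features
            baseline = np.linalg.norm(vp[0] - vp[1])           # large = better geometry
            return visibility - 0.3 * baseline

        dim = 6                                                # two MAVs x 3D position
        mean = np.zeros(dim)
        sigma = 3.0
        lam, mu = 20, 5
        for _ in range(60):
            samples = mean + sigma * rng.normal(size=(lam, dim))
            costs = np.array([viewpoint_cost(s) for s in samples])
            elite = samples[np.argsort(costs)[:mu]]            # keep the mu best candidates
            mean = elite.mean(axis=0)                          # move the search distribution
            sigma *= 0.95                                      # crude step-size decay; CMA-ES
                                                               # adapts a full covariance instead

        print("selected viewpoints:\n", mean.reshape(2, 3).round(2))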

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.