88 research outputs found

    Real-time performance-focused localisation techniques for autonomous vehicle: a review

    Get PDF

    UAVs or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    Get PDF
    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications in spatial ecology, pest detection, reefs, forestry, volcanology, precision agriculture, wildlife species tracking, search and rescue, target tracking, atmosphere monitoring, chemical, biological, and natural disaster phenomena, fire prevention, flood prevention, volcanic monitoring, pollution monitoring, microclimates, and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.
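    As a small worked illustration of how the "ultra-high spatial resolution" of UAV photogrammetry relates to flight parameters (not taken from the book), the sketch below computes the ground sample distance from flight altitude, focal length, and pixel pitch; all numeric values are assumptions for a typical small-UAV camera.

```python
# Illustrative sketch (not from the book): ground sample distance (GSD) for
# UAV photogrammetry, showing how flight altitude and sensor parameters set
# the achievable spatial resolution. All parameter values are assumptions.

def ground_sample_distance(altitude_m: float,
                           focal_length_mm: float,
                           pixel_pitch_um: float) -> float:
    """Return the GSD in centimetres per pixel for a nadir-pointing camera."""
    # GSD = (pixel pitch * altitude) / focal length, converted to cm/pixel.
    return (pixel_pitch_um * 1e-6 * altitude_m) / (focal_length_mm * 1e-3) * 100.0

if __name__ == "__main__":
    # Example: 120 m flight altitude, 8.8 mm focal length, 2.4 um pixel pitch
    # (assumed values) give roughly a 3.3 cm/pixel ground footprint.
    print(f"GSD ~ {ground_sample_distance(120.0, 8.8, 2.4):.1f} cm/pixel")
```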

    Alpha-N: Shortest Path Finder Automated Delivery Robot with Obstacle Detection and Avoiding System

    Full text link
    Alpha-N, a self-powered, wheel-driven Automated Delivery Robot (ADR), is presented in this paper. The ADR is capable of navigating autonomously by detecting and avoiding objects or obstacles in its path. It uses a vector map of the path and calculates the shortest path by the Grid Count Method (GCM) of Dijkstra's Algorithm. For landmark determination, Radio Frequency Identification (RFID) tags are placed along the path for identification and verification of the source and destination, and for recalibration of the current position. In addition, an Object Detection Module (ODM) is built with Faster R-CNN and the VGGNet16 architecture to support path planning by detecting and recognizing obstacles. The Path Planning System (PPS) combines the output of the GCM, the RFID Reading System (RRS), and the binary results of the ODM. The PPS requires a minimum speed of 200 RPM and a duration of 75 seconds for the robot to successfully relocate its position by reading an RFID tag. In the result analysis phase, the ODM exhibits an accuracy of 83.75 percent, the RRS shows 92.3 percent accuracy, and the PPS maintains an accuracy of 85.3 percent. Stacking these three modules, the ADR is built, tested and validated, showing significant improvement in performance and usability compared with other service robots. Comment: 12 pages, 7 figures, to appear in the proceedings of the 12th Asian Conference on Intelligent Information and Database Systems, 23-26 March 2020, Phuket, Thailand
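    The paper's Grid Count Method is not detailed in the abstract; as a hedged sketch of the underlying idea, the code below runs Dijkstra's algorithm on a 4-connected occupancy grid with unit step costs. The grid encoding and costs are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: Dijkstra's algorithm on a 4-connected grid,
# in the spirit of the Grid Count Method described above. The encoding
# (0 = free cell, 1 = obstacle) and unit edge costs are assumptions.
import heapq

def dijkstra_grid(grid, start, goal):
    """Return the shortest path from start to goal as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1  # unit cost per grid step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in dist:
        return []          # goal unreachable
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 3)))
```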

    Localization and Mapping for Autonomous Driving: Fault Detection and Reliability Analysis

    Full text link
    Autonomous driving has advanced rapidly during the past decades and has expanded its application to multiple fields, both indoor and outdoor. One of the significant issues associated with a highly automated vehicle (HAV) is how to increase the safety level. A key requirement for ensuring the safety of automated driving is reliable localization and navigation, with which intelligent vehicle/robot systems can make reliable decisions about the driving path or react to sudden events occurring along the path. A map with rich environment information is essential to support an autonomous driving system in meeting these high requirements. Therefore, multi-sensor-based localization and mapping methods are studied in this thesis. Although some studies have been conducted in this area, a full quality control scheme that guarantees reliability and detects outliers in localization and mapping systems is still lacking, and the quality of the integrated system has not been sufficiently evaluated. In this research, an extended Kalman filter and smoother based quality control (EKF/KS QC) scheme is investigated and successfully applied to different localization and mapping scenarios. An EKF/KS QC toolbox is developed in MATLAB, which can be easily embedded and applied in different localization and mapping scenarios. The major contributions of this research are:
    a) The equivalence between least squares and smoothing is discussed, and an extended Kalman filter-smoother quality control method is developed based on this equivalence. It can be used not only for the detection and identification of system model outliers, but also to analyse, control and improve the system quality. Relevant mathematical models of this quality control method have been developed to deal with issues such as singular measurement covariance matrices and the numerical instability of smoothing.
    b) Quality control analysis is conducted for different positioning systems, including multi-constellation Global Navigation Satellite System (GNSS) integration for both Real Time Kinematic (RTK) and Post Processing Kinematic (PPK) modes, and the integration of GNSS and an Inertial Navigation System (INS). The results indicate that the PPK method can provide more reliable positioning results than RTK. With the proposed quality control method, the influence of a detected outlier can be mitigated either by directly correcting the input measurement with the estimated outlier value, or by adapting the final estimation results with the estimated influence of the outlier.
    c) Mathematical modelling and quality control aspects of online simultaneous localization and mapping (SLAM) are examined, and a smoother based offline SLAM method is investigated with quality control. Both outdoor and indoor datasets have been tested with these SLAM methods, and geometry analysis of the SLAM system has been carried out based on the quality control results. The system reliability analysis is essential for the SLAM designer, as it can be conducted at an early stage without real-world measurements.
    d) A least squares based localization method is proposed that treats the High-Definition (HD) map as a sensor source. This map-based sensor information is integrated with other perception sensors, which significantly improves localization efficiency and accuracy. Geometry analysis is undertaken with the quality measures to analyse the influence of the geometry on the estimation solution and the system quality, providing hints for the future design of the localization system.
    e) A GNSS/INS aided LiDAR mapping and localization procedure is developed. A high-density map is generated offline, and LiDAR-based localization is then undertaken online within this pre-generated map. Quality control is conducted for this system. The results demonstrate that LiDAR-based localization within the map can effectively improve accuracy and reliability compared with a GNSS/INS-only system, especially during periods when the GNSS signal is lost.
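    As a minimal, hedged sketch of the kind of outlier handling the EKF/KS QC scheme addresses (not the thesis toolbox itself), the code below applies standard innovation-based gating in a Kalman filter update: the normalized innovation squared is compared against a chi-square threshold and outlying measurements are rejected. The measurement model, noise levels, and gate value are assumptions.

```python
# Minimal sketch (not the thesis EKF/KS QC toolbox): innovation-based outlier
# detection in a Kalman filter measurement update. The direct-position
# measurement model, noise levels, and chi-square gate are assumed values.
import numpy as np

def kf_update_with_gating(x, P, z, H, R, gate=9.21):
    """One measurement update; reject the measurement if its normalized
    innovation squared (NIS) exceeds the chi-square gate (99%, 2 dof)."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    nis = float(y.T @ np.linalg.solve(S, y))
    if nis > gate:
        return x, P, False             # flag as outlier, skip the update
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, True

# Example: 2D position state, direct (noisy) position measurement.
x = np.zeros(2)
P = np.eye(2)
H = np.eye(2)
R = 0.25 * np.eye(2)
x, P, accepted = kf_update_with_gating(x, P, np.array([0.3, -0.2]), H, R)
print(accepted, x)
```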

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Get PDF
    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated by the release of the multi-modal obstacle dataset, FieldSAFE.
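    As a hedged illustration of the occupancy grid mapping step mentioned above (not the thesis implementation), the sketch below accumulates point-wise obstacle detections in a 2D grid with log-odds updates; the cell size, log-odds increments, and probability threshold are assumed values.

```python
# Illustrative sketch (not the thesis code): fusing point-wise obstacle
# detections into a 2D occupancy grid with log-odds updates. Cell size,
# increments, and the occupancy threshold are assumptions.
import numpy as np

CELL = 0.2          # grid resolution in metres (assumed)
L_OCC = 0.85        # log-odds increment for a cell hit by an obstacle point
L_FREE = -0.4       # log-odds decrement for a cell observed as traversable

class OccupancyGrid:
    def __init__(self, size_m=40.0):
        n = int(size_m / CELL)
        self.logodds = np.zeros((n, n))
        self.origin = size_m / 2.0     # vehicle starts at the grid centre

    def _index(self, x, y):
        return int((x + self.origin) / CELL), int((y + self.origin) / CELL)

    def update(self, points_xy, labels):
        """points_xy: Nx2 array in the vehicle frame; labels: 1 = obstacle, 0 = ground."""
        for (x, y), lab in zip(points_xy, labels):
            i, j = self._index(x, y)
            if 0 <= i < self.logodds.shape[0] and 0 <= j < self.logodds.shape[1]:
                self.logodds[i, j] += L_OCC if lab else L_FREE

    def occupied(self, threshold=0.65):
        p = 1.0 - 1.0 / (1.0 + np.exp(self.logodds))   # log-odds -> probability
        return p > threshold

grid = OccupancyGrid()
grid.update(np.array([[3.0, 0.5], [3.0, 0.6], [8.0, -1.0]]), labels=[1, 1, 0])
print(grid.occupied().sum(), "cells currently flagged as obstacles")
```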

    Machine Learning Algorithms for Robotic Navigation and Perception and Embedded Implementation Techniques

    Get PDF
    The abstract is in the attachment.

    Collaborative autonomy in heterogeneous multi-robot systems

    Get PDF
    As autonomous mobile robots become increasingly connected and widely deployed in different domains, managing multiple robots and their interaction is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities; instead, many robots today are deployed as part of larger fleets or in teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are numerous. Significantly higher degrees of situational awareness and understanding of the environment can be achieved when robots with different operational capabilities are deployed together. Examples include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, and the highly heterogeneous robot teams that explored caves and other complex environments during the last DARPA Subterranean (SubT) Challenge. This thesis delves into the broad topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for achieving robust collaboration: solving collaborative decision-making problems; securing their operation, management and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework for collaborative decision-making in distributed robotic systems. Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology for securing the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner. I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations. I also study the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new formation control algorithms that require zero or minimal communication when a sufficient degree of awareness of neighboring robots is available. These algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new cooperative path planning approach and UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.
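    As a hedged illustration of the UWB-based ranging idea discussed above (not the thesis algorithms), the sketch below estimates a 2D position from range measurements to fixed anchors by linearizing the range equations into a least-squares problem; anchor coordinates and the noise level are assumed example values.

```python
# Illustrative sketch (not the thesis method): 2D position from UWB-style
# range measurements to fixed anchors, solved by linearized least squares.
# Anchor coordinates and the noisy ranges below are assumed example values.
import numpy as np

def multilaterate(anchors, ranges):
    """Estimate (x, y) from ranges to known anchors by subtracting the first
    anchor's range equation to obtain a linear system."""
    a0, r0 = anchors[0], ranges[0]
    A, b = [], []
    for ai, ri in zip(anchors[1:], ranges[1:]):
        A.append(2.0 * (ai - a0))
        b.append(r0**2 - ri**2 + ai @ ai - a0 @ a0)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.05, 4)
print(multilaterate(anchors, ranges))   # should be close to (3, 4)
```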

    Recent Advances in Indoor Localization Systems and Technologies

    Get PDF
    Despite the enormous technical progress seen in the past few years, the maturity of indoor localization technologies has not yet reached the level of GNSS solutions. The 23 selected papers in this book present the recent advances and new developments in indoor localization systems and technologies, propose novel or improved methods with increased performance, provide insight into various aspects of quality control, and also introduce some unorthodox positioning methods.

    New Approach of Indoor and Outdoor Localization Systems

    Get PDF
    Accurate determination of the mobile position constitutes the basis of many new applications. This book provides a detailed account of wireless systems for positioning, signal processing, radio localization techniques such as Time Difference Of Arrival (TDOA), performance evaluation, and localization applications. The first section is dedicated to satellite systems for positioning, such as GPS and other GNSS. The second section addresses localization applications using wireless sensor networks. Techniques for localization systems, especially for indoor positioning, such as Ultra-Wideband (UWB) and Wi-Fi, are introduced. The last section is dedicated to coupled GPS and other sensors. Results of simulations, implementations and tests are given to help readers grasp the presented techniques. This is an ideal book for students, PhD students, academics and engineers in the fields of communication, localization and signal processing, especially in the indoor and outdoor localization domains.
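    As a rough illustration of the Time Difference Of Arrival technique named above (not taken from the book), the sketch below estimates a 2D source position from TDOA measurements to known receivers using nonlinear least squares; the receiver geometry, true source position, and timing noise are assumed example values.

```python
# Rough illustration (not from the book): 2D source localization from
# Time Difference Of Arrival (TDOA) measurements, solved with nonlinear
# least squares. Receiver positions, the true source, and the timing noise
# are assumed example values.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                       # propagation speed (m/s)
receivers = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
source = np.array([12.0, 31.0])

# TDOA of each receiver relative to receiver 0, plus a little timing noise.
dists = np.linalg.norm(receivers - source, axis=1)
tdoa = (dists[1:] - dists[0]) / C + np.random.normal(0, 1e-10, 3)

def residuals(p):
    d = np.linalg.norm(receivers - p, axis=1)
    return (d[1:] - d[0]) / C - tdoa

estimate = least_squares(residuals, x0=np.array([25.0, 25.0])).x
print(estimate)                          # should be close to (12, 31)
```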

    A Comprehensive Introduction of Visual-Inertial Navigation

    Full text link
    In this article, a tutorial introduction to visual-inertial navigation (VIN) is presented. Visual and inertial perception are two complementary sensing modalities, with cameras and inertial measurement units (IMUs) as the corresponding sensors. The low cost and light weight of camera-IMU sensor combinations make them ubiquitous in robotic navigation. Visual-inertial navigation is a state estimation problem that estimates the ego-motion and local environment of the sensor platform. This paper presents visual-inertial navigation in the classical state estimation framework, first illustrating the estimation problem in terms of state variables and system models, including the representations (parameterizations) of the relevant quantities, the IMU dynamic and camera measurement models, and the corresponding probabilistic graphical models (factor graphs). Secondly, we investigate the existing model-based estimation methodologies, which involve filter-based and optimization-based frameworks and the related on-manifold operations. We also discuss the calibration of relevant parameters, as well as the initialization of the states of interest in optimization-based frameworks. Then the evaluation and improvement of VIN in terms of accuracy, efficiency, and robustness are discussed. Finally, we briefly mention the recent development of learning-based methods that may become alternatives to traditional model-based methods. Comment: 35 pages, 10 figures
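    As a hedged sketch of the IMU dynamic model referred to above (a simplified version, not the tutorial's full derivation), the code below dead-reckons orientation, velocity, and position from gyroscope and accelerometer samples; sensor biases, noise, and Earth rotation are ignored, and the sample values are assumptions.

```python
# Simplified sketch of IMU state propagation (not the article's full model):
# integrate gyroscope and accelerometer samples to dead-reckon orientation,
# velocity, and position. Biases, noise, and Earth rotation are ignored.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def propagate(R, v, p, gyro, accel, dt):
    """One Euler integration step of the standard IMU kinematics."""
    R_next = R @ (np.eye(3) + skew(gyro) * dt)      # small-angle rotation update
    a_world = R @ accel + GRAVITY                   # specific force to world frame
    v_next = v + a_world * dt
    p_next = p + v * dt + 0.5 * a_world * dt**2
    return R_next, v_next, p_next

# Example: a stationary IMU measures only gravity; the state should stay put.
R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
for _ in range(100):
    R, v, p = propagate(R, v, p, gyro=np.zeros(3),
                        accel=np.array([0.0, 0.0, 9.81]), dt=0.01)
print(p)   # ~ [0, 0, 0]
```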