
    A Survey on Odometry for Autonomous Navigation Systems

    The development of a navigation system is one of the major challenges in building a fully autonomous platform. Full autonomy requires dependable navigation not only under clear GPS signals but also in situations where GPS is unreliable. Self-contained odometry systems have therefore attracted much attention recently. This paper provides a general and comprehensive overview of the state of the art in self-contained (i.e., GPS-denied) odometry systems and identifies the open challenges that demand further research. Self-contained odometry methods are categorized into five main types (wheel, inertial, laser, radar, and visual), based on the type of sensor data used for the odometry. Most research in the field focuses on analyzing the sensor data, exhaustively or in part, to extract the vehicle pose. Different combinations of sensor data, fused in a tightly or loosely coupled manner with filtering- or optimization-based methods, have been investigated. We analyze the advantages and weaknesses of each approach in terms of evaluation metrics such as performance, response time, energy efficiency, and accuracy, which can serve as a useful guideline for researchers and engineers in the field. Finally, future research challenges in the field are discussed.
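The loosely coupled fusion the survey mentions combines pose estimates that each sensor pipeline has already computed independently. A minimal sketch of that idea is a variance-weighted average of two scalar estimates (the sensor names and variance values below are illustrative, not from the survey):

```python
def fuse_loosely_coupled(x_wheel, var_wheel, x_visual, var_visual):
    """Variance-weighted fusion of two independent pose estimates.

    In a loosely coupled scheme each pipeline (here: wheel odometry and
    visual odometry) produces its own estimate; fusion happens afterwards.
    The lower-variance estimate receives the larger weight.
    """
    w = var_visual / (var_wheel + var_visual)   # weight on the wheel estimate
    x = w * x_wheel + (1.0 - w) * x_visual
    var = (var_wheel * var_visual) / (var_wheel + var_visual)
    return x, var

# wheel odometry says 1.0 m (var 0.04); visual odometry says 1.2 m (var 0.01)
x, var = fuse_loosely_coupled(1.0, 0.04, 1.2, 0.01)
```

The fused estimate lands closer to the more confident (visual) measurement, and the fused variance is smaller than either input variance.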

    Fail-aware LIDAR-based odometry for autonomous vehicles

    Autonomous driving systems are set to become a reality in transport systems, and so maximum acceptance is being sought among users. Currently, the most advanced architectures require driver intervention when functional system failures or critical sensor operations take place, raising problems related to driver state, distraction, fatigue, and other factors that prevent safe control. This work therefore presents a redundant, accurate, robust, and scalable LiDAR odometry system with fail-aware features that allow other systems to perform a safe stop manoeuvre without driver mediation. All odometry systems suffer drift error, making them difficult to use for localisation tasks over extended periods. For this reason, the paper presents an accurate LiDAR odometry system with a fail-aware indicator that estimates a time window in which the system handles localisation appropriately. The odometry error is minimised by applying a dynamic 6-DoF model and fusing measurements based on the Iterative Closest Point (ICP) algorithm, environment feature extraction, and Singular Value Decomposition (SVD). The obtained results are promising for two reasons. First, on the KITTI odometry data set the proposed method ranks twelfth among LiDAR-only methods, with translation and rotation errors of 1.00% and 0.0041 deg/m, respectively. Second, the encouraging results of the fail-aware indicator demonstrate the safety of the proposed LiDAR odometry system. The results show that achieving an accurate odometry system requires complex models and measurement fusion techniques, and that an odometry system used for redundant localisation must integrate a fail-aware indicator so it can be used safely.
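The SVD method mentioned in this abstract is the standard closed-form solution for the rigid transform computed inside each ICP iteration. A minimal sketch of that alignment step follows (this is the textbook Kabsch/Umeyama procedure, not the paper's implementation; the 6-DoF dynamic model and feature extraction are omitted):

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q
    (least squares, via SVD) -- the alignment step inside an ICP iteration.
    P and Q are (N, 3) arrays of corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (det = -1) being returned instead of a rotation
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# sanity check: recover a known rotation about z and a known translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.random.default_rng(0).normal(size=(100, 3))
Q = P @ R_true.T + np.array([1.0, 2.0, 0.5])
R, t = rigid_transform_svd(P, Q)
```

With noiseless correspondences the transform is recovered exactly; in a real ICP loop this step alternates with re-matching nearest neighbours between scans.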

    Real-time performance-focused localisation techniques for autonomous vehicles: a review


    Design of a Robotic Inspection Platform for Structural Health Monitoring

    Actively monitoring infrastructure is key to detecting and correcting problems before they become costly. The vast scale of modern infrastructure poses a challenge to monitoring due to insufficient personnel. Certain structures, such as refineries, pose additional challenges and can be expensive, time-consuming, and hazardous to inspect. This thesis outlines the development of an autonomous robot for structural health monitoring. The robot operates autonomously in level indoor environments and can be controlled manually to traverse difficult terrain. Both visual and LiDAR SLAM, along with a procedural mapping technique, allow the robot to capture colored point clouds. The robot successfully automates point cloud collection in straightforward environments such as hallways and empty rooms. While it performs well in these situations, its accuracy suffers in complex environments with variable lighting. More work is needed to create a robust system, but the potential time savings and future upgrades make the concept promising.

    A novel low-cost autonomous 3D LIDAR system

    Thesis (M.S.) University of Alaska Fairbanks, 2018. To aid in humanity's efforts to colonize alien worlds, NASA's Robotic Mining Competition pits universities against one another to design autonomous mining robots that can extract the materials necessary for producing oxygen, water, fuel, and infrastructure. To mine autonomously on uneven terrain, the robot must be able to produce a 3D map of its surroundings and navigate around obstacles. However, sensors that can be used for 3D mapping are typically expensive, have high computational requirements, and/or are designed primarily for indoor use. This thesis describes the creation of a novel low-cost 3D mapping system utilizing a pair of rotating LIDAR sensors attached to a mobile testing platform. The use of this system for 3D obstacle detection and navigation is shown, and the use of deep learning to improve the scanning efficiency of the sensors is investigated.
    Chapter 1. Introduction -- 1.1. Purpose -- 1.2. 3D Sensors -- 1.2.1. Cameras -- 1.2.2. RGB-D Cameras -- 1.2.3. LIDAR -- 1.3. Overview of Work and Contributions -- 1.4. Multi-LIDAR and Rotating LIDAR Systems -- 1.5. Thesis Organization.
    Chapter 2. Hardware -- 2.1. Overview -- 2.2. Components -- 2.2.1. Revo Laser Distance Sensor -- 2.2.2. Dynamixel AX-12A Smart Serial Servo -- 2.2.3. Bosch BNO055 Inertial Measurement Unit -- 2.2.4. STM32F767ZI Microcontroller and LIDAR Interface Boards -- 2.2.5. Create 2 Programmable Mobile Robotic Platform -- 2.2.6. Acer C720 Chromebook and Genius Webcam -- 2.3. System Assembly -- 2.3.1. 3D LIDAR Module -- 2.3.2. Full Assembly.
    Chapter 3. Software -- 3.1. Robot Operating System -- 3.2. Frames of Reference -- 3.3. System Overview -- 3.4. Microcontroller Firmware -- 3.5. PC-Side Point Cloud Fusion -- 3.6. Localization System -- 3.6.1. Fusion of Wheel Odometry and IMU Data -- 3.6.2. ArUco Marker Localization -- 3.6.3. ROS Navigation Stack: Overview & Configuration -- 3.6.3.1. Costmaps -- 3.6.3.2. Path Planners.
    Chapter 4. System Performance -- 4.1. VS-LIDAR Characteristics -- 4.2. Odometry Tests -- 4.3. Stochastic Scan Dithering -- 4.4. Obstacle Detection Test -- 4.5. Navigation Tests -- 4.6. Detection of Black Obstacles -- 4.7. Performance in Sunlit Environments -- 4.8. Distance Measurement Comparison.
    Chapter 5. Case Study: Adaptive Scan Dithering -- 5.1. Introduction -- 5.2. Adaptive Scan Dithering Process Overview -- 5.3. Coverage Metrics -- 5.4. Reward Function -- 5.5. Network Configuration -- 5.6. Performance and Remarks.
    Chapter 6. Conclusions and Future Work -- 6.1. Conclusions -- 6.2. Future Work -- 6.3. Lessons Learned -- References
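The point-cloud fusion step in a rotating-LIDAR design like this one boils down to mapping each 2D range return through the servo's tilt angle into a 3D point. A sketch under assumed axis conventions (the frames and function name are illustrative, not taken from the thesis):

```python
import numpy as np

def polar_to_xyz(d, lidar_angle, servo_angle):
    """Map a 2D LIDAR return (range d, in-plane angle lidar_angle)
    through the servo's tilt (servo_angle) into a 3D point.
    Angles in radians; the scan plane is assumed to rotate about x."""
    # point in the lidar's own scan plane
    x = d * np.cos(lidar_angle)
    y = d * np.sin(lidar_angle)
    # tilt the scan plane about the x-axis by the servo angle
    return np.array([x,
                     y * np.cos(servo_angle),
                     y * np.sin(servo_angle)])

# a 2 m return at 90 degrees in-plane, with the servo tilted 30 degrees
p = polar_to_xyz(2.0, np.pi / 2, np.pi / 6)
```

Sweeping the servo while the LIDAR spins and accumulating these points (in an odometry-corrected frame) yields the 3D cloud.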

    External multi-modal imaging sensor calibration for sensor fusion: A review

    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Despite several proven calibration solutions, none fully satisfies all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Two types of feature-based calibration, target-based and targetless, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, this review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of guiding future research. Future research should focus primarily on online targetless calibration and systematic multi-modal sensor calibration.
    Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0

    A New Wave in Robotics: Survey on Recent mmWave Radar Applications in Robotics

    We survey the current state of millimeter-wave (mmWave) radar applications in robotics with a focus on unique capabilities, and discuss future opportunities based on the state of the art. Frequency Modulated Continuous Wave (FMCW) mmWave radars operating in the 76-81 GHz range are an appealing alternative to lidars, cameras, and other sensors operating in or near the visual spectrum. Radar has become more widely available in new packaging classes more convenient for robotics, and its longer wavelengths penetrate visual clutter such as fog, dust, and smoke. We begin by covering radar principles as they relate to robotics. We then review relevant new research across a broad spectrum of robotics applications, beginning with motion estimation, localization, and mapping; we then cover object detection and classification, and close with an analysis of current datasets and calibration techniques that provide entry points into radar research.
    Comment: 19 pages, 11 figures, 2 tables; TRO submission pending
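The FMCW principle this survey covers converts a measured beat frequency into target range via R = c · f_b · T_c / (2B), where B is the chirp bandwidth and T_c the chirp duration. A one-line sketch with illustrative (not sensor-specific) chirp parameters:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, bandwidth_hz, chirp_time_s):
    """Target range from the beat frequency of a linear FMCW chirp:
    R = c * f_b * T_c / (2 * B)."""
    return C * f_beat_hz * chirp_time_s / (2.0 * bandwidth_hz)

# a 4 GHz sweep over 40 us: a 1 MHz beat tone corresponds to ~1.5 m
r = fmcw_range(1e6, 4e9, 40e-6)
```

Wider bandwidth B improves range resolution (c / 2B), which is one reason the 76-81 GHz band is attractive.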

    Robust vision based slope estimation and rocks detection for autonomous space landers

    As future robotic surface exploration missions to other planets, moons and asteroids become more ambitious in their science goals, there is a rapidly growing need to significantly enhance the capabilities of entry, descent and landing technology such that landings can be carried out with pin-point accuracy at previously inaccessible sites of high scientific value. As a consequence of the extreme uncertainty in touch-down locations of current missions and the absence of any effective hazard detection and avoidance capabilities, mission designers must exercise extreme caution when selecting candidate landing sites. The entire landing uncertainty footprint must be placed completely within a region of relatively flat and hazard free terrain in order to minimise the risk of mission ending damage to the spacecraft at touchdown. Consequently, vast numbers of scientifically rich landing sites must be rejected in favour of safer alternatives that may not offer the same level of scientific opportunity. The majority of truly scientifically interesting locations on planetary surfaces are rarely found in such hazard free and easily accessible locations, and so goals have been set for a number of advanced capabilities of future entry, descent and landing technology. Key amongst these is the ability to reliably detect and safely avoid all mission critical surface hazards in the area surrounding a pre-selected landing location. This thesis investigates techniques for the use of a single camera system as the primary sensor in the preliminary development of a hazard detection system that is capable of supporting pin-point landing operations for next generation robotic planetary landing craft. The requirements for such a system have been stated as the ability to detect slopes greater than 5 degrees and surface objects greater than 30cm in diameter. 
The primary contribution of this thesis, aimed at achieving these goals, is the development of a feature-based, self-initialising, fully adaptive structure from motion (SFM) algorithm based on a robust square-root unscented Kalman filtering framework, together with the fusion of the resulting SFM scene structure estimates with a sophisticated shape from shading (SFS) algorithm. This combination has the potential to produce very dense and highly accurate digital elevation models (DEMs) with sufficient resolution to achieve the sensing accuracy required by next-generation landers. The system can adapt to changes in the external noise environment that may result from intermittent and varying rocket motor thrust and/or sudden turbulence during descent, which translate to variations in platform vibration and introduce varying levels of motion blur that affect the accuracy of image feature tracking. Accurate scene structure estimates have been obtained with this system from both real and synthetic descent imagery, allowing the production of accurate DEMs. While further work is required to produce DEMs with the resolution and accuracy needed to determine slopes and detect small objects such as rocks at the required levels of accuracy, this thesis presents a very strong foundation upon which to build towards a highly robust and accurate solution.
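The SFM/SFS fusion described above pairs sparse but metrically anchored SFM structure with a dense SFS surface. A toy per-cell, variance-weighted blend illustrates the idea (the grid, gain, and variance values are assumptions for illustration, not the thesis's actual formulation):

```python
import numpy as np

def fuse_dem(sfs_dem, sfm_points, sfm_var, sfs_var=1.0):
    """Blend sparse SFM elevation samples into a dense SFS DEM.

    Each SFM sample (row, col, elevation) nudges the corresponding DEM
    cell toward the sample with a Kalman-style, variance-weighted gain:
    a confident SFM sample (small sfm_var) dominates the SFS prior.
    """
    dem = sfs_dem.copy()
    k = sfs_var / (sfs_var + sfm_var)        # gain in [0, 1]
    for (r, c, z) in sfm_points:
        dem[r, c] += k * (z - dem[r, c])
    return dem

# flat SFS prior at elevation 0; one SFM sample says cell (1, 2) is at 2.0 m
dem = np.zeros((4, 4))
fused = fuse_dem(dem, [(1, 2, 2.0)], sfm_var=0.25)
```

Cells without SFM support keep the dense SFS estimate, so the output remains a full-resolution DEM.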