143 research outputs found
A novel low-cost autonomous 3D LIDAR system
Thesis (M.S.), University of Alaska Fairbanks, 2018.
To aid in humanity's efforts to colonize alien worlds, NASA's Robotic Mining Competition pits universities against one another to design autonomous mining robots that can extract the materials necessary for producing oxygen, water, fuel, and infrastructure. To mine autonomously on uneven terrain, the robot must be able to produce a 3D map of its surroundings and navigate around obstacles. However, sensors that can be used for 3D mapping are typically expensive, have high computational requirements, and/or are designed primarily for indoor use. This thesis describes the creation of a novel low-cost 3D mapping system utilizing a pair of rotating LIDAR sensors attached to a mobile testing platform. The use of this system for 3D obstacle detection and navigation is also shown. Finally, the use of deep learning to improve the scanning efficiency of the sensors is investigated.
Chapter 1. Introduction -- 1.1. Purpose -- 1.2. 3D sensors -- 1.2.1. Cameras -- 1.2.2. RGB-D Cameras -- 1.2.3. LIDAR -- 1.3. Overview of Work and Contributions -- 1.4. Multi-LIDAR and Rotating LIDAR Systems -- 1.5. Thesis Organization. Chapter 2. Hardware -- 2.1. Overview -- 2.2. Components -- 2.2.1. Revo Laser Distance Sensor -- 2.2.2. Dynamixel AX-12A Smart Serial Servo -- 2.2.3. Bosch BNO055 Inertial Measurement Unit -- 2.2.4. STM32F767ZI Microcontroller and LIDAR Interface Boards -- 2.2.5. Create 2 Programmable Mobile Robotic Platform -- 2.2.6. Acer C720 Chromebook and Genius Webcam -- 2.3. System Assembly -- 2.3.1. 3D LIDAR Module -- 2.3.2. Full Assembly. Chapter 3. Software -- 3.1. Robot Operating System -- 3.2. Frames of Reference -- 3.3. System Overview -- 3.4. Microcontroller Firmware -- 3.5. PC-Side Point Cloud Fusion -- 3.6. Localization System -- 3.6.1. Fusion of Wheel Odometry and IMU Data -- 3.6.2. ArUco Marker Localization -- 3.6.3. ROS Navigation Stack: Overview & Configuration -- 3.6.3.1. Costmaps -- 3.6.3.2. Path Planners.
Chapter 4. System Performance -- 4.1. VS-LIDAR Characteristics -- 4.2. Odometry Tests -- 4.3. Stochastic Scan Dithering -- 4.4. Obstacle Detection Test -- 4.5. Navigation Tests -- 4.6. Detection of Black Obstacles -- 4.7. Performance in Sunlit Environments -- 4.8. Distance Measurement Comparison. Chapter 5. Case Study: Adaptive Scan Dithering -- 5.1. Introduction -- 5.2. Adaptive Scan Dithering Process Overview -- 5.3. Coverage Metrics -- 5.4. Reward Function -- 5.5. Network Configuration -- 5.6. Performance and Remarks. Chapter 6. Conclusions and Future Work -- 6.1. Conclusions -- 6.2. Future Work -- 6.3. Lessons Learned -- References
Recommendation on use of wind lidars
The 15 Early Stage Researchers (ESRs) in the LIKE project investigate topics in which wind lidars play a significant role. This report provides the ESRs with introductory reading: a short introduction to the basic principles of lidar wind measurement technology, as well as an overview of its practical application across a wide range of research fields, including a corresponding literature review. Wherever possible, it also gives the ESRs recommendations on the use of lidars and related best practices, and provides the corresponding state-of-the-art documents in the attachment.
Using Lidar Intensity for Robot Navigation
We present the Multi-Layer Intensity Map, a novel 3D object representation for robot perception and autonomous navigation. Intensity maps consist of multiple stacked layers of 2D grid maps, each derived from reflected point cloud intensities corresponding to a certain height interval. The different layers of intensity maps can be used to simultaneously estimate obstacles' height, solidity/density, and opacity. We demonstrate that intensity maps can help accurately differentiate obstacles that are safe to navigate through (e.g., beaded/string curtains, pliable tall grass) from ones that must be avoided (e.g., transparent surfaces such as glass walls, bushes, trees, etc.) in indoor and outdoor environments. Further, to handle narrow passages and navigate through non-solid obstacles in dense environments, we propose an approach to adaptively inflate or enlarge the obstacles detected on intensity maps based on their solidity and the robot's preferred velocity direction. We demonstrate these improved navigation capabilities in real-world narrow, dense environments using real Turtlebot and Boston Dynamics Spot robots. We observe significant increases in success rates to more than 50%, up to a 9.5% decrease in normalized trajectory length, and up to a 22.6% increase in the F-score compared to current navigation methods using other sensor modalities.
Comment: 9 pages, 7 figures
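The layered representation described above can be sketched in a few lines. This is a hypothetical illustration only: the grid resolution, height intervals, and every name below are my assumptions, not the paper's implementation. Each height interval gets its own 2D grid, and each cell stores the mean reflected intensity of the LIDAR returns that fell into it.

```python
# Sketch of a multi-layer intensity map (all parameters assumed, not the
# paper's): one 2D grid per height interval, each cell holding the mean
# reflected intensity of the points that landed in it.
import numpy as np

def build_intensity_map(points, intensities, cell, extent,
                        layers=((0.0, 0.5), (0.5, 1.0), (1.0, 2.0))):
    """points: iterable of (x, y, z); intensities: matching reflectivities.
    Returns an array of shape (num_layers, n, n) of per-cell mean intensity."""
    n = int(2 * extent / cell)
    sums = np.zeros((len(layers), n, n))
    counts = np.zeros_like(sums)
    for (x, y, z), inten in zip(points, intensities):
        if not (-extent <= x < extent and -extent <= y < extent):
            continue  # point outside the mapped area
        for li, (lo, hi) in enumerate(layers):
            if lo <= z < hi:  # point belongs to this height slice
                r, c = int((y + extent) / cell), int((x + extent) / cell)
                sums[li, r, c] += inten
                counts[li, r, c] += 1
    # Mean intensity per cell; cells with no returns stay 0.
    return np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
```

Comparing a cell's intensity profile across layers is what would let a planner tell, say, pliable grass (weak, diffuse returns at low heights) from a glass wall (a sparse but distinct return pattern) before deciding how much to inflate the obstacle.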
Fully Onboard Low-Power Localization with Semantic Sensor Fusion on a Nano-UAV using Floor Plans
Nano-sized unmanned aerial vehicles (UAVs) are well suited for indoor applications and for operation in close proximity to humans. To enable autonomy, a nano-UAV must be able to self-localize in its operating environment. This is a particularly challenging task due to the limited sensing and compute resources on board. This work presents an online and onboard approach for localization in floor plans annotated with semantic information. Unlike sensor-based maps, floor plans are readily available and do not increase the cost and time of deployment. To overcome the difficulty of localizing in sparse maps, the proposed approach fuses geometric information from miniaturized time-of-flight sensors with semantic cues. The semantic information is extracted from images by deploying a state-of-the-art object detection model on a high-performance multi-core microcontroller onboard the drone, consuming only 2.5 mJ per frame and executing in 38 ms. In our evaluation, we globally localize in a real-world office environment, achieving a 90% success rate. We also release an open-source implementation of our work.
Comment: Under review for ICRA 2024, 7 pages
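The fusion of geometric and semantic cues against a floor plan can be illustrated with a simple pose-scoring function. Everything below is my own sketch under stated assumptions, not the paper's method: a Gaussian range model compares ray-cast distances in an occupancy-grid floor plan with measured time-of-flight ranges, and a semantic detection that agrees with the annotated plan adds a fixed log-likelihood boost.

```python
# Illustrative sketch only (the ray-cast, sensor model, and semantic boost
# are assumptions, not the paper's implementation): score a candidate pose
# on an occupancy-grid floor plan against measured time-of-flight ranges.
import math
import numpy as np

def raycast(grid, res, x, y, theta, max_r=4.0):
    """March along theta from (x, y) in steps of one cell until the first
    occupied cell (or the grid edge / max_r); return the distance in metres."""
    r = 0.0
    while r < max_r:
        cx = int((x + r * math.cos(theta)) / res)
        cy = int((y + r * math.sin(theta)) / res)
        if not (0 <= cy < grid.shape[0] and 0 <= cx < grid.shape[1]):
            return r  # left the map: treat as a hit at the boundary
        if grid[cy, cx]:
            return r
        r += res
    return max_r

def pose_log_likelihood(grid, res, pose, ranges, angles,
                        sigma=0.1, semantic_match=False):
    """Gaussian range model; `semantic_match` is an assumed fixed boost for a
    detected object class that agrees with the annotated floor plan."""
    x, y, th = pose
    ll = 0.0
    for ang, z in zip(angles, ranges):
        expected = raycast(grid, res, x, y, th + ang)
        ll += -((z - expected) ** 2) / (2 * sigma ** 2)
    if semantic_match:
        ll += math.log(4.0)  # assumed weight for a matching semantic cue
    return ll
```

In a particle-filter setting, a score like this would weight each candidate pose; the semantic term is what disambiguates geometrically similar rooms in a sparse floor plan.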