
    Where Should We Place LiDARs on the Autonomous Vehicle? - An Optimal Design Approach

    Autonomous vehicle manufacturers recognize that LiDAR provides accurate 3D views and precise distance measurements under highly uncertain driving conditions. Its practical deployment, however, remains costly. This paper investigates the optimal LiDAR configuration problem for utility maximization. We use the perception area and the non-detectable subspace to frame the design procedure as a min-max optimization problem, and we propose a bio-inspired measure, the volume-to-surface-area ratio (VSR), as an easy-to-evaluate cost function capturing the size of the non-detectable subspaces of a given configuration. We then adopt a cuboid-based approach to show that the proposed VSR-based measure is a well-suited proxy for object detection rate, and we find that the Artificial Bee Colony evolutionary algorithm makes the cost-function computation tractable. Our experiments highlight the effectiveness of the VSR measure in identifying cost-effective configurations, and they provide analyses that can inform the design of AV systems.
    Comment: 7 pages including the references; accepted by the International Conference on Robotics and Automation (ICRA), 201
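
    As a concrete illustration of the VSR idea, the sketch below scores a configuration by the worst-case volume-to-surface-area ratio of its non-detectable cuboids. The cuboid dimensions are hypothetical, and the paper's actual construction of non-detectable subspaces and its Artificial Bee Colony search are more involved.

        # Minimal sketch of the volume-to-surface-area ratio (VSR) idea from the
        # abstract above. Cuboid dimensions are hypothetical; the paper derives the
        # non-detectable subspaces from an actual LiDAR configuration.

        def cuboid_vsr(length, width, height):
            """VSR of one cuboid: volume divided by total surface area (metres)."""
            volume = length * width * height
            surface = 2 * (length * width + width * height + length * height)
            return volume / surface

        def configuration_cost(non_detectable_cuboids):
            """Worst-case (max) VSR over all non-detectable cuboids; an optimizer
            such as Artificial Bee Colony would minimize this value (min-max)."""
            return max(cuboid_vsr(*c) for c in non_detectable_cuboids)

        # Two hypothetical blind-spot cuboids (metres) around a vehicle.
        blind_spots = [(1.5, 0.8, 0.4), (3.0, 0.3, 0.3)]
        print(f"worst-case VSR: {configuration_cost(blind_spots):.3f} m")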

    Analyzing Infrastructure LiDAR Placement with Realistic LiDAR Simulation Library

    Vehicle-to-Everything (V2X) cooperative perception has recently attracted increasing attention. Infrastructure sensors play a critical role in this field, yet how to find their optimal placement is rarely studied. In this paper, we investigate the problem of infrastructure sensor placement and propose a pipeline that efficiently and effectively finds optimal installation positions for infrastructure sensors in a realistic simulated environment. To better simulate and evaluate LiDAR placement, we establish a Realistic LiDAR Simulation (RLS) library that reproduces the unique characteristics of different popular LiDARs and produces high-fidelity point clouds in the CARLA simulator. By simulating point cloud data for different LiDAR placements, we evaluate the perception accuracy of each placement using multiple detection models, and we analyze the correlation between point cloud distribution and perception accuracy by computing the density and uniformity of regions of interest. Experiments show that, with the same number and type of LiDARs, the placement optimized by our method improves average precision by 15% over the conventional placement in the standard lane scene. We further validate that density and uniformity can serve as indicators of perception performance in the region of interest. Both the RLS library and related code will be released at https://github.com/PJLab-ADG/LiDARSimLib-and-Placement-Evaluation.
    Comment: 7 pages, 6 figures; accepted to the IEEE International Conference on Robotics and Automation (ICRA'23)
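
    A minimal sketch of the density and uniformity indicators described above, assuming an axis-aligned region of interest and a simple voxel-count coefficient of variation as the uniformity score; the paper's exact definitions may differ.

        import numpy as np

        def density_and_uniformity(points, roi_min, roi_max, grid=(8, 8, 4)):
            """Point density (points per cubic metre) and a uniformity score,
            1 / (1 + coefficient of variation of per-voxel counts), inside an
            axis-aligned region of interest."""
            roi_min, roi_max = np.asarray(roi_min, float), np.asarray(roi_max, float)
            inside = np.all((points >= roi_min) & (points < roi_max), axis=1)
            pts = points[inside]
            density = len(pts) / np.prod(roi_max - roi_min)
            # Histogram the points into voxels to measure spatial uniformity.
            counts, _ = np.histogramdd(pts, bins=grid,
                                       range=list(zip(roi_min, roi_max)))
            cv = counts.std() / (counts.mean() + 1e-9)
            return density, 1.0 / (1.0 + cv)

        # Synthetic stand-in for a simulated LiDAR point cloud.
        rng = np.random.default_rng(0)
        cloud = rng.uniform([-10, -5, 0], [10, 5, 2], size=(5000, 3))
        print(density_and_uniformity(cloud, [-10, -5, 0], [10, 5, 2]))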

    Investigating the Impact of Multi-LiDAR Placement on Object Detection for Autonomous Driving

    The past few years have witnessed increasing interest in improving the perception performance of LiDARs on autonomous vehicles. While most existing work focuses on developing new deep learning algorithms or model architectures, we study the problem from the physical design perspective: how different placements of multiple LiDARs influence learning-based perception. To this end, we introduce an easy-to-compute information-theoretic surrogate metric to quickly and quantitatively evaluate LiDAR placement for 3D detection of different object types. We also present a framework for data collection, detection model training, and evaluation in the realistic CARLA simulator to compare disparate multi-LiDAR configurations. Using several prevalent placements inspired by the designs of self-driving companies, we show through extensive experiments the correlation between our surrogate metric and the object detection performance of representative algorithms on KITTI, validating the effectiveness of our placement evaluation approach. Our results show that sensor placement is non-negligible in 3D point cloud-based object detection, accounting for up to a 10% discrepancy in average precision in challenging 3D object detection settings. We believe this is one of the first studies to quantitatively investigate the influence of LiDAR placement on perception performance. The code is available at https://github.com/HanjiangHu/Multi-LiDAR-Placement-for-3D-Detection.
    Comment: CVPR 2022 camera-ready version: 15 pages, 14 figures, 9 tables
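
    The sketch below illustrates one plausible shape for an information-theoretic placement surrogate: voxels of interest are treated as Bernoulli occupancy variables, and a placement is scored by the total prior entropy of the voxels its sensors can observe. The range-only visibility test and all numbers are simplifying assumptions, not the paper's actual metric.

        import numpy as np

        def bernoulli_entropy(p):
            """Entropy (bits) of a Bernoulli(p) occupancy variable."""
            p = np.clip(p, 1e-9, 1 - 1e-9)
            return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

        def placement_score(sensor_origins, voxel_centers, max_range=30.0, prior=0.5):
            """Sum of prior entropies of voxels within range of any sensor: a cheap
            proxy for how much uncertainty the placement can remove. A crude
            range test stands in for real ray casting."""
            observed = np.zeros(len(voxel_centers), dtype=bool)
            for origin in sensor_origins:
                observed |= np.linalg.norm(voxel_centers - origin, axis=1) <= max_range
            return bernoulli_entropy(prior) * observed.sum()

        # Voxel grid around the ego vehicle, plus two candidate placements.
        grid = np.stack(np.meshgrid(np.arange(-40, 40, 2.0),
                                    np.arange(-10, 10, 2.0),
                                    np.arange(0, 4, 2.0)), -1).reshape(-1, 3)
        roof = np.array([[0.0, 0.0, 1.8]])
        corners = np.array([[2.0, 1.0, 0.8], [2.0, -1.0, 0.8],
                            [-2.0, 1.0, 0.8], [-2.0, -1.0, 0.8]])
        print(placement_score(roof, grid), placement_score(corners, grid))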

    Team MIT Urban Challenge Technical Report

    This technical report describes Team MIT's approach to the DARPA Urban Challenge. We have developed a novel strategy for using many inexpensive sensors, mounted on the vehicle periphery and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprising an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. These innovations are being incorporated into two new robotic vehicles equipped for autonomous driving in urban environments, with extensive testing on a DARPA site-visit course. Experimental results demonstrate all basic navigation and some basic traffic behaviors, including unoccupied autonomous driving, lane following using pure-pursuit control and our local-frame perception strategy, obstacle avoidance using kino-dynamic RRT path planning, U-turns, and precedence evaluation among other cars at intersections using our situational interpreter. We are working to extend these approaches to advanced navigation and traffic scenarios.
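
    Pure-pursuit control, named above as the lane-following technique, reduces to a compact geometric rule. The sketch below shows that geometry for a bicycle model; it is a generic textbook form, independent of the team's actual implementation.

        import math

        def pure_pursuit_steering(lookahead_point, wheelbase, lookahead_dist):
            """Steering angle (bicycle model) that arcs the rear axle through a
            lookahead point given in the vehicle frame (x forward, y left)."""
            x, y = lookahead_point
            alpha = math.atan2(y, x)                    # heading error to the point
            curvature = 2.0 * math.sin(alpha) / lookahead_dist
            return math.atan(wheelbase * curvature)

        # Example: target 5 m ahead, 0.5 m to the left, 2.9 m wheelbase.
        print(math.degrees(pure_pursuit_steering((5.0, 0.5), 2.9, 5.02)))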

    Infrastructure based communication architecture to facilitate autonomous driving and communications

    The traditional autonomous vehicle (AV) architecture places a heavy burden on the vehicle's graphics processing units due to heavy signal processing requirements, which ultimately degrades AV performance. This load comes mainly from the advanced sensors that give AVs their vision, such as Light Detection and Ranging (LiDAR), radar, and cameras. In most AV models adopted by leading automobile companies, LiDAR plays a significant role: it generates a high-definition (HD) point cloud of the surroundings to obtain a precise map, and the AV makes decisions by processing terabyte-scale data on board. Even so, vehicle-mounted LiDARs cannot provide information beyond a human driver's line of sight. To address these drawbacks of traditional AVs, we propose an infrastructure-based communication architecture to facilitate autonomous driving and communications. A set of coordinated LiDAR modules with integrated transceivers, mounted at an elevation with a bird's-eye view, can provide a much larger field of view (FoV), with decisions taken by a centralized body. We establish the technical feasibility of the system from both the sensing and communication points of view. The proposed architecture can play a supportive role alongside traditional AV architectures, and it can be applied in many settings, such as automating harbours and factory floors. In the second part of the thesis, we address a resource allocation problem for ultra-reliable low-latency communication (URLLC) on a factory floor. Using a convex optimization formulation, we analytically show that the proposed system can establish reliable (packet error probability below 10^(-5)), low-latency (transmission delay below 1 ms) links with sufficient throughput (kilobit scale). Latency, throughput, and reliability variations are studied under the short-packet transmission regime of the proposed system.
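
    The reliability and latency figures above belong to the short-packet (finite-blocklength) regime, where the well-known normal approximation R ≈ C - sqrt(V/n) * Q^{-1}(eps) applies. The sketch below evaluates that approximation with illustrative numbers only; the thesis's actual convex resource-allocation formulation is not reproduced here.

        import math
        from statistics import NormalDist

        def achievable_bits(snr, blocklength, error_prob):
            """Approximate information bits deliverable in `blocklength` channel
            uses at the given SNR and packet error probability (normal
            approximation to the finite-blocklength rate)."""
            capacity = math.log2(1 + snr)                      # bits per channel use
            dispersion = (1 - 1 / (1 + snr) ** 2) * math.log2(math.e) ** 2
            q_inv = NormalDist().inv_cdf(1 - error_prob)       # Q^{-1}(eps)
            rate = capacity - math.sqrt(dispersion / blocklength) * q_inv
            return blocklength * rate

        # 200 channel uses (sub-millisecond at wide bandwidth), SNR 10 dB, eps 1e-5.
        print(f"{achievable_bits(10 ** (10 / 10), 200, 1e-5):.0f} bits")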

    A novel low-cost autonomous 3D LIDAR system

    Thesis (M.S.), University of Alaska Fairbanks, 2018.
    To aid in humanity's efforts to colonize alien worlds, NASA's Robotic Mining Competition pits universities against one another to design autonomous mining robots that can extract the materials necessary for producing oxygen, water, fuel, and infrastructure. To mine autonomously on uneven terrain, the robot must be able to produce a 3D map of its surroundings and navigate around obstacles. However, sensors that can be used for 3D mapping are typically expensive, have high computational requirements, and/or are designed primarily for indoor use. This thesis describes the creation of a novel low-cost 3D mapping system built around a pair of rotating LIDAR sensors attached to a mobile testing platform, demonstrates the use of this system for 3D obstacle detection and navigation, and investigates the use of deep learning to improve the scanning efficiency of the sensors.
    Contents:
    Chapter 1. Introduction -- 1.1. Purpose -- 1.2. 3D Sensors -- 1.2.1. Cameras -- 1.2.2. RGB-D Cameras -- 1.2.3. LIDAR -- 1.3. Overview of Work and Contributions -- 1.4. Multi-LIDAR and Rotating LIDAR Systems -- 1.5. Thesis Organization.
    Chapter 2. Hardware -- 2.1. Overview -- 2.2. Components -- 2.2.1. Revo Laser Distance Sensor -- 2.2.2. Dynamixel AX-12A Smart Serial Servo -- 2.2.3. Bosch BNO055 Inertial Measurement Unit -- 2.2.4. STM32F767ZI Microcontroller and LIDAR Interface Boards -- 2.2.5. Create 2 Programmable Mobile Robotic Platform -- 2.2.6. Acer C720 Chromebook and Genius Webcam -- 2.3. System Assembly -- 2.3.1. 3D LIDAR Module -- 2.3.2. Full Assembly.
    Chapter 3. Software -- 3.1. Robot Operating System -- 3.2. Frames of Reference -- 3.3. System Overview -- 3.4. Microcontroller Firmware -- 3.5. PC-Side Point Cloud Fusion -- 3.6. Localization System -- 3.6.1. Fusion of Wheel Odometry and IMU Data -- 3.6.2. ArUco Marker Localization -- 3.6.3. ROS Navigation Stack: Overview & Configuration -- 3.6.3.1. Costmaps -- 3.6.3.2. Path Planners.
    Chapter 4. System Performance -- 4.1. VS-LIDAR Characteristics -- 4.2. Odometry Tests -- 4.3. Stochastic Scan Dithering -- 4.4. Obstacle Detection Test -- 4.5. Navigation Tests -- 4.6. Detection of Black Obstacles -- 4.7. Performance in Sunlit Environments -- 4.8. Distance Measurement Comparison.
    Chapter 5. Case Study: Adaptive Scan Dithering -- 5.1. Introduction -- 5.2. Adaptive Scan Dithering Process Overview -- 5.3. Coverage Metrics -- 5.4. Reward Function -- 5.5. Network Configuration -- 5.6. Performance and Remarks.
    Chapter 6. Conclusions and Future Work -- 6.1. Conclusions -- 6.2. Future Work -- 6.3. Lessons Learned.
    References.
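
    The core geometry of such a rotating-LIDAR system is simple: each planar scan is lifted into 3D by the servo rotation at which it was captured. A minimal sketch follows, with assumed frame conventions that need not match the thesis's.

        import numpy as np

        def scan_to_points(ranges, scan_angles, servo_angle):
            """Convert one 2D scan (range, bearing) taken at a given servo rotation
            about the x-axis into 3D points in the sensor base frame."""
            x = ranges * np.cos(scan_angles)       # scan plane before rotation
            y = ranges * np.sin(scan_angles)
            z = np.zeros_like(x)
            c, s = np.cos(servo_angle), np.sin(servo_angle)
            rot_x = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
            return (rot_x @ np.vstack([x, y, z])).T

        # A 181-beam scan at constant 2 m range, with the servo tilted 45 degrees.
        angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
        print(scan_to_points(np.full(181, 2.0), angles, np.pi / 4).shape)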

    An overview of lidar imaging systems for autonomous vehicles

    Lidar imaging systems are one of the hottest topics in the optronics industry. The need to sense the surroundings of every autonomous vehicle has set off a race to decide which solution will finally be implemented. However, the diversity of state-of-the-art approaches creates substantial uncertainty about which solution will dominate, and the performance data for each approach often come from manufacturers and developers who have a stake in the dispute. In this paper, we aim to overcome this situation by providing an introductory, neutral overview of the technology behind lidar imaging systems for autonomous vehicles and of its current state of development. We start with the main single-point measurement principles in use, which are then combined with the different imaging strategies, also described in the paper. We also present an overview of the light sources and photodetectors most frequently used in practical lidar imaging systems. Finally, a brief section discusses pending issues for lidar development in autonomous vehicles, presenting some of the problems that still need to be solved before any implementation can be considered final. The reader is provided with a detailed bibliography containing both relevant books and state-of-the-art papers for further progress in the subject.
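
    For reference, the two single-point measurement principles most often combined with the imaging strategies discussed here, pulsed time-of-flight and amplitude-modulated continuous wave, reduce to short formulas, sketched below with illustrative values.

        import math

        C = 299_792_458.0  # speed of light, m/s

        def pulsed_tof_range(round_trip_time_s):
            """Pulsed time-of-flight: d = c * t / 2."""
            return C * round_trip_time_s / 2

        def amcw_range(phase_shift_rad, mod_freq_hz):
            """Amplitude-modulated continuous wave: d = c * phi / (4 * pi * f_mod),
            unambiguous only up to c / (2 * f_mod)."""
            return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

        print(pulsed_tof_range(66.7e-9))   # ~10 m round trip of 66.7 ns
        print(amcw_range(2.0, 10e6))       # 2 rad phase shift at 10 MHz modulation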

    Physics-based Simulation of Continuous-Wave LIDAR for Localization, Calibration and Tracking

    Light Detection and Ranging (LIDAR) sensors play an important role in the perception stack of autonomous robots, supplying mapping and localization pipelines with depth measurements of the environment. While their accuracy outperforms other types of depth sensors, such as stereo or time-of-flight cameras, accurately modeling LIDAR sensors requires laborious manual calibration that typically does not account for the interaction of laser light with different surface types, incidence angles, and other phenomena that significantly influence measurements. In this work, we introduce a physically plausible model of a 2D continuous-wave LIDAR that accounts for surface-light interactions and simulates the measurement process of the Hokuyo URG-04LX LIDAR. Through automatic differentiation, we employ gradient-based optimization to estimate model parameters from real sensor measurements.
    Comment: Published at ICRA 202
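
    A toy version of this calibration loop is sketched below: a differentiable measurement model, here just a gain and a bias with hand-written gradients standing in for automatic differentiation, is fitted to synthetic "observed" ranges by gradient descent. The paper's model of surface-light interaction is far richer.

        import numpy as np

        # Synthetic ground-truth distances and "measured" ranges from a
        # hypothetical gain/bias sensor model with noise.
        rng = np.random.default_rng(1)
        true_gain, true_bias = 1.02, 0.05
        d_true = rng.uniform(0.2, 4.0, 500)
        d_meas = true_gain * d_true + true_bias + rng.normal(0, 0.01, 500)

        # Gradient descent on the mean squared error of the model prediction.
        gain, bias, lr = 1.0, 0.0, 0.05
        for _ in range(2000):
            err = gain * d_true + bias - d_meas
            gain -= lr * 2 * np.mean(err * d_true)   # d(MSE)/d(gain)
            bias -= lr * 2 * np.mean(err)            # d(MSE)/d(bias)
        print(f"gain = {gain:.3f}, bias = {bias:.3f}")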

    Sensor System for Rescue Robots

    A majority of rescue worker fatalities result from on-scene responses. Existing technologies help first responders operate in scenarios with no light, and there are even robots that can navigate radioactive areas. However, none can both deploy quickly and enter hard-to-reach or unsafe areas after an emergency such as an earthquake or a storm that damages a structure. In this project we created a sensor platform system to augment existing robotic solutions, so that rescue workers can search for people in danger while avoiding preventable injury or death and saving time and resources. Our results showed that the system can build a 2D map of a room on a display, updated as the robot moves, while also showing a live thermal image of the scene in front of it; it can also take a digital picture on a triggering event and display it on the computer screen. We found that data transfer plays a major role in making programs such as Arduino and Processing interact with each other, which must be accounted for when improving the project. In particular, the project is currently wired and should deliver data wirelessly to be of practical use. We also began exploring SLAM technologies; if the project is to become autonomous, further research into those algorithms would make that autonomy feasible.
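
    One plausible way to tame the Arduino-to-host data transfer issue noted above is explicit packet framing on the serial link. The sketch below shows the host side in Python with pyserial; the port name and frame layout are hypothetical, not the project's actual protocol.

        import serial  # pyserial; a stand-in host-side reader, not the project's code

        START = 0xA5  # hypothetical start-of-frame byte

        def read_packet(port):
            """Block until a framed packet arrives and return its payload bytes.
            A fixed start byte plus a length byte keeps mixed sensor streams
            (scan points, thermal frames, trigger events) from corrupting one
            another. A real reader should also handle timeouts and checksums."""
            while port.read(1) != bytes([START]):   # resynchronise on start byte
                pass
            length = port.read(1)[0]
            return port.read(length)

        if __name__ == "__main__":
            with serial.Serial("/dev/ttyUSB0", 115200, timeout=1.0) as link:
                print(read_packet(link).hex())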