
    FLAT2D: Fast localization from approximate transformation into 2D

    Many autonomous vehicles require precise localization within a prior map in order to support planning and to leverage semantic information within those maps (e.g., that the right lane is a turn-only lane). A popular approach in automotive systems is to localize against infrared intensity maps of the ground surface, but such maps are susceptible to failures when the surface is obscured by snow or when the road is repainted. An emerging alternative is to localize based on the 3D structure around the vehicle; these methods are robust to such changes, but the maps are costly both in terms of storage and the computational cost of matching. In this paper, we propose a fast method for localizing based on 3D structure around the vehicle using a 2D representation. This representation retains many of the advantages of "full" matching in 3D, but comes with dramatically lower space and computational requirements. We also introduce a variation of Graph-SLAM tailored to support localization, allowing us to make use of graph-based error-recovery techniques in our localization estimate. Finally, we present real-world localization results for both an indoor mobile robotic platform and an autonomous golf cart, demonstrating that autonomous vehicles do not need full 3D matching to accurately localize in the environment.
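    The abstract does not spell out the 2D representation itself; as a generic illustration of the underlying idea (not FLAT2D's actual method), the sketch below collapses a 3D scan into a 2D occupancy grid and scores a candidate pose by correlating it against a prior map grid. All names, sizes, and resolutions are placeholders.

        import numpy as np

        def scan_to_grid(points, resolution=0.1, size=200):
            """Collapse a 3D point cloud (N x 3, vehicle at the origin)
            into a 2D occupancy grid by discarding height, keeping only
            the footprint of the structure around the vehicle."""
            grid = np.zeros((size, size), dtype=np.float32)
            ij = (points[:, :2] / resolution + size / 2).astype(int)
            valid = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
            grid[ij[valid, 0], ij[valid, 1]] = 1.0
            return grid

        def correlation_score(scan_grid, map_patch):
            """Score how well the live scan matches a patch of the map."""
            return float((scan_grid * map_patch).sum())

    A localizer in this style would evaluate correlation_score over a set of candidate poses (shifted and rotated scan grids) and keep the best one, which is what gives 2D matching its speed and storage advantage over full 3D matching.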

    The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping

    Many tasks performed by autonomous vehicles, such as road marking detection, object tracking, and path planning, are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D domain, resulting in a top-down view. Unfortunately, this leads to unnatural blurring and stretching of objects at greater distances, due to the resolution of the camera, limiting applicability. In this paper, we present an adversarial learning approach for generating a significantly improved IPM from a single camera image in real time. The generated bird's-eye-view images contain sharper features (e.g., road markings) and a more homogeneous illumination, while (dynamic) objects are automatically removed from the scene, thus revealing the underlying road layout in an improved fashion. We demonstrate our framework using real-world data from the Oxford RobotCar Dataset and show that scene understanding tasks directly benefit from our boosted IPM approach.
    Comment: equal contribution of first two authors, 8 full pages, 6 figures, accepted at IV 2019
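    Classical IPM, the baseline this paper boosts, is a plane-to-plane homography. A minimal OpenCV sketch, assuming four hand-picked image/ground correspondences (in a real system these come from camera calibration; the learned adversarial mapping itself is not reproduced here):

        import cv2
        import numpy as np

        # Placeholder correspondences: points on the road surface in the
        # front-facing image and where they should land in the top-down view.
        src = np.float32([[520, 460], [760, 460], [1180, 700], [100, 700]])
        dst = np.float32([[300, 0], [500, 0], [500, 600], [300, 600]])

        H = cv2.getPerspectiveTransform(src, dst)
        img = cv2.imread("front_camera.png")  # placeholder path
        birdseye = cv2.warpPerspective(img, H, (800, 600))

    Pixels far from the camera get stretched over many output pixels, which is exactly the blurring and stretching artifact the learned approach is designed to remove.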

    Milestones in Autonomous Driving and Intelligent Vehicles Part II: Perception and Planning

    Growing interest in autonomous driving (AD) and intelligent vehicles (IVs) is fueled by their promise of enhanced safety, efficiency, and economic benefits. While previous surveys have captured progress in this field, a comprehensive and forward-looking summary is needed. Our work fills this gap through three distinct articles. The first part, a "Survey of Surveys" (SoS), outlines the history, surveys, ethics, and future directions of AD and IV technologies. The second part, "Milestones in Autonomous Driving and Intelligent Vehicles Part I: Control, Computing System Design, Communication, HD Map, Testing, and Human Behaviors", delves into the development of control, computing system design, communication, HD maps, testing, and human behaviors in IVs. This third part reviews perception and planning in the context of IVs. Aiming to provide a comprehensive overview of the latest advancements in AD and IVs, this work caters to both newcomers and seasoned researchers. By integrating the SoS and Part I, we offer unique insights and strive to serve as a bridge between past achievements and future possibilities in this dynamic field.
    Comment: 17 pages, 6 figures. IEEE Transactions on Systems, Man, and Cybernetics: Systems

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques, since it can support more reliable and robust localization, planning, and control, meeting key criteria for autonomous driving. In this study the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with 4-5 cm accuracy can be achieved based on this pre-generated map and online Lidar scan matching with a tightly fused inertial system.
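    The online localization step described here is, at its core, a scan-to-map registration. A minimal sketch of that general pattern using Open3D's point-to-plane ICP (an assumption chosen for illustration; the paper's exact matcher and its tight inertial fusion are not specified in the abstract):

        import numpy as np
        import open3d as o3d

        def localize_scan(scan, prior_map, init_pose, max_dist=1.0):
            """Register a live Lidar scan against a pre-built point cloud
            map. init_pose (4x4) would come from the previous estimate
            propagated by the inertial system, which this sketch omits."""
            prior_map.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
            result = o3d.pipelines.registration.registration_icp(
                scan, prior_map, max_dist, init_pose,
                o3d.pipelines.registration.TransformationEstimationPointToPlane())
            return result.transformation  # vehicle pose in the map frame

        # Usage sketch: pose = localize_scan(scan_cloud, map_cloud, np.eye(4))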

    Methods for Wheel Slip and Sinkage Estimation in Mobile Robots

    Future outdoor mobile robots will have to explore larger and larger areas, performing difficult tasks while at the same time preserving their safety. This will primarily require advanced sensing and perception capabilities. Video sensors supply contact-free, precise measurements and are flexible devices that can be easily integrated with multi-sensor robotic platforms. Hence, they represent a potential answer to the need for new and improved perception capabilities for autonomous vehicles. One of the main applications of vision in mobile robotics is localization. For mobile robots operating on rough terrain, conventional dead reckoning techniques are not well suited, since wheel slip, sinkage, and sensor drift may cause localization errors that accumulate without bound during the vehicle’s travel. Conversely, video sensors are exteroceptive devices, that is, they acquire information from the robot’s environment; therefore, vision-based motion estimates are independent of any knowledge of terrain properties and wheel-terrain interaction. Like dead reckoning, vision can also accumulate errors; however, it has been shown that, compared to dead reckoning, it allows more accurate results and can be considered a promising solution to the problem of robust robot positioning in high-slip environments. As a consequence, in the last few years, several localization methods using vision have been developed. Among them, visual odometry algorithms, based on the tracking of visual features over subsequent images, have proved particularly effective. Accurate and reliable methods to sense slippage and sinkage are also desirable, since these effects compromise the vehicle’s traction performance and energy consumption, and lead to gradual deviation of the robot from the intended path, possibly resulting in large drift and poor results of localization and control systems. For example, conventional dead reckoning is largely compromised, since it is based on the assumption that wheel revolutions can be translated into corresponding linear displacements. Thus, if one wheel slips, the associated encoder will register revolutions even though these revolutions do not correspond to a linear displacement of the wheel. Conversely, if one wheel skids, fewer encoder pulses will be counted. Slippage and sinkage measurements are also valuable for terrain identification according to classical terramechanics theory. This chapter investigates vision-based onboard technology to improve the mobility of robots on natural terrain. A visual odometry algorithm and two methods for online measurement of vehicle slip angle and wheel sinkage, respectively, are discussed. Test results are presented showing the performance of the proposed approaches using an all-terrain rover moving across uneven terrain.
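    The encoder-versus-displacement mismatch described above is conventionally summarized as a longitudinal slip ratio. A minimal sketch under the standard driving-phase definition (the symbols are the conventional ones, not taken from the chapter):

        def slip_ratio(wheel_radius, wheel_omega, ground_speed):
            """Longitudinal slip s = (r*omega - v) / (r*omega).
            s = 0: pure rolling (encoder counts match true displacement);
            s > 0: slipping (encoder registers motion that never happened);
            s < 0: skidding (wheel turns slower than the ground moves)."""
            v_wheel = wheel_radius * wheel_omega
            if v_wheel == 0.0:
                return 0.0
            return (v_wheel - ground_speed) / v_wheel

    Here ground_speed must come from an exteroceptive source such as visual odometry, which is exactly why the chapter pairs slip estimation with a vision-based motion estimate.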

    Robust ego-localization using monocular visual odometry


    Visual computing techniques for automated LIDAR annotation with application to intelligent transport systems

    The concept of Intelligent Transport Systems (ITS) refers to the application of communication and information technologies to transport with the aim of making it more efficient, sustainable, and safer. Computer vision is increasingly being used for ITS applications, such as infrastructure management or advanced driver-assistance systems. The latest progress in computer vision, thanks to Deep Learning techniques, and the race for autonomous vehicles have created a growing requirement for annotated data in the automotive industry. The data to be annotated consist of images captured by the vehicles' cameras and LIDAR data in the form of point clouds. LIDAR sensors are used for tasks such as object detection and localization. The capacity of LIDAR sensors to identify objects at long distances and to provide estimates of their distance makes them very appealing for autonomous driving. This thesis presents a method to automate the annotation of lane markings with LIDAR data. The state of the art of lane marking detection based on LIDAR data is reviewed and a novel method is presented. The precision of the method is evaluated against manually annotated data. Its usefulness is also evaluated by measuring the reduction in the time required to annotate new data thanks to the automatically generated pre-annotations. Finally, the conclusions of this thesis and possible future research lines are presented.
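    Lane markings are retroreflective and therefore stand out in Lidar intensity returns. A generic intensity-threshold baseline along those lines (a hypothetical pre-annotation sketch, not the method developed in the thesis):

        import numpy as np

        def preannotate_lane_markings(points, intensity, z_ground=0.0,
                                      z_tol=0.3, intensity_thresh=0.7):
            """Flag candidate lane-marking points in a Lidar sweep.
            points: (N, 3) xyz; intensity: (N,) reflectivity in [0, 1].
            Thresholds are placeholders; a real pipeline fits the ground
            plane first and calibrates intensity per beam."""
            on_ground = np.abs(points[:, 2] - z_ground) < z_tol
            reflective = intensity > intensity_thresh
            return on_ground & reflective

    Pre-annotations like this mask would then be corrected by a human annotator, which is where the measured reduction in annotation time comes from.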