5,823 research outputs found

    A Flexible Modeling Approach for Robust Multi-Lane Road Estimation

    A robust estimation of the road course and traffic lanes is an essential part of environment perception for the next generation of Advanced Driver Assistance Systems and the development of self-driving vehicles. In this paper, a flexible method for modeling multiple lanes on board a vehicle in real time is presented. Information about traffic lanes, derived from cameras and other environmental sensors and represented as features, serves as input to an iterative expectation-maximization method that estimates a lane model. The generic and modular concept of the approach allows the mathematical functions used for the geometrical description of the lanes to be chosen freely. In addition to the current measurement data, the previously estimated result as well as additional constraints reflecting the parallelism and continuity of traffic lanes are considered in the optimization process. To evaluate the lane estimation method, its performance is showcased using cubic splines for the geometric representation of lanes, both in simulated scenarios and on measurements recorded with a development vehicle. In a comparison against ground truth data, the robustness and precision of the lanes estimated up to a distance of 120 m are demonstrated. As part of the environmental model, the presented method can be utilized for the longitudinal and lateral control of autonomous vehicles.
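
    To make the abstract's idea more concrete, here is a minimal sketch of an EM-style multi-lane fit. It assumes lane features are 2D points (x longitudinal, y lateral) and approximates each lane boundary by a cubic polynomial rather than the paper's cubic splines; the hard assignment, the prior pulling each boundary towards its previous estimate, and all function and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: EM-style multi-lane fit with cubic polynomials (assumption:
# the paper uses cubic splines and a richer constraint set).
import numpy as np

def fit_cubic(points, prior=None, prior_weight=0.0):
    """Least-squares cubic fit y = c0 + c1*x + c2*x^2 + c3*x^3, optionally pulled
    towards a prior coefficient vector (a stand-in for the continuity term)."""
    x, y = points[:, 0], points[:, 1]
    A = np.vander(x, 4, increasing=True)                 # columns: 1, x, x^2, x^3
    AtA, Aty = A.T @ A, A.T @ y
    if prior is not None and prior_weight > 0.0:
        AtA += prior_weight * np.eye(4)
        Aty += prior_weight * prior
    return np.linalg.solve(AtA, Aty)

def em_lane_fit(features, init_coeffs, n_iter=10, prior_weight=0.0):
    """Alternate between assigning features to the nearest boundary (E-step) and
    refitting each boundary from its assigned features (M-step)."""
    coeffs = [np.asarray(c, dtype=float).copy() for c in init_coeffs]
    for _ in range(n_iter):
        # E-step: hard assignment by lateral distance to each current boundary.
        preds = np.stack([np.polyval(c[::-1], features[:, 0]) for c in coeffs])
        assign = np.argmin(np.abs(preds - features[:, 1]), axis=0)
        # M-step: refit each boundary, keeping the previous estimate as a soft prior.
        for k, c in enumerate(coeffs):
            pts = features[assign == k]
            if len(pts) >= 4:
                coeffs[k] = fit_cubic(pts, prior=c, prior_weight=prior_weight)
    return coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 120.0, 400)
    boundary = lambda x, offset: offset + 0.0005 * x**2  # two parallel boundaries
    features = np.vstack([
        np.stack([x, boundary(x, 0.0) + rng.normal(0, 0.1, x.size)], axis=1),
        np.stack([x, boundary(x, 3.5) + rng.normal(0, 0.1, x.size)], axis=1),
    ])
    init = [np.array([0.0, 0.0, 0.0, 0.0]), np.array([3.5, 0.0, 0.0, 0.0])]
    for c in em_lane_fit(features, init, prior_weight=1.0):
        print(np.round(c, 4))
```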

    Multi-Lane Perception Using Feature Fusion Based on GraphSLAM

    An extensive, precise, and robust recognition and modeling of the environment is a key factor for the next generation of Advanced Driver Assistance Systems and the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM, and the result constitutes the basis for a multi-lane clothoid model. To allow the incorporation of additional information sources, the input data is processed in a generic format. The method is evaluated by comparing real data, collected with an experimental vehicle on highways, against a ground truth map. The results show that the ego lane and adjacent lanes are robustly detected with high quality up to a distance of 120 m. Compared with a series-production lane detection system, the detection range of the ego lane is increased and neighboring lanes are perceived continuously. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.
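
    As a rough illustration of the accumulation-and-fusion idea (not the paper's GraphSLAM back end), the sketch below transforms lane-marking detections from several time steps into a common frame using known ego poses and fits the common third-order clothoid approximation y(x) ≈ y0 + phi*x + c0*x^2/2 + c1*x^3/6. The poses, noise levels, and function names are assumptions made for the example.

```python
# Sketch only: accumulate marking points across frames, then fit a clothoid
# approximation (assumption: poses are known; the paper estimates them via GraphSLAM).
import numpy as np

def to_world(pose, points_ego):
    """Transform ego-frame points into the world frame given pose = (x, y, yaw)."""
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    return points_ego @ R.T + np.array([x, y])

def fit_clothoid_approx(points):
    """Least-squares fit of (y0, phi, c0, c1) to the accumulated marking points."""
    x, y = points[:, 0], points[:, 1]
    A = np.stack([np.ones_like(x), x, 0.5 * x**2, x**3 / 6.0], axis=1)
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return params  # lateral offset, heading, curvature, curvature rate

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Simulated straight-ahead drive: three poses 10 m apart along x.
    poses = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
    boundary = lambda x: 1.8 + 0.001 * x**2          # slightly curving lane boundary
    accumulated = []
    for px, py, yaw in poses:
        xs = rng.uniform(0.0, 60.0, 50)              # detections ahead of the vehicle
        ego_pts = np.stack(
            [xs, boundary(xs + px) - py + rng.normal(0, 0.05, xs.size)], axis=1)
        accumulated.append(to_world((px, py, yaw), ego_pts))
    params = fit_clothoid_approx(np.vstack(accumulated))
    print("y0, phi, c0, c1 =", np.round(params, 5))
```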

    Deep Learning in Lane Marking Detection: A Survey

    Lane marking detection is a fundamental and crucial step in intelligent driving systems. It not only provides relevant road-condition information that helps prevent lane departure, but also assists in vehicle positioning and the detection of the vehicle ahead. However, lane marking detection faces many challenges, including extreme lighting, missing lane markings, and occlusion by obstacles. Recently, deep learning-based algorithms have drawn much attention in the intelligent driving community because of their excellent performance. In this paper, we review deep learning methods for lane marking detection, focusing on their network structures and optimization objectives, the two key determinants of their success. In addition, we summarize existing lane-related datasets, evaluation criteria, and common data processing techniques. We also compare the detection performance and running time of various methods, and conclude with current challenges and future trends for deep learning-based lane marking detection algorithms.
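
    As a toy illustration of the segmentation-style formulation and combined objectives that many of the surveyed methods share, the following PyTorch sketch pairs a deliberately small encoder-decoder with a binary cross-entropy plus soft-Dice loss. The architecture, loss combination, and tensor shapes are assumptions for the example and do not represent any specific method from the survey.

```python
# Sketch only: a tiny per-pixel lane-marking segmentation model and loss
# (assumption: binary mask targets; real methods in the survey vary widely).
import torch
import torch.nn as nn

class TinyLaneNet(nn.Module):
    """A deliberately small encoder-decoder that outputs one logit per pixel."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def lane_loss(logits, target, eps=1e-6):
    """Binary cross-entropy plus a soft Dice term, a common combination for the
    heavy foreground/background imbalance of thin lane markings."""
    bce = nn.functional.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)
    return bce + dice

if __name__ == "__main__":
    model = TinyLaneNet()
    images = torch.randn(2, 3, 128, 256)                  # dummy image batch
    masks = (torch.rand(2, 1, 128, 256) > 0.95).float()   # sparse dummy "markings"
    loss = lane_loss(model(images), masks)
    loss.backward()
    print("loss:", float(loss))
```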

    Recognizing Features in Mobile Laser Scanning Point Clouds Towards 3D High-definition Road Maps for Autonomous Vehicles

    The sensors mounted on a driverless vehicle are not always reliable enough for precise localization and navigation. By comparing real-time sensory data with an a priori map, the autonomous navigation system can transform the complicated sensor-perception mission into a simple map-based localization task. However, the lack of robust solutions and standards for creating such lane-level high-definition road maps is a major challenge in this emerging field. This thesis presents a semi-automated method for extracting meaningful road features from mobile laser scanning (MLS) point clouds and creating 3D high-definition road maps for autonomous vehicles. After pre-processing steps including coordinate-system transformation and non-ground point removal, a road edge detection algorithm is performed to distinguish road curbs and extract road surfaces, followed by the extraction of two categories of road markings. On the one hand, textual and directional road markings, including arrows, symbols, and words, are detected by intensity thresholding and conditional Euclidean clustering. On the other hand, lane markings (lines) are extracted by local intensity analysis and distance thresholding according to road design standards. Afterwards, centerline points in every single lane are estimated based on the positions of the extracted lane markings. Ultimately, 3D road maps with precise road boundaries, road markings, and the estimated lane centerlines are created. The experimental results demonstrate the feasibility of the proposed method, which can accurately extract most road features from the MLS point clouds. The average recall, precision, and F1-score obtained from four datasets for road marking extraction are 93.87%, 93.76%, and 93.73%, respectively. All of the estimated lane centerlines are validated against "ground truthing" data manually digitized from the 4 cm resolution UAV orthoimages. A comparison study shows that the proposed method performs better than several existing methods.
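
    The thresholding, clustering, and centerline steps described above can be pictured with the following simplified sketch (not the thesis pipeline). It assumes the ground points of one road segment arrive as an (N, 4) array of x, y, z, intensity, that markings are brighter than the asphalt, and that the road runs roughly along x; the percentile threshold, clustering radius, and function names are illustrative choices.

```python
# Sketch only: intensity thresholding -> Euclidean clustering -> centerline
# estimation on a synthetic road segment (a simplification of the described steps).
import numpy as np
from scipy.spatial import cKDTree

def extract_marking_points(points, intensity_percentile=95):
    """Keep high-intensity points, a simple stand-in for intensity thresholding."""
    thresh = np.percentile(points[:, 3], intensity_percentile)
    return points[points[:, 3] >= thresh]

def euclidean_clusters(points_xy, radius=0.5, min_size=20):
    """Group points whose mutual distance is below `radius` (union-find over a
    KD-tree), a crude stand-in for conditional Euclidean clustering."""
    parent = np.arange(len(points_xy))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in cKDTree(points_xy).query_pairs(radius):
        parent[find(i)] = find(j)
    roots = np.array([find(i) for i in range(len(points_xy))])
    return [np.flatnonzero(roots == r) for r in np.unique(roots)
            if np.count_nonzero(roots == r) >= min_size]

def lane_centerline(left_xy, right_xy, n_samples=20):
    """Estimate centerline points as midpoints between two neighbouring lane lines,
    sampled at common longitudinal stations."""
    xs = np.linspace(max(left_xy[:, 0].min(), right_xy[:, 0].min()),
                     min(left_xy[:, 0].max(), right_xy[:, 0].max()), n_samples)
    yl = np.interp(xs, *left_xy[np.argsort(left_xy[:, 0])].T)
    yr = np.interp(xs, *right_xy[np.argsort(right_xy[:, 0])].T)
    return np.stack([xs, 0.5 * (yl + yr)], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic road segment: dull asphalt returns plus two bright lane lines.
    xa = rng.uniform(0.0, 50.0, 3000)
    cloud = np.stack([xa, rng.uniform(-4.0, 4.0, xa.size),
                      np.zeros_like(xa), rng.normal(20.0, 3.0, xa.size)], axis=1)
    for offset in (-1.75, 1.75):
        xm = rng.uniform(0.0, 50.0, 700)
        line = np.stack([xm, offset + rng.normal(0.0, 0.05, xm.size),
                         np.zeros_like(xm), rng.normal(80.0, 3.0, xm.size)], axis=1)
        cloud = np.vstack([cloud, line])
    marks = extract_marking_points(cloud, intensity_percentile=70)
    clusters = euclidean_clusters(marks[:, :2], radius=1.0)
    if len(clusters) >= 2:
        left, right = (marks[idx, :2] for idx in
                       sorted(clusters, key=lambda idx: marks[idx, 1].mean())[:2])
        print(lane_centerline(left, right)[:3])   # centerline y should be near 0
```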