25 research outputs found

    A Flexible Modeling Approach for Robust Multi-Lane Road Estimation

    Robust estimation of the road course and traffic lanes is an essential part of environment perception for the next generation of Advanced Driver Assistance Systems and for the development of self-driving vehicles. In this paper, a flexible method for modeling multiple lanes in real time on board a vehicle is presented. Information about traffic lanes, derived from cameras and other environmental sensors and represented as features, serves as input to an iterative expectation-maximization method that estimates a lane model. The generic and modular concept of the approach allows the mathematical functions used for the geometric description of the lanes to be chosen freely. In addition to the current measurement data, the previously estimated result as well as additional constraints reflecting the parallelism and continuity of traffic lanes are considered in the optimization process. To evaluate the lane estimation method, its performance is showcased using cubic splines for the geometric representation of lanes, both in simulated scenarios and on measurements recorded with a development vehicle. In a comparison against ground-truth data, the robustness and precision of the lanes estimated up to a distance of 120 m are demonstrated. As part of the environmental model, the presented method can be utilized for longitudinal and lateral control of autonomous vehicles.
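
    A minimal sketch of the expectation-maximization idea described above is shown below. It is illustrative rather than the authors' implementation: lane features are treated as 2D points, each lane boundary is a cubic polynomial instead of the paper's spline model, and the parallelism/continuity constraints and the temporal prior are omitted. All function and parameter names are assumptions.

```python
# EM-style multi-lane fit (illustrative sketch, not the authors' code).
# Assumptions: features are 2D points (x longitudinal, y lateral, metres) and each
# lane boundary is a cubic polynomial y = c0 + c1*x + c2*x^2 + c3*x^3.
import numpy as np

def em_lane_fit(points, n_lanes=2, n_iter=20, sigma=0.5):
    x, y = points[:, 0], points[:, 1]
    X = np.vander(x, 4, increasing=True)                    # design matrix [1, x, x^2, x^3]
    coeffs = np.zeros((n_lanes, 4))
    coeffs[:, 0] = np.linspace(y.min(), y.max(), n_lanes)   # start as shifted straight lines
    for _ in range(n_iter):
        # E-step: soft-assign every feature point to each lane by its lateral residual.
        resid = y[:, None] - X @ coeffs.T                    # shape (n_points, n_lanes)
        resp = np.exp(-0.5 * (resid / sigma) ** 2)
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: weighted least-squares refit of each lane polynomial.
        for k in range(n_lanes):
            W = resp[:, k][:, None]
            coeffs[k] = np.linalg.lstsq((W * X).T @ X, (W * X).T @ y, rcond=None)[0]
    return coeffs

# Example: noisy features from two roughly parallel boundaries 3.5 m apart.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 120.0, 400)
ys = np.where(rng.random(400) < 0.5, 0.0, 3.5) + 0.01 * xs + rng.normal(0.0, 0.2, 400)
print(em_lane_fit(np.column_stack([xs, ys])))
```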

    Multi-Lane Perception Using Feature Fusion Based on GraphSLAM

    Extensive, precise, and robust recognition and modeling of the environment is a key factor for the next generation of Advanced Driver Assistance Systems and for the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM, and the result constitutes the basis for a multilane clothoid model. To allow the incorporation of additional information sources, the input data is processed in a generic format. The method is evaluated by comparing real data, collected with an experimental vehicle on highways, to a ground-truth map. The results show that the ego lane and adjacent lanes are robustly detected with high quality up to a distance of 120 m. In comparison to a series-production lane detection system, an increased detection range for the ego lane and continuous perception of neighboring lanes are achieved. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.
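
    As a rough illustration of the accumulation step mentioned above, the sketch below transforms per-frame lane-marking detections from the vehicle frame into a common world frame using the ego pose. The GraphSLAM fusion and the clothoid fit themselves are not reproduced, and all names are hypothetical.

```python
# Accumulating per-frame lane-marking detections into one world frame (sketch only;
# the paper fuses the accumulated features with GraphSLAM, which is not shown here).
import numpy as np

def to_world(points_vehicle, pose):
    """points_vehicle: (N, 2) detections in the vehicle frame; pose = (x, y, yaw) in the world frame."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return points_vehicle @ R.T + np.array([x, y])

# Two frames observing the same straight lane marking 1.75 m to the left of the vehicle.
frame0 = np.array([[5.0, 1.75], [10.0, 1.75], [15.0, 1.75]])
frame1 = np.array([[5.0, 1.75], [10.0, 1.75]])            # observed 10 m further along the road
accumulated = np.vstack([to_world(frame0, (0.0, 0.0, 0.0)),
                         to_world(frame1, (10.0, 0.0, 0.0))])
print(accumulated)
```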

    ObjectFlow: A Descriptor for Classifying Traffic Motion

    We present and evaluate a novel scene descriptor for classifying urban traffic by object motion. Atomic 3D flow vectors are extracted and compensated for the vehicle's egomotion using stereo video sequences. Votes cast by each flow vector are accumulated in a bird's-eye-view histogram grid. Since we directly use low-level object flow, no prior object detection or tracking is needed. We demonstrate the effectiveness of the proposed descriptor by comparing it to two simpler baselines on the task of classifying more than 100 challenging video sequences into intersection and non-intersection scenarios. Our experiments reveal good classification performance in busy traffic situations, making our method a valuable complement to traditional approaches based on lane markings.
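
    A minimal sketch of how such a bird's-eye-view voting grid could be built follows. The grid extent, cell size, and one-vote-per-flow-vector scheme are assumptions for illustration, not the authors' exact descriptor.

```python
# Bird's-eye-view voting grid over ego-motion-compensated flow vectors (assumed layout).
import numpy as np

def bev_histogram(flow_points, x_range=(-20.0, 20.0), z_range=(0.0, 40.0), cell=1.0):
    """flow_points: (N, 3) positions of compensated flow vectors (x lateral, y up, z forward)."""
    nx = int((x_range[1] - x_range[0]) / cell)
    nz = int((z_range[1] - z_range[0]) / cell)
    grid = np.zeros((nz, nx))
    for x, _, z in flow_points:
        i = int((z - z_range[0]) / cell)
        j = int((x - x_range[0]) / cell)
        if 0 <= i < nz and 0 <= j < nx:
            grid[i, j] += 1.0                    # one vote per flow vector
    return grid.ravel()                          # flattened descriptor, e.g. for a classifier

# Example: 200 random moving points in front of the vehicle.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-15, 15, 200), np.zeros(200), rng.uniform(0, 35, 200)])
print(bev_histogram(pts).shape)
```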

    Probabilistic lane estimation for autonomous driving using basis curves

    Lane estimation for autonomous driving can be formulated as a curve estimation problem, where local sensor data provides partial and noisy observations of the spatial curves forming lane boundaries. The number of lanes to estimate is initially unknown, and many observations may be outliers or false detections (due, for example, to shadows or non-boundary road paint). The challenges lie in detecting lanes when and where they exist, and in updating lane estimates as new observations are made. This paper describes an efficient probabilistic lane estimation algorithm based on a novel curve representation. The key advance is a principled mechanism for describing many similar curves as variations of a single basis curve. Locally observed road paint and curb features are then fused to detect and estimate all nearby travel lanes. The system handles roads with complex multi-lane geometries and makes no assumptions about the position and orientation of the vehicle with respect to the roadway. We evaluate our algorithm using a ground-truth dataset containing manually labeled, fine-grained lane geometries for vehicle travel in two large and diverse datasets that include more than 300,000 images and 44 km of roadway. The results illustrate the capabilities of our algorithm for robust lane estimation in the face of challenging conditions and unknown roadways. Funding: United States Defense Advanced Research Projects Agency (Urban Challenge, ARPA Order No. W369/00, Program Code DIRO, issued by DARPA/CMO under Contract No. HR0011-06-C-0149).
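
    The basis-curve representation can be illustrated with a small sketch: each observation is reduced to an arc-length position along a shared basis curve plus a signed lateral offset, so similar curves differ only in their offsets. The polyline form and the helper below are simplifying assumptions, not the paper's formulation.

```python
# Projecting observations onto a shared basis curve (assumed polyline simplification).
import numpy as np

def project_to_basis(basis_xy, obs_xy):
    """Return (arc length, signed lateral offset) of each observation w.r.t. the basis polyline."""
    seg = np.diff(basis_xy, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    s_start = np.concatenate([[0.0], np.cumsum(seg_len)[:-1]])
    out = []
    for p in obs_xy:
        best = None
        for k in range(len(seg)):
            d = seg[k] / seg_len[k]                    # unit direction of this segment
            t = np.clip((p - basis_xy[k]) @ d, 0.0, seg_len[k])
            v = p - (basis_xy[k] + t * d)              # vector from foot point to observation
            offset = d[0] * v[1] - d[1] * v[0]         # signed offset, left of the basis is positive
            dist = np.linalg.norm(v)
            if best is None or dist < best[0]:
                best = (dist, s_start[k] + t, offset)
        out.append(best[1:])
    return np.array(out)

basis = np.array([[0.0, 0.0], [10.0, 0.5], [20.0, 1.5], [30.0, 3.0]])  # shared basis curve
paint = np.array([[5.0, 2.0], [15.0, 2.9], [25.0, 4.1]])               # observed road-paint points
print(project_to_basis(basis, paint))  # near-constant offsets suggest one boundary hypothesis
```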

    Portable and Scalable In-vehicle Laboratory Instrumentation for the Design of i-ADAS

    According to the WHO (World Health Organization), worldwide deaths from injuries are projected to rise from 5.1 million in 1990 to 8.4 million in 2020, with traffic-related incidents as the major cause of this increase. Intelligent Advanced Driving Assistance Systems (i-ADAS) provide a number of solutions to these safety challenges. We developed a scalable in-vehicle mobile i-ADAS research platform for traffic context analysis and behavioral prediction, designed for understanding fundamental issues in intelligent vehicles. We outline our approach and describe the in-vehicle instrumentation.

    LaneMapper: A City-scale Lane Map Generator for Autonomous Driving

    Autonomous vehicles require lane maps to help navigate from a start to a goal position in a safe, comfortable, and quick manner. A lane map represents a set of features inherent to the road, such as lanes, stop signs, traffic lights, and intersections. We present a novel approach to detecting multiple lane boundaries and traffic signs in order to create a 3D city-scale map of the driving environment. We detect, recognize, and track lane boundaries using multimodal sensory and prior inputs, such as camera, LiDAR, and GPS/IMU, to assist autonomous driving. We detect and classify traffic signs from the image by exploiting the high reflectivity of LiDAR points, and we register the locations of traffic signs and lane boundaries together in the world coordinate frame. We have also made our code base open source so that the research community can adapt or use our algorithm for their own purposes.
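
    One ingredient mentioned above, selecting traffic-sign candidates from highly reflective LiDAR returns and registering them in the world frame, could look roughly like the sketch below; the threshold, pose format, and function name are illustrative and are not taken from the released code base.

```python
# Traffic-sign candidate selection by LiDAR reflectivity plus world-frame registration (sketch).
import numpy as np

def sign_candidates_world(points, intensity, pose_R, pose_t, intensity_thresh=0.8):
    """points: (N, 3) LiDAR points; intensity in [0, 1]; pose_R, pose_t: LiDAR-to-world transform."""
    mask = intensity > intensity_thresh              # retro-reflective sign faces return strongly
    return points[mask] @ pose_R.T + pose_t          # rotate and translate candidates into the world frame

rng = np.random.default_rng(2)
pts = rng.uniform(-10.0, 10.0, (1000, 3))
inten = rng.random(1000)
R = np.eye(3)                                        # identity rotation for this example
t = np.array([100.0, 200.0, 0.0])                    # world offset from GPS/IMU
print(sign_candidates_world(pts, inten, R, t).shape)
```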

    Autonomes Fahren – ein Top-Down-Ansatz

    This paper presents a functional system architecture for an “autonomous vehicle” in the sense of a modular building-block system. It is developed in a top-down approach based on the definition of the functional requirements for an autonomous vehicle and explicitly combines perception-based and localization-based approaches. Both the definition and the functional system architecture consider the aspects of operation by the human being, mission accomplishment, map data, localization, environmental perception and self-perception, as well as cooperation. The functional system architecture was developed in the context of the research project “Stadtpilot” at the Technische Universität Braunschweig.
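
    A toy sketch of the modular building-block idea, with purely illustrative module names rather than the Stadtpilot architecture, might look as follows: perception-based and localization-based sources both feed one environment model that the mission level then consumes.

```python
# Illustrative modular layout (hypothetical names, not the published architecture).
from dataclasses import dataclass, field

@dataclass
class EnvironmentModel:
    perceived_lanes: list = field(default_factory=list)   # filled by on-board perception
    map_lanes: list = field(default_factory=list)         # filled by map data + localization

class Perception:
    def update(self, env):
        env.perceived_lanes.append("lane boundary from camera")

class Localization:
    def update(self, env):
        env.map_lanes.append("lane from map at the current pose")

class MissionPlanner:
    def plan(self, env):
        return (f"plan using {len(env.perceived_lanes)} perceived "
                f"and {len(env.map_lanes)} map-based lanes")

env = EnvironmentModel()
for module in (Perception(), Localization()):
    module.update(env)                                     # both sources feed the shared model
print(MissionPlanner().plan(env))
```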