
    Testing Scenario Library Generation for Connected and Automated Vehicles, Part I: Methodology

    Testing and evaluation is a critical step in the development and deployment of connected and automated vehicles (CAVs), yet there is no systematic framework for generating a testing scenario library. This study provides a general framework for the testing scenario library generation (TSLG) problem across different operational design domains (ODDs), CAV models, and performance metrics. Given an ODD, the testing scenario library is defined as a critical set of scenarios that can be used for CAV testing. Each testing scenario is evaluated by a newly proposed measure, scenario criticality, which is computed as a combination of maneuver challenge and exposure frequency. To search for critical scenarios, an auxiliary objective function is designed, and a multi-start optimization method with seed-filling is applied. The proposed framework is theoretically proved to obtain accurate evaluation results with far fewer tests than the on-road test method. In Part II of this study, three case studies demonstrate the proposed methodology, and a reinforcement learning based technique is applied to enhance the search method in high-dimensional scenarios.
    Comment: 11 pages, 3 figures
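
    As an illustration of the search step described above, here is a minimal sketch: criticality is modeled as the product of maneuver challenge and exposure frequency, and a multi-start hill climber collects the local optima it finds as library candidates. The product form, the function names, and the hill-climbing details are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def criticality(scenario, challenge_fn, exposure_fn):
    # Illustrative assumption: criticality as the product of maneuver
    # challenge (how hard the scenario is for the CAV) and exposure
    # frequency (how often it occurs in the ODD).
    return challenge_fn(scenario) * exposure_fn(scenario)

def multi_start_search(objective, dim, bounds, n_starts=20, iters=200, step=0.05, seed=0):
    # Toy multi-start hill climbing: restart a local search from many
    # random seeds and keep each local optimum found, in the spirit of
    # seeding the library with critical scenarios.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    library = []
    for _ in range(n_starts):
        x = rng.uniform(lo, hi, size=dim)
        for _ in range(iters):
            cand = np.clip(x + rng.normal(scale=step, size=dim), lo, hi)
            if objective(cand) > objective(x):
                x = cand
        library.append(x)
    return library

# Example: a synthetic 2-D scenario space with one critical region.
lib = multi_start_search(lambda s: np.exp(-np.sum((s - 0.3) ** 2)), dim=2, bounds=(0.0, 1.0))
```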

    Dynamics of Driver's Gaze: Explorations in Behavior Modeling & Maneuver Prediction

    The study and modeling of driver's gaze dynamics is important because whether and how the driver is monitoring the driving environment is vital for driver assistance in manual mode, for take-over requests in highly automated mode, and for semantic perception of the surroundings in fully autonomous mode. We developed a machine vision based framework that classifies the driver's gaze into context-rich zones of interest and models gaze behavior by representing gaze dynamics over a time period using gaze accumulation, glance duration, and glance frequency. As a use case, we explore the driver's gaze dynamics during maneuvers executed in freeway driving, namely left lane changes, right lane changes, and lane keeping. It is shown that condensing gaze dynamics into durations and frequencies leads to recurring patterns based on driver activities. Furthermore, modeling these patterns shows predictive power for maneuver detection up to a few hundred milliseconds a priori.
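
    The three gaze descriptors named above are straightforward to compute from a frame-level sequence of gaze-zone labels. The sketch below is a plausible reconstruction, assuming a fixed 30 Hz frame rate and arbitrary zone names; it is not the authors' code.

```python
from itertools import groupby

def glance_stats(gaze_zones, dt=1 / 30):
    # Summarize a frame-level gaze-zone sequence (e.g. from a gaze-zone
    # classifier at an assumed 30 Hz) into the three descriptors used
    # above: gaze accumulation, glance duration, glance frequency.
    total_time = len(gaze_zones) * dt
    glances = [(zone, sum(1 for _ in run) * dt) for zone, run in groupby(gaze_zones)]
    stats = {}
    for zone in set(gaze_zones):
        durations = [d for z, d in glances if z == zone]
        stats[zone] = {
            "accumulation_s": sum(durations),                 # total time on zone
            "mean_glance_s": sum(durations) / len(durations), # avg glance length
            "frequency_hz": len(durations) / total_time,      # glances per second
        }
    return stats

# Example: 1.5 s on the road, a 0.3 s mirror glance, then back to the road.
print(glance_stats(["road"] * 45 + ["left_mirror"] * 9 + ["road"] * 30))
```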

    Testing Scenario Library Generation for Connected and Automated Vehicles, Part II: Case Studies

    Testing scenario library generation (TSLG) is a critical step in the development and deployment of connected and automated vehicles (CAVs). In Part I of this study, a general methodology for TSLG was proposed, and its theoretical properties regarding the accuracy and efficiency of CAV evaluation were investigated. This paper provides implementation examples and guidelines, and enhances the proposed methodology for high-dimensional scenarios. Three typical cases, including cut-in, highway-exit, and car-following, are designed and studied. For each case, the process of library generation and CAV evaluation is elaborated. To address the challenges brought by high dimensionality, the methodology is further enhanced with a reinforcement learning technique. For all three cases, results show that the proposed methods can accelerate the CAV evaluation process by multiple orders of magnitude with the same evaluation accuracy, compared with the on-road test method.
    Comment: 12 pages, 13 figures
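
    To make the reinforcement learning enhancement concrete, here is a toy tabular Q-learning search over a single discretized scenario parameter: actions nudge the parameter, and the reward is the criticality at the new point. In the paper's high-dimensional setting a function approximator would replace the table; the state space, action set, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def q_learning_scenario_search(reward_fn, n_states=50, n_actions=3,
                               episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    # Toy RL-driven search: states discretize one scenario parameter on
    # [0, 1]; actions move left / stay / right; reward is the scenario
    # criticality (here an arbitrary callable) at the resulting state.
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = int(rng.integers(n_states))
        for _ in range(50):
            a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
            s2 = int(np.clip(s + (a - 1), 0, n_states - 1))
            r = reward_fn(s2 / (n_states - 1))
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q  # high-value states indicate critical scenario parameters

# Example: criticality peaks at parameter value 0.7.
Q = q_learning_scenario_search(lambda p: np.exp(-50 * (p - 0.7) ** 2))
```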

    Predicting Lane Keeping Behavior of Visually Distracted Drivers Using Inverse Suboptimal Control

    Driver distraction strongly contributes to crash risk. Therefore, assistance systems that warn the driver when her distraction poses a hazard to road safety promise a great safety benefit. Current approaches either seek to detect critical situations using environmental sensors or estimate a driver's attention state solely from her behavior. However, this neglects that the driving situation, driver deficiencies, and compensation strategies together determine the risk of an accident. This work proposes to use inverse suboptimal control to predict these aspects in visually distracted lane keeping. In contrast to other approaches, this allows a situation-dependent assessment of the risk posed by distraction. Real traffic data from seven drivers are used to evaluate the predictive power of our approach. For comparison, a baseline was built using established behavior models. In the evaluation, our method achieves a consistently lower prediction error over speed and track-topology variations. Additionally, our approach generalizes better to driving speeds unseen in the training phase.
    Comment: 7 pages, 6 figures, accepted for the 2016 IEEE Intelligent Vehicles Symposium
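
    As a much-simplified stand-in for the inverse suboptimal control machinery, the sketch below fits driver-specific feedback gains from observed lane-keeping data by least squares. The linear steering law and the variable names are assumptions for illustration only; the paper infers a considerably richer model covering situation, deficiencies, and compensation strategies.

```python
import numpy as np

def fit_lane_keeping_gains(lateral_error, heading_error, steering):
    # Hypothetical simplification: assume the driver steers roughly as
    #   steering ~ -k1 * lateral_error - k2 * heading_error
    # and recover (k1, k2) per driver by least squares from logged data.
    X = -np.column_stack([lateral_error, heading_error])
    gains, *_ = np.linalg.lstsq(X, steering, rcond=None)
    return gains  # [k1, k2]; larger values mean tighter lane keeping
```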

    Deep Multi-Sensor Lane Detection

    Reliable and accurate lane detection has been a long-standing problem in the field of autonomous driving. In recent years, many approaches have been developed that use images (or videos) as input and reason in image space. In this paper we argue that accurate image estimates do not translate to precise 3D lane boundaries, which are the input required by modern motion planning algorithms. To address this issue, we propose a novel deep neural network that takes advantage of both LiDAR and camera sensors and produces very accurate estimates directly in 3D space. We demonstrate the performance of our approach on both highways and in cities, and show very accurate estimates in complex scenarios such as heavy traffic (which produces occlusion), forks, merges, and intersections.
    Comment: IEEE International Conference on Intelligent Robots and Systems (IROS) 2018
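
    The core idea, fusing camera and LiDAR features to predict lane boundaries in the ground plane, can be sketched as a minimal two-branch network. This is not the paper's architecture; the channel sizes, the shared bird's-eye-view (BEV) inputs, and the dense per-cell output are assumptions.

```python
import torch
import torch.nn as nn

class LidarCameraLaneNet(nn.Module):
    # Minimal fusion sketch: encode a camera raster and a LiDAR raster
    # separately, concatenate the features, and predict a per-cell
    # lane-boundary score in the ground plane. Assumes both inputs have
    # already been rasterized onto the same BEV grid.
    def __init__(self):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
        self.cam_enc = encoder(3)    # RGB channels
        self.lidar_enc = encoder(1)  # e.g. a single height channel
        self.head = nn.Conv2d(64, 1, 1)

    def forward(self, cam, lidar):
        fused = torch.cat([self.cam_enc(cam), self.lidar_enc(lidar)], dim=1)
        return self.head(fused)  # (B, 1, H/4, W/4) lane-boundary logits

# Example forward pass on a 160x160 BEV grid.
net = LidarCameraLaneNet()
out = net(torch.randn(1, 3, 160, 160), torch.randn(1, 1, 160, 160))
```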

    Is it Safe to Drive? An Overview of Factors, Challenges, and Datasets for Driveability Assessment in Autonomous Driving

    With recent advances in learning algorithms and hardware development, autonomous cars have shown promise when operating in structured environments under good driving conditions. However, in complex, cluttered, and unseen environments with high uncertainty, autonomous driving systems still frequently exhibit erroneous or unexpected behaviors that could lead to catastrophic outcomes. Autonomous vehicles should ideally adapt to driving conditions; while this can be achieved through multiple routes, a beneficial first step is to characterize driveability in some quantified form. To this end, this paper creates a framework for investigating the different factors that can impact driveability. Moreover, one of the main mechanisms for adapting autonomous driving systems to any driving condition is the ability to learn and generalize from representative scenarios. The machine learning algorithms that currently do so learn predominantly in a supervised manner and consequently need sufficient data for robust and efficient learning. Therefore, we also present a comparative overview of 45 public driving datasets that enable such learning and publish this dataset index at https://sites.google.com/view/driveability-survey-datasets. Specifically, we categorize the datasets according to use cases and highlight the datasets that capture complicated and hazardous driving conditions, which can be better used for training robust driving models. Furthermore, by discussing which driving scenarios are not covered by existing public datasets and which driveability factors need more investigation and data acquisition, this paper aims to encourage both targeted dataset collection and the proposal of novel driveability metrics that enhance the robustness of autonomous cars in adverse environments.

    Developing a Purely Visual Based Obstacle Detection using Inverse Perspective Mapping

    Our solution is implemented in and for the Duckietown framework. The goal of Duckietown is to provide a relatively simple platform to explore, tackle, and solve many problems linked to autonomous driving. Duckietown is simple in the basics but an infinitely expandable environment: from controlling a single driving Duckiebot to complete fleet management, every scenario is possible and can be put into practice. So far, none of the existing modules was capable of reliably detecting obstacles and reacting to them in real time. We faced the general problem of detecting obstacles in images from a monocular RGB camera mounted at the front of our Duckiebot and reacting to them properly without crashing or erroneously stopping the Duckiebot. Both the detection and the reaction have to be implemented and run on a Raspberry Pi in real time. Due to the strong hardware limitations, we decided not to use any learning algorithms for the obstacle detection part. As it later transpired, even a "hard-coded" solution requires thorough analysis and understanding of the given problem. In layman's terms, we simply seek to make Duckietown a safer place.
    Comment: Project report and analysis for the Duckietown Project (https://www.duckietown.org/). 17 pages and 38 figures
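
    The pipeline described above, inverse perspective mapping followed by color-based detection, might look like the following OpenCV sketch. The four calibration point pairs, the HSV bounds, and the contour-area threshold are placeholders that would come from the Duckiebot's camera calibration.

```python
import cv2
import numpy as np

def detect_obstacles_ipm(bgr, src_pts, dst_pts,
                         hsv_lo=(5, 100, 100), hsv_hi=(25, 255, 255)):
    # Warp the front-camera image to a bird's-eye view via a homography
    # (inverse perspective mapping), then threshold in HSV for brightly
    # colored obstacles. Assumes the OpenCV 4 findContours signature.
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    bev = cv2.warpPerspective(bgr, H, (400, 400))
    hsv = cv2.cvtColor(bev, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each remaining bounding box is an obstacle in ground-plane pixels.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
```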

    The AI Driving Olympics at NeurIPS 2018

    Despite recent breakthroughs, the ability of deep learning and reinforcement learning to outperform traditional approaches to controlling physically embodied robotic agents remains largely unproven. To help bridge this gap, we created the 'AI Driving Olympics' (AI-DO), a competition with the objective of evaluating the state of the art in machine learning and artificial intelligence for mobile robotics. Based on the simple and well-specified autonomous driving and navigation environment called 'Duckietown', AI-DO includes a series of tasks of increasing complexity, from simple lane following to fleet management. For each task, we provide tools for competitors in the form of simulators, logs, code templates, baseline implementations, and low-cost access to robotic hardware. We evaluate submissions in simulation online, on standardized hardware environments, and finally at the competition event. The first AI-DO, AI-DO 1, took place at the Neural Information Processing Systems (NeurIPS) conference in December 2018. The results of AI-DO 1 highlight the need for better benchmarks, which are lacking in robotics, as well as improved mechanisms to bridge the gap between simulation and reality.
    Comment: Competition, robotics, safety-critical AI, self-driving cars, autonomous mobility on demand, Duckietown

    Adaptive Beaconing Approaches for Vehicular ad hoc Networks: A Survey

    Vehicular communication requires vehicles to self-organize through the exchange of periodic beacons. Recent analyses of beaconing indicate that the current beaconing standards restrict the desired performance of vehicular applications. This situation can be attributed to the quality of the available transmission medium, persistent changes in the traffic situation, and the inability of the standards to cope with application requirements. To this end, this paper classifies existing adaptive beaconing approaches and evaluates their capabilities. We begin by exploring the anatomy and the performance requirements of beaconing. The beaconing design is then analyzed to introduce a design-based beaconing taxonomy. A survey of the state of the art is conducted with an emphasis on the salient features of the beaconing approaches. We also evaluate the capabilities of the beaconing approaches using several key parameters, and present a comparison based on their architectural and implementation characteristics. The paper concludes by discussing open challenges in the field.
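
    A toy example of the kind of rule the surveyed approaches implement: adapt the beacon interval to channel congestion and vehicle dynamics. The 10 Hz baseline matches common beaconing standards; the thresholds and the particular blending below are illustrative assumptions.

```python
def adapt_beacon_interval(channel_busy_ratio, speed_mps,
                          base_interval=0.1, min_i=0.1, max_i=1.0):
    # Lengthen the beacon interval when the channel is congested; shorten
    # it (toward the floor) when the vehicle is fast, since its state
    # becomes stale quickly. Returns the interval in seconds.
    congestion = min(max(channel_busy_ratio, 0.0), 1.0)
    urgency = min(speed_mps / 30.0, 1.0)  # normalized to ~highway speed
    interval = base_interval * (1 + 4 * congestion) * (1.5 - urgency)
    return min(max(interval, min_i), max_i)

# Example: congested channel, slow urban traffic -> sparser beacons.
print(adapt_beacon_interval(channel_busy_ratio=0.8, speed_mps=8))
```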

    An End-to-End System for Crowdsourced 3d Maps for Autonomous Vehicles: The Mapping Component

    Autonomous vehicles rely on precise high-definition (HD) 3D maps for navigation. This paper presents the mapping component of an end-to-end system for crowdsourcing precise 3D maps with semantically meaningful landmarks such as traffic signs (6-DoF pose, shape, and size) and traffic lanes (3D splines). The system uses consumer-grade parts and, in particular, relies on a single front-facing camera and a consumer-grade GPS. Using real-time sign and lane triangulation on-device in the vehicle, together with offline sign/lane clustering across multiple journeys and offline bundle adjustment across multiple journeys in the backend, we construct maps with a mean absolute accuracy at sign corners of less than 20 cm from 25 journeys. To the best of our knowledge, this is the first end-to-end HD mapping pipeline in global coordinates in the automotive context using cost-effective sensors.
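
    The sign-triangulation step lends itself to the standard linear (DLT) multi-view formulation: each journey contributes a camera projection matrix from the estimated pose plus a pixel observation of the sign corner. The sketch below shows that standard formulation, not the paper's exact solver.

```python
import numpy as np

def triangulate_sign_corner(observations):
    # observations: list of (P, (u, v)) pairs, where P is a 3x4 camera
    # projection matrix (from the journey's estimated pose) and (u, v)
    # is the sign corner's pixel location. Needs at least two views.
    A = []
    for P, (u, v) in observations:
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean 3D corner position
```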