5,978 research outputs found

    Optimizing coverage of simulated driving scenarios for the autonomous vehicle

    Self-driving cars and advanced driver-assistance systems are perceived as a game-changer for the future of road transportation. However, they must be validated before industrialization: every component has to be tested intensively to mitigate potential failures and avoid problems on the road. To cover as many scenarios as possible, virtual simulations are used to complement real test driving and aid the validation process. This paper focuses on validating the command law in realistic virtual simulations. Its aim is to detect as many failures as possible while exploring the input search space of the scenarios. A key industrial constraint, however, is to launch as few simulations as possible in order to minimize the computing power required. A reduced model based on a random forest therefore helps decrease the number of simulations launched, guiding the algorithm toward faulty scenarios throughout the search space. The methodology is tested on a tracking-vehicle use case and produces highly effective results.
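
    The surrogate-assisted search described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the scenario parameters (speed, curvature, friction), the run_simulation stand-in, and the batch sizes are all hypothetical, with scikit-learn's RandomForestClassifier standing in for the reduced model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def run_simulation(scenario):
    # Hypothetical stand-in for an expensive driving simulation:
    # returns True when the command law fails for this scenario.
    speed, curvature, friction = scenario
    return speed * abs(curvature) / max(friction, 1e-3) > 8.0

def sample_scenarios(n):
    # Input search space: (speed, road curvature, road friction).
    return np.column_stack([
        rng.uniform(5, 40, n),        # speed [m/s]
        rng.uniform(-0.05, 0.05, n),  # curvature [1/m]
        rng.uniform(0.2, 1.0, n),     # friction coefficient
    ])

# 1) Small initial design: run the real simulator on a few scenarios.
X = sample_scenarios(50)
y = np.array([run_simulation(s) for s in X])

surrogate = RandomForestClassifier(n_estimators=200, random_state=0)
budget = 5  # additional batches of real simulations allowed

for _ in range(budget):
    surrogate.fit(X, y)
    # 2) Score a large pool of candidates with the cheap surrogate
    #    and only simulate those most likely to be faulty.
    pool = sample_scenarios(5000)
    proba = surrogate.predict_proba(pool)
    p_fail = proba[:, 1] if proba.shape[1] > 1 else np.zeros(len(pool))
    picked = pool[np.argsort(p_fail)[-20:]]
    new_y = np.array([run_simulation(s) for s in picked])
    X, y = np.vstack([X, picked]), np.concatenate([y, new_y])

print(f"faulty scenarios found: {int(y.sum())} using {len(y)} simulations")
```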

    Imitating Driver Behavior with Generative Adversarial Networks

    The ability to accurately predict and simulate human driving behavior is critical for the development of intelligent transportation systems. Traditional modeling methods have employed simple parametric models and behavioral cloning. This paper adopts a method for overcoming the problem of cascading errors inherent in prior approaches, resulting in realistic behavior that is robust to trajectory perturbations. We extend Generative Adversarial Imitation Learning to the training of recurrent policies, and we demonstrate that our model outperforms rule-based controllers and maximum likelihood models in realistic highway simulations. Our model reproduces emergent behaviors of human drivers, such as lane change rate, while maintaining realistic control over long time horizons.
    Comment: 8 pages, 6 figures
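
    As a rough illustration of the GAIL-style training the abstract refers to, the sketch below trains a discriminator to separate expert transitions from policy rollouts and derives the surrogate reward a policy would be optimized against. It is not the paper's code: the feature dimensions and the random tensors standing in for demonstrations and rollouts are hypothetical, and the on-policy update of the recurrent policy is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
obs_dim, act_dim = 4, 2  # toy stand-ins for driving features and controls

# Discriminator scores (state, action) pairs: expert-like vs. policy-generated.
disc = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def surrogate_reward(obs, act):
    # GAIL-style reward for the policy update: high when the discriminator
    # believes the transition came from the expert data.
    with torch.no_grad():
        logits = disc(torch.cat([obs, act], dim=-1))
    return -torch.log(1.0 - torch.sigmoid(logits) + 1e-8)

for step in range(200):
    # Random stand-ins for sampled expert demonstrations and policy rollouts.
    expert = torch.randn(128, obs_dim + act_dim) + 1.0
    policy = torch.randn(128, obs_dim + act_dim) - 1.0

    loss = bce(disc(expert), torch.ones(128, 1)) + bce(disc(policy), torch.zeros(128, 1))
    opt.zero_grad(); loss.backward(); opt.step()

# This reward would then drive an on-policy RL update (e.g. TRPO/PPO) of a
# recurrent policy, which is the part this sketch leaves out.
r = surrogate_reward(torch.randn(5, obs_dim), torch.randn(5, act_dim))
print(r.squeeze(-1))
```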

    Modeling and Design of Millimeter-Wave Networks for Highway Vehicular Communication

    Connected and autonomous vehicles will play a pivotal role in future Intelligent Transportation Systems (ITSs) and smart cities in general. High-speed and low-latency wireless communication links will allow municipalities to warn vehicles against safety hazards, as well as support cloud-driving solutions to drastically reduce traffic jams and air pollution. To achieve these goals, vehicles need to be equipped with a wide range of sensors generating and exchanging high-rate data streams. Recently, millimeter-wave (mmWave) techniques have been introduced as a means of fulfilling such high data rate requirements. In this paper, we model a highway communication network and characterize its fundamental link budget metrics. In particular, we consider a network where vehicles are served by mmWave Base Stations (BSs) deployed alongside the road. To evaluate our highway network, we develop a new theoretical model that accounts for a typical scenario where heavy vehicles (such as buses and lorries) in slow lanes obstruct the Line-of-Sight (LOS) paths of vehicles in fast lanes and hence act as blockages. Using tools from stochastic geometry, we derive approximations for the Signal-to-Interference-plus-Noise Ratio (SINR) outage probability, as well as the probability that a user achieves a target communication rate (rate coverage probability). Our analysis provides new design insights for mmWave highway communication networks. In the considered highway scenarios, we show that reducing the horizontal beamwidth from 90° to 30° yields only a minimal reduction in the SINR outage probability (at most 4·10⁻²). Also, unlike two-dimensional mmWave cellular networks, for small BS densities (namely, one BS every 500 m) it is still possible to achieve an SINR outage probability smaller than 0.2.
    Comment: Accepted for publication in IEEE Transactions on Vehicular Technology -- Connected Vehicles Series
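
    A crude Monte Carlo counterpart to this kind of stochastic-geometry analysis might look like the sketch below: roadside BSs drawn from a one-dimensional Poisson process, independent LOS/NLOS blockage per link, and an SINR outage estimate for a user at the road centre. All parameter values are illustrative, and the paper's directional beamforming gains and correlated blockage model are deliberately left out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not taken from the paper).
lam = 1 / 500.0          # BS density: one BS every 500 m on average
road_len = 10_000.0      # simulate a 10 km stretch, user at the centre
alpha_los, alpha_nlos = 2.0, 4.0
p_blocked = 0.4          # probability a link is blocked by a heavy vehicle
noise = 1e-10
tx_power = 1.0
sinr_threshold = 10 ** (0 / 10)   # 0 dB

def one_trial():
    n_bs = rng.poisson(lam * road_len)
    if n_bs == 0:
        return True  # no BS in range -> outage
    x = rng.uniform(-road_len / 2, road_len / 2, n_bs)  # BS positions along road
    d = np.abs(x) + 1.0                                  # distance to the user
    los = rng.random(n_bs) > p_blocked
    alpha = np.where(los, alpha_los, alpha_nlos)
    rx = tx_power * d ** (-alpha)
    serving = np.argmax(rx)                 # associate with strongest BS
    interference = rx.sum() - rx[serving]   # all other BSs interfere
    sinr = rx[serving] / (interference + noise)
    return sinr < sinr_threshold

trials = 20_000
outage = np.mean([one_trial() for _ in range(trials)])
print(f"estimated SINR outage probability: {outage:.3f}")
```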

    Parallel Multi-Hypothesis Algorithm for Criticality Estimation in Traffic and Collision Avoidance

    Due to the current developments towards autonomous driving and vehicle active safety, there is an increasing need for algorithms that can perform complex criticality predictions in real time. Being able to process multi-object traffic scenarios aids the implementation of a variety of automotive applications, such as driver assistance systems for collision prevention and mitigation as well as fall-back systems for autonomous vehicles. We present a fully model-based algorithm with a parallelizable architecture. The proposed algorithm can evaluate the criticality of complex, multi-modal (vehicles and pedestrians) traffic scenarios by simulating millions of trajectory combinations and detecting collisions between objects. The algorithm is able to estimate upcoming criticality at very early stages, demonstrating its potential for vehicle safety systems and autonomous driving applications. An implementation on an embedded system in a test vehicle demonstrates, in a prototypical manner, the compatibility of the algorithm with the hardware available in modern cars. For a complex traffic scenario with 11 dynamic objects, more than 86 million pose combinations are evaluated in 21 ms on the GPU of a Drive PX 2.
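
    The core enumeration step, checking every maneuver-hypothesis combination for collisions, can be sketched in a few lines of vectorized NumPy. The object counts, constant-velocity rollouts, and collision distance below are toy stand-ins for the GPU-parallel implementation described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sketch: enumerate maneuver hypotheses for each object, roll out
# constant-velocity trajectories, and flag combinations that collide.
n_objects, n_hyp, horizon, dt = 4, 5, 30, 0.1
collision_dist = 2.0  # metres

# Random initial states and per-hypothesis velocities (stand-ins for
# sampled maneuvers such as braking, lane change, turning).
pos0 = rng.uniform(-20, 20, (n_objects, 2))
vel = rng.uniform(-10, 10, (n_objects, n_hyp, 2))

# traj[i, h, t, :] = position of object i under hypothesis h at time step t
t = np.arange(1, horizon + 1)[:, None] * dt
traj = pos0[:, None, None, :] + vel[:, :, None, :] * t[None, None, :, :]

# Evaluate every hypothesis combination for every object pair.
critical, total = 0, 0
for i in range(n_objects):
    for j in range(i + 1, n_objects):
        # pairwise distances over time: shape (n_hyp, n_hyp, horizon)
        diff = traj[i][:, None, :, :] - traj[j][None, :, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        collides = (dist < collision_dist).any(axis=-1)
        critical += collides.sum()
        total += collides.size

print(f"colliding hypothesis pairs: {critical} / {total}")
```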

    Intelligent coverage path planning for agricultural robots and autonomous machines on three-dimensional terrain


    Perception architecture exploration for automotive cyber-physical systems

    In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents caused by human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors, which makes selecting a suitable object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy, low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets fall inside the field of view of each sensor or the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false-positive detections will be high, and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road. Position and velocity estimation using sensor fusion has a lower margin for error when the trajectories of other vehicles are in the vicinity of the ego vehicle, as incorrect measurements can cause accidents. Due to the complex inter-dependencies between design decisions, constraints, and optimization goals, a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial.
    We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of global co-optimization of deep learning and sensing infrastructure. The framework can explore the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework can obtain optimal sensor configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA. This framework enables robust perception for vehicles with ADAS, requiring solutions to multiple complex problems related not only to the selection and placement of sensors but also to object detection and sensor fusion. Experimental results with the Audi TT and BMW Minicooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
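
    The sensor-placement part of this design space can be illustrated with a small greedy coverage search. The candidate mountings, fields of view, ranges, and sample points below are hypothetical, and the framework described above (VESPA/PASTA) co-optimizes the object detectors and fusion stages as well, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(3)

def covered(points, sensor):
    # Which sample points fall inside one sensor's field of view and range?
    x, y, yaw, fov, range_max = sensor
    d = points - np.array([x, y])
    dist = np.linalg.norm(d, axis=1)
    ang = np.abs((np.arctan2(d[:, 1], d[:, 0]) - yaw + np.pi) % (2 * np.pi) - np.pi)
    return (dist <= range_max) & (ang <= fov / 2)

# Points of interest around the vehicle and hypothetical candidate mountings:
# (x, y, yaw, horizontal FOV, max range), all in vehicle coordinates.
points = rng.uniform(-40, 40, (2000, 2))
candidates = [
    (2.0, 0.0, 0.0, np.deg2rad(60), 60.0),            # front camera
    (-2.0, 0.0, np.pi, np.deg2rad(60), 40.0),         # rear camera
    (0.0, 0.9, np.pi / 2, np.deg2rad(150), 30.0),     # left radar
    (0.0, -0.9, -np.pi / 2, np.deg2rad(150), 30.0),   # right radar
    (0.0, 0.0, 0.0, np.deg2rad(360), 25.0),           # roof lidar
]

# Greedily pick the configuration that adds the most newly covered points.
budget, chosen, mask = 3, [], np.zeros(len(points), dtype=bool)
for _ in range(budget):
    gains = [np.sum(covered(points, c) & ~mask) for c in candidates]
    best = int(np.argmax(gains))
    chosen.append(best)
    mask |= covered(points, candidates[best])

print(f"chosen candidate sensors: {chosen}, coverage: {mask.mean():.1%}")
```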

    Adv3D: Generating Safety-Critical 3D Objects through Closed-Loop Simulation

    Self-driving vehicles (SDVs) must be rigorously tested on a wide range of scenarios to ensure safe deployment. The industry typically relies on closed-loop simulation to evaluate how the SDV interacts in a corpus of synthetic and real scenarios and to verify that it performs properly. However, such tests primarily exercise only the system's motion planning module and consider only behavior variations. It is key to evaluate the full autonomy system in closed loop, and to understand how variations in sensor data based on scene appearance, such as the shape of actors, affect system performance. In this paper, we propose a framework, Adv3D, that takes real-world scenarios and performs closed-loop sensor simulation to evaluate autonomy performance, and finds vehicle shapes that make the scenario more challenging, resulting in autonomy failures and uncomfortable SDV maneuvers. Unlike prior works that add contrived adversarial shapes to vehicle rooftops or roadsides to harm perception only, we optimize a low-dimensional shape representation to modify the vehicle shape itself in a realistic manner to degrade autonomy performance (e.g., perception, prediction, and motion planning). Moreover, we find that the shape variations found with Adv3D optimized in closed loop are much more effective than those found in open loop, demonstrating the importance of finding scene appearance variations that affect autonomy in the interactive setting.
    Comment: CoRL 2023. Project page: https://waabi.ai/adv3d
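
    The closed-loop shape search can be caricatured as black-box optimization over a low-dimensional latent. The stubbed closed_loop_score and the plain random search below are hypothetical stand-ins for the sensor simulation and the query-based optimizer such a system would actually use.

```python
import numpy as np

rng = np.random.default_rng(4)

def closed_loop_score(shape_latent):
    # Hypothetical stand-in for running the full autonomy stack in closed-loop
    # sensor simulation with the modified actor shape; higher is better.
    target = np.array([0.8, -0.3, 0.5, 0.1])
    return float(np.exp(-np.sum((shape_latent - target) ** 2)))

# Black-box random search over the shape latent: perturb, re-evaluate,
# and keep the shape that most degrades the autonomy score.
dim, iters, sigma = 4, 200, 0.3
best_z = np.zeros(dim)
best_score = closed_loop_score(best_z)

for _ in range(iters):
    z = np.clip(best_z + sigma * rng.standard_normal(dim), -1.0, 1.0)
    score = closed_loop_score(z)
    if score < best_score:          # adversarial: minimise the autonomy score
        best_z, best_score = z, score

print(f"most adversarial latent: {np.round(best_z, 2)}, score: {best_score:.3f}")
```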

    Enabling Robots to Communicate their Objectives

    The overarching goal of this work is to efficiently enable end-users to correctly anticipate a robot's behavior in novel situations. Since a robot's behavior is often a direct result of its underlying objective function, our insight is that end-users need to have an accurate mental model of this objective function in order to understand and predict what the robot will do. While people naturally develop such a mental model over time through observing the robot act, this familiarization process may be lengthy. Our approach reduces this time by having the robot model how people infer objectives from observed behavior, and then selecting those behaviors that are maximally informative. The problem of computing a posterior over objectives from observed behavior is known as Inverse Reinforcement Learning (IRL) and has been applied to robots learning human objectives. We consider the problem where the roles of human and robot are swapped. Our main contribution is to recognize that, unlike robots, humans will not be exact in their IRL inference. We thus introduce two factors to define candidate approximate-inference models for human learning in this setting, and analyze them in a user study in the autonomous driving domain. We show that certain approximate-inference models lead to the robot generating example behaviors that better enable users to anticipate what it will do in novel situations. Our results also suggest, however, that additional research is needed in modeling how humans extrapolate from examples of robot behavior.
    Comment: RSS 201
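
    A minimal sketch of picking a maximally informative example behavior, assuming a Boltzmann-rational (exact-Bayesian) observer over hypothetical behavior features. Note that the work summarized above is precisely about replacing this exact observer with approximate-inference models of human learning, which this sketch does not include.

```python
import numpy as np

rng = np.random.default_rng(5)

# The robot knows its true objective weights and models the human as a
# Boltzmann-rational IRL observer. It picks the example behavior that
# maximises the posterior the observer would place on the true objective.
n_candidates, n_behaviors, n_features, beta = 50, 20, 3, 5.0

true_w = np.array([0.7, -0.5, 0.2])
candidate_w = rng.standard_normal((n_candidates, n_features))
candidate_w = np.vstack([candidate_w, true_w])        # include the true objective
phi = rng.standard_normal((n_behaviors, n_features))  # behavior feature vectors

prior = np.full(len(candidate_w), 1.0 / len(candidate_w))

def posterior_on_truth(b):
    # Observer update: P(w | b) ∝ P(w) * exp(beta * w·phi[b]) / Z(w)
    utilities = candidate_w @ phi.T                    # (n_objectives, n_behaviors)
    lik = np.exp(beta * utilities[:, b]) / np.exp(beta * utilities).sum(axis=1)
    post = prior * lik
    post /= post.sum()
    return post[-1]   # posterior mass on the true objective (last row)

best_b = max(range(n_behaviors), key=posterior_on_truth)
print(f"most informative behavior: {best_b}, "
      f"posterior on true objective: {posterior_on_truth(best_b):.3f}")
```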