RITA: Boost Autonomous Driving Simulators with Realistic Interactive Traffic Flow
High-quality traffic flow generation is a core component in building
simulators for autonomous driving. However, the majority of available
simulators are incapable of replicating traffic patterns that accurately
reflect the various features of real-world data while also simulating
human-like reactive responses to the tested autopilot driving strategies.
As a step toward addressing this problem, we propose Realistic
Interactive TrAffic flow (RITA) as an integrated component of existing driving
simulators to provide high-quality traffic flow for the evaluation and
optimization of the tested driving strategies. RITA is developed with
consideration of three key features, i.e., fidelity, diversity, and
controllability, and consists of two core modules called RITABackend and
RITAKit. RITABackend is built to support vehicle-wise control and provide
traffic generation models from real-world datasets, while RITAKit is developed
with easy-to-use interfaces for controllable traffic generation via
RITABackend. We demonstrate RITA's capacity to create diversified and
high-fidelity traffic simulations in several highly interactive highway
scenarios. The experimental findings show that the traffic flows produced by RITA
exhibit all three key features, thereby improving the completeness of
driving strategy evaluation. Moreover, we showcase that baseline strategies can be
further improved through online fine-tuning with RITA traffic
flows. Comment: 8 pages, 5 figures, 3 tables
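RITAKit is described above only as an easy-to-use interface layer over RITABackend for controllable traffic generation; its actual API is not given in the abstract. The following minimal Python sketch is therefore purely illustrative of what such a configuration interface could look like, with every name (TrafficFlowConfig, build_traffic_flow, behavior_mix) assumed rather than taken from RITA.

from dataclasses import dataclass, field

@dataclass
class TrafficFlowConfig:
    # Hypothetical knobs mirroring RITA's three stated design goals.
    scenario: str = "highway_merge"          # a highly interactive highway scenario
    source_dataset: str = "real_world_logs"  # fidelity: generation models fit on real data
    behavior_mix: dict = field(default_factory=lambda: {"aggressive": 0.3,
                                                        "normal": 0.5,
                                                        "cautious": 0.2})  # diversity
    reactivity: float = 1.0                  # controllability: strength of reactive responses

def build_traffic_flow(config: TrafficFlowConfig) -> dict:
    # A real RITAKit would request vehicle-wise control models from RITABackend here;
    # this stub only packages the requested settings for a host simulator.
    return {"scenario": config.scenario,
            "behaviors": config.behavior_mix,
            "reactivity": config.reactivity}

flow = build_traffic_flow(TrafficFlowConfig())
print(flow)  # the resulting flow would be plugged into the simulator under evaluation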
From Model-Based to Data-Driven Simulation: Challenges and Trends in Autonomous Driving
Simulation is an integral part of the process of developing autonomous
vehicles and is advantageous for the training, validation, and verification of driving
functions. Even though simulations come with a series of benefits compared to
real-world experiments, various challenges still prevent virtual testing from
entirely replacing physical test-drives. Our work provides an overview of these
challenges with regard to different aspects and types of simulation and
summarizes current trends for overcoming them. We cover aspects of perception-,
behavior-, and content-realism, as well as general hurdles in the domain of
simulation. Among others, we observe a trend of data-driven, generative
approaches and high-fidelity data synthesis increasingly replacing model-based
simulation. Comment: Ferdinand Mütsch, Helen Gremmelmaier, and Nicolas Becker
contributed equally. Accepted for publication at the CVPR 2023 VCAD workshop.
Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection
Adversarial attacks in the physical world can harm the robustness of
detection models. Evaluating the robustness of detection models in the physical
world can be challenging due to the time-consuming and labor-intensive nature
of many experiments. Thus, virtual simulation experiments can provide a
solution. However, there is no unified detection benchmark
based on a virtual simulation environment. To address this gap, we propose
an instant-level data generation pipeline based on the CARLA simulator. Using
this pipeline, we generated the DCI dataset and conducted extensive experiments
on three detection models and three physical adversarial attacks. The dataset
covers 7 continuous scenes and 1 discrete scene, with over 40 angles, 20 distances,
and 20,000 positions. The results indicate that YOLO v6 showed the strongest
resistance, with only a 6.59% average AP drop, while ASA was the most effective
attack algorithm, with a 14.51% average AP reduction, twice that of the other
algorithms. Static scenes had higher recognition AP, and results under
different weather conditions were similar. Adversarial attack algorithm
improvement may be approaching its 'limitation'. Comment: CVPR 2023 workshop
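The DCI pipeline itself is not reproduced in the abstract, only its scale (roughly 40 angles, 20 distances, and 20,000 positions per scene). The sketch below, built on the public CARLA Python API, only illustrates how such a camera-pose sweep around a target vehicle could be scripted; the blueprint choice, step sizes, and output paths are assumptions rather than the authors' settings, and it presumes a CARLA server running on localhost:2000.

import math
import carla

client = carla.Client("localhost", 2000)  # assumes a running CARLA server
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

# Spawn one target vehicle at a map-provided spawn point.
vehicle_bp = bp_lib.find("vehicle.tesla.model3")
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

camera_bp = bp_lib.find("sensor.camera.rgb")
center = spawn_point.location

# Sweep viewing angles and distances around the vehicle, saving one frame per pose.
for yaw_deg in range(0, 360, 9):        # ~40 angles
    for dist in range(5, 25):           # 20 distances in meters
        rad = math.radians(yaw_deg)
        cam_loc = carla.Location(x=center.x + dist * math.cos(rad),
                                 y=center.y + dist * math.sin(rad),
                                 z=center.z + 1.5)
        cam_rot = carla.Rotation(yaw=yaw_deg + 180.0)  # face back toward the vehicle
        camera = world.spawn_actor(camera_bp, carla.Transform(cam_loc, cam_rot))
        camera.listen(lambda img, y=yaw_deg, d=dist:
                      img.save_to_disk("dci_out/yaw%03d_d%02d.png" % (y, d)))
        world.wait_for_tick()           # let at least one frame arrive (asynchronous sketch)
        camera.stop()
        camera.destroy()

vehicle.destroy()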
Sim2real and Digital Twins in Autonomous Driving: A Survey
Safety and cost are two important concerns for the development of autonomous
driving technologies. From academic research to commercial applications of
autonomous vehicles, sufficient simulation and real-world testing are
required. In general, large-scale testing is conducted in a simulation environment,
and the learned driving knowledge is then transferred to the real
world; how to adapt driving knowledge learned in simulation to reality
becomes a critical issue. However, the virtual simulation world differs from
the real world in many aspects, such as lighting, textures, vehicle dynamics,
and agents' behaviors, which makes it difficult to bridge the gap between
the virtual and real worlds. This gap is commonly referred to as the reality
gap (RG). In recent years, researchers have explored various approaches to
address the reality gap issue, which can be broadly classified into two
categories: transferring knowledge from simulation to reality (sim2real) and
learning in digital twins (DTs). In this paper, we consider solutions
based on sim2real and DT technologies, and review important applications
and innovations in the field of autonomous driving. Meanwhile, we present the
state of the art from the perspectives of algorithms, models, and simulators, and
elaborate on the development process from sim2real to DTs. The presentation also
illustrates the far-reaching effects of the development of sim2real and DTs in
autonomous driving.
V2XP-ASG: Generating Adversarial Scenes for Vehicle-to-Everything Perception
Recent advancements in Vehicle-to-Everything communication technology have
enabled autonomous vehicles to share sensory information to obtain better
perception performance. With the rapid growth of autonomous vehicles and
intelligent infrastructure, V2X perception systems will soon be deployed at
scale, which raises a safety-critical question: how can we evaluate and
improve their performance under challenging traffic scenarios before
real-world deployment? Collecting diverse, large-scale real-world test scenes
seems to be the most straightforward solution, but it is expensive and
time-consuming, and such collections can cover only a limited range of scenarios. To this
end, we propose V2XP-ASG, the first open adversarial scene generator, which can
produce realistic, challenging scenes for modern LiDAR-based multi-agent
perception systems. V2XP-ASG learns to construct an adversarial collaboration
graph and simultaneously perturb multiple agents' poses in an adversarial and
plausible manner. The experiments demonstrate that V2XP-ASG can effectively
identify challenging scenes for a wide range of V2X perception systems.
Meanwhile, by training on a limited number of generated challenging scenes,
the accuracy of V2X perception systems can be further improved by 12.3% on
challenging scenes and 4% on normal scenes. Our code will be released at
https://github.com/XHwind/V2XP-ASG. Comment: ICRA 2023, see https://github.com/XHwind/V2XP-ASG
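V2XP-ASG's learned adversarial collaboration graph and pose search are not detailed in the abstract, so the sketch below captures only the generic idea it alludes to: searching for small, bounded pose perturbations that degrade a perception score while keeping the scene plausible. The random-search loop and the evaluate_perception placeholder are illustrative assumptions, not the authors' method.

import numpy as np

def evaluate_perception(agent_poses: np.ndarray) -> float:
    # Placeholder for a real V2X detection stack: return a score (e.g., AP) for the scene.
    return float(np.exp(-np.linalg.norm(agent_poses)))  # dummy surrogate

def adversarial_pose_search(poses, max_shift=0.5, max_yaw=5.0, iters=200, seed=0):
    # Random search over bounded perturbations; keep the one that lowers the score most.
    rng = np.random.default_rng(seed)
    best_poses, best_score = poses.copy(), evaluate_perception(poses)
    for _ in range(iters):
        delta = np.zeros_like(poses)
        delta[:, :2] = rng.uniform(-max_shift, max_shift, size=(len(poses), 2))  # x, y shift
        delta[:, 2] = rng.uniform(-max_yaw, max_yaw, size=len(poses))            # yaw change
        candidate = poses + delta
        score = evaluate_perception(candidate)
        if score < best_score:        # a lower score means a harder (more adversarial) scene
            best_poses, best_score = candidate, score
    return best_poses, best_score

scene = np.array([[10.0, 2.0, 0.0], [25.0, -1.5, 180.0]])  # one [x, y, yaw] row per agent
adv_scene, adv_score = adversarial_pose_search(scene)
print(adv_score)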
- …