Adversarial Objects Against LiDAR-Based Autonomous Driving Systems
Deep neural networks (DNNs) are found to be vulnerable against adversarial
examples, which are carefully crafted inputs with a small magnitude of
perturbation aiming to induce arbitrarily incorrect predictions. Recent studies
show that adversarial examples can pose a threat to real-world
security-critical applications: a "physical adversarial Stop Sign" can be
synthesized such that autonomous cars misrecognize it as a different sign
(e.g., a speed limit sign). However, these image-space adversarial
examples cannot easily alter 3D scans of widely equipped LiDAR or radar on
autonomous vehicles. In this paper, we reveal the potential vulnerabilities of
LiDAR-based autonomous driving detection systems by proposing LiDAR-Adv, an
optimization-based approach that generates adversarial objects able to evade the
LiDAR-based detection system under various conditions. We first show the
vulnerabilities using a blackbox evolution-based algorithm, and then explore
how much a strong adversary can do, using our gradient-based approach
LiDAR-Adv. We test the generated adversarial objects on the Baidu Apollo
autonomous driving platform and show that such physical systems are indeed
vulnerable to the proposed attacks. We also 3D-print our adversarial objects
and perform physical experiments to illustrate that such vulnerability exists
in the real world. Please find more visualizations and results on the anonymous
website: https://sites.google.com/view/lidar-adv
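For intuition only, the optimization-based attack idea can be caricatured as a toy sketch (our assumption throughout, not the paper's pipeline): perturb a point cloud within a small L-infinity budget to lower a stand-in detector score, using finite-difference gradients in place of true model gradients. The `detection_score` function and all parameters are invented for illustration.

```python
import numpy as np

def detection_score(points):
    # Toy stand-in for a detector's objectness score: higher when the
    # points form a tight cluster (i.e., an easily detected object).
    return float(np.exp(-np.var(points)))

def lidar_adv_sketch(points, eps=0.1, steps=50, lr=0.5):
    """Lower the detection score by perturbing each coordinate within an
    L-infinity budget eps, using finite-difference gradient descent."""
    adv = points.copy()
    h = 1e-4
    for _ in range(steps):
        grad = np.zeros_like(adv)
        flat, g = adv.ravel(), grad.ravel()
        for i in range(flat.size):
            orig = flat[i]
            flat[i] = orig + h
            up = detection_score(adv)
            flat[i] = orig - h
            down = detection_score(adv)
            flat[i] = orig
            g[i] = (up - down) / (2 * h)
        adv = adv - lr * grad                              # descend the score
        adv = np.clip(adv, points - eps, points + eps)     # respect the budget
    return adv

rng = np.random.default_rng(0)
pts = rng.normal(scale=0.1, size=(20, 3))
adv = lidar_adv_sketch(pts)
```

A real attack would additionally enforce that the perturbed shape is 3D-printable and that the point cloud is re-rendered through a LiDAR simulator, which this sketch omits.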
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward
Connected and autonomous vehicles (CAVs) will form the backbone of future
next-generation intelligent transportation systems (ITS) providing travel
comfort and road safety, along with a number of value-added services. Such a
transformation---which will be fuelled by concomitant advances in technologies
for machine learning (ML) and wireless communications---will enable a future
vehicular ecosystem that is better featured and more efficient. However, there
are lurking security problems related to the use of ML in such a critical
setting where an incorrect ML decision may not only be a nuisance but can lead
to loss of precious lives. In this paper, we present an in-depth overview of
the various challenges associated with the application of ML in vehicular
networks. In addition, we formulate the ML pipeline of CAVs and present various
potential security issues associated with the adoption of ML methods. In
particular, we focus on the perspective of adversarial ML attacks on CAVs and
outline a solution to defend against adversarial attacks in multiple settings.
Learning 2D to 3D Lifting for Object Detection in 3D for Autonomous Vehicles
We address the problem of 3D object detection from 2D monocular images in
autonomous driving scenarios. We propose to lift the 2D images to 3D
representations using learned neural networks and leverage existing networks
working directly on 3D data to perform 3D object detection and localization. We
show that, with a carefully designed training mechanism and automatically
selected, minimally noisy data, such a method is not only feasible but achieves
better results than many methods working on actual 3D inputs acquired from
physical sensors. On the challenging KITTI benchmark, we show that our 2D to 3D
lifted method outperforms many recent competitive 3D networks while
significantly outperforming previous state-of-the-art for 3D detection from
monocular images. We also show that a late fusion of the output of the network
trained on generated 3D images, with that trained on real 3D images, improves
performance. We find the results compelling and argue that such a method could
serve as a highly reliable backup in case of malfunction of expensive 3D
sensors, if not make them redundant altogether, at least in low-injury-risk
autonomous navigation scenarios such as warehouse automation.
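The late fusion mentioned above can take many forms; one simple hedged sketch (the function name, weights, and scores below are our assumptions, not the paper's method) is a per-candidate weighted average of the confidence scores produced by the two detectors:

```python
def late_fusion(scores_lifted, scores_real, w=0.5):
    """Fuse per-candidate confidence scores from the detector trained on
    lifted (generated) 3D data and the one trained on real 3D data."""
    return [w * a + (1.0 - w) * b for a, b in zip(scores_lifted, scores_real)]

# Hypothetical per-candidate confidences from the two detectors:
fused = late_fusion([0.9, 0.2, 0.6], [0.7, 0.4, 0.8], w=0.5)
```

In practice, late fusion of detectors would also match candidate boxes between the two networks (e.g., by IoU) before averaging their scores.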
WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving
Fisheye cameras are commonly employed for obtaining a large field of view in
surveillance, augmented reality and in particular automotive applications. In
spite of their prevalence, there are few public datasets for detailed
evaluation of computer vision algorithms on fisheye images. We release the
first extensive fisheye automotive dataset, WoodScape, named after Robert Wood
who invented the fisheye camera in 1906. WoodScape comprises four surround
view cameras and nine tasks including segmentation, depth estimation, 3D
bounding box detection and soiling detection. Semantic annotation of 40 classes
at the instance level is provided for over 10,000 images, and annotations for
the other tasks are provided for over 100,000 images. With WoodScape, we would like
to encourage the community to adapt computer vision models for fisheye cameras
instead of using naive rectification.
Comment: Accepted for Oral Presentation at IEEE International Conference on
Computer Vision (ICCV) 2019. Please refer to our website
https://woodscape.valeo.com and https://github.com/valeoai/woodscape for
release status and updates.
Gated2Depth: Real-time Dense Lidar from Gated Images
We present an imaging framework which converts three images from a gated
camera into high-resolution depth maps with depth accuracy comparable to pulsed
lidar measurements. Existing scanning lidar systems achieve low spatial
resolution at large ranges due to mechanically-limited angular sampling rates,
restricting scene understanding tasks to close-range clusters with dense
sampling. Moreover, today's pulsed lidar scanners suffer from high cost, power
consumption, large form-factors, and they fail in the presence of strong
backscatter. We depart from point scanning and demonstrate that it is possible
to turn a low-cost CMOS gated imager into a dense depth camera with at least
80m range - by learning depth from three gated images. The proposed
architecture exploits semantic context across gated slices, and is trained on a
synthetic discriminator loss without the need of dense depth labels. The
proposed replacement for scanning lidar systems is real-time, handles
back-scatter and provides dense depth at long ranges. We validate our approach
in simulation and on real-world data acquired over 4,000km driving in northern
Europe. Data and code are available at https://github.com/gruberto/Gated2Depth.
Comment: ICCV 2019 (oral). Authorship changed due to ICCV policy.
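To build intuition for how depth can be recovered from range-gated slices at all (this is a crude analytical baseline of our own, not the paper's learned architecture, and the gate range centers are invented), one can estimate per-pixel depth as the intensity-weighted centroid of the gates' range centers:

```python
import numpy as np

GATE_CENTERS = np.array([20.0, 45.0, 70.0])  # metres; hypothetical range slices

def centroid_depth(gated):
    """gated: (3, H, W) gated intensity slices -> (H, W) depth estimate.
    Each gate integrates returns from one range slice; weighting the slice
    centers by relative intensity gives a coarse per-pixel depth."""
    weights = gated / (gated.sum(axis=0, keepdims=True) + 1e-8)
    return np.tensordot(GATE_CENTERS, weights, axes=1)

# A pixel lit only in the middle gate should land at that gate's center:
gated = np.zeros((3, 2, 2))
gated[1] = 1.0
depth = centroid_depth(gated)
```

The paper's contribution is precisely that a learned network, exploiting semantic context across slices, far exceeds what such a per-pixel analytical rule can achieve.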
Rearchitecting Classification Frameworks For Increased Robustness
While generalizing well over natural inputs, neural networks are vulnerable
to adversarial inputs. Existing defenses against adversarial inputs have
largely been detached from the real world. These defenses also come at a cost
to accuracy. Fortunately, an object possesses invariances that constitute its
salient features; breaking them necessarily changes the perception of the
object. We find that applying these invariances to the classification task
makes robustness and accuracy feasible together. Two questions follow: how do
we extract and model these invariances, and how do we design a classification
paradigm that leverages them to improve the robustness-accuracy trade-off? The
remainder of the paper discusses solutions to the aforementioned questions.
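One hedged way to read such an invariance-leveraging paradigm (our own minimal sketch, not the paper's construction) is to accept a label only when it is stable under transforms that should preserve the object's invariances, abstaining otherwise:

```python
def invariant_predict(classify, x, invariant_transforms):
    """Return the label if all invariance-preserving views of x agree;
    otherwise abstain (None) as a robustness signal."""
    labels = {classify(t(x)) for t in invariant_transforms}
    return labels.pop() if len(labels) == 1 else None

# Toy classifier: sign of the feature sum; invariant to feature permutation.
classify = lambda v: int(sum(v) > 0)
transforms = [lambda v: v, lambda v: list(reversed(v))]
```

An adversarial input that breaks the invariance would make the views disagree, so the abstention itself acts as a detection mechanism.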
DARTS: Deceiving Autonomous Cars with Toxic Signs
Sign recognition is an integral part of autonomous cars. Any
misclassification of traffic signs can potentially lead to a multitude of
disastrous consequences, ranging from a life-threatening accident to even a
large-scale interruption of transportation services relying on autonomous cars.
In this paper, we propose and examine security attacks against sign recognition
systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed
attacks DARTS). In particular, we introduce two novel methods to create these
toxic signs. First, we propose Out-of-Distribution attacks, which expand the
scope of adversarial examples by enabling the adversary to generate them
starting from an arbitrary point in the image space, in contrast to prior
attacks, which are restricted to existing training/test data (In-Distribution). Second,
we present the Lenticular Printing attack, which relies on an optical
phenomenon to deceive the traffic sign recognition system. We extensively
evaluate the effectiveness of the proposed attacks in both virtual and
real-world settings and consider both white-box and black-box threat models.
Our results demonstrate that the proposed attacks are successful under both
settings and threat models. We further show that Out-of-Distribution attacks
can outperform In-Distribution attacks on classifiers defended using the
adversarial training defense, exposing a new attack vector for these defenses.
Comment: Submitted to ACM CCS 2018; extended version of [1801.02780] Rogue
Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos
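A toy rendering of the Out-of-Distribution idea (all names and the surrogate confidence function below are assumptions for illustration): start from a random point in input space rather than a training image, and black-box hill-climb toward high target-class confidence.

```python
import random

def ood_attack(confidence, target, dim, steps=2000, step=0.05, seed=0):
    """Hill-climb from a random starting point (not a dataset image)
    toward inputs the model assigns high confidence for `target`."""
    rng = random.Random(seed)
    x = [rng.uniform(0.0, 1.0) for _ in range(dim)]
    best = confidence(x, target)
    for _ in range(steps):
        cand = [min(1.0, max(0.0, v + rng.uniform(-step, step))) for v in x]
        c = confidence(cand, target)
        if c > best:
            x, best = cand, c
    return x, best

# Toy surrogate classifier: confidence peaks when every pixel is near 0.9.
conf = lambda v, t: 1.0 - sum(abs(u - 0.9) for u in v) / len(v)
x_adv, c_adv = ood_attack(conf, target=1, dim=8)
```

The key contrast with In-Distribution attacks is the starting point: nothing constrains `x` to resemble any training or test sample.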
Randomized Adversarial Imitation Learning for Autonomous Driving
With the evolution of various advanced driver assistance system (ADAS)
platforms, the design of autonomous driving system is becoming more complex and
safety-critical. The autonomous driving system activates multiple ADAS
functions simultaneously, so it is essential to coordinate them. This paper
proposes a randomized adversarial imitation learning (RAIL) method that
imitates the coordination of an autonomous vehicle equipped with advanced
sensors. The RAIL policies are trained through derivative-free optimization for
the decision maker that coordinates the appropriate ADAS functions, e.g., smart
cruise control and the lane keeping system. Notably, the proposed method can
also handle LiDAR data and make decisions on complex multi-lane highways and in
multi-agent environments.
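Derivative-free policy optimization of the kind described can be sketched, under toy assumptions (the stand-in reward and Gaussian parameter perturbations are ours), as a simple (1+1)-style random search over policy parameters:

```python
import random

def random_search(reward, dim, iters=500, sigma=0.1, seed=1):
    """Derivative-free policy search: perturb the parameter vector with
    Gaussian noise and keep the candidate only if it raises the reward."""
    rng = random.Random(seed)
    theta = [0.0] * dim
    best = reward(theta)
    for _ in range(iters):
        cand = [t + rng.gauss(0.0, sigma) for t in theta]
        r = reward(cand)
        if r > best:
            theta, best = cand, r
    return theta, best

# Stand-in reward: negative squared distance to a "good" coordination policy.
reward = lambda th: -sum((t - 0.5) ** 2 for t in th)
theta, best = random_search(reward, dim=3)
```

The appeal in this setting is that the reward (e.g., imitation fidelity against expert coordination traces) need not be differentiable, so the optimizer only requires black-box evaluations.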
Vision-based Navigation of Autonomous Vehicle in Roadway Environments with Unexpected Hazards
Vision-based navigation of autonomous vehicles primarily depends on the Deep
Neural Network (DNN) based systems in which the controller obtains input from
sensors/detectors, such as cameras, and produces a vehicle control output, such
as a steering wheel angle, to navigate the vehicle safely in a roadway traffic
environment. Typically, these DNN-based systems of the autonomous vehicle are
trained through supervised learning; however, recent studies show that a
trained DNN-based system can be compromised by perturbation or adversarial
inputs. Similarly, this perturbation can be introduced into the DNN-based
systems of autonomous vehicle by unexpected roadway hazards, such as debris and
roadblocks. In this study, we first introduce a roadway hazardous environment
(both intentional and unintentional roadway hazards) that can compromise the
DNN-based navigational system of an autonomous vehicle and produce an
incorrect steering wheel angle, which can cause crashes resulting in fatalities
and injuries. Then, we develop a DNN-based autonomous vehicle driving system
using object detection and semantic segmentation to mitigate the adverse effect
of this type of hazardous environment, which helps the autonomous vehicle to
navigate safely around such hazards. We find that our developed DNN-based
autonomous vehicle driving system including hazardous object detection and
semantic segmentation improves the navigational ability of an autonomous
vehicle to avoid a potential hazard by 21% compared to the traditional
DNN-based autonomous vehicle driving system.
Comment: 17 pages, 12 images
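The mitigation logic can be caricatured (purely our toy, not the paper's system; every name and threshold here is invented) as biasing the steering command away from a segmented hazard only when it overlaps the planned path:

```python
def safe_steering(base_angle, hazard_pixels, path_cols, image_width, bias=0.3):
    """hazard_pixels: (row, col) pixels labeled hazardous by segmentation.
    If the hazard overlaps the planned path columns, steer away from the
    hazard's horizontal centroid; otherwise keep the base command."""
    cols = [c for _, c in hazard_pixels]
    if not cols or not any(c in path_cols for c in cols):
        return base_angle
    centroid = sum(cols) / len(cols)
    return base_angle + (bias if centroid < image_width / 2 else -bias)

# Hazard on the left half of a 100-px-wide frame, overlapping the path:
angle = safe_steering(0.0, [(10, 20), (11, 22)],
                      path_cols=range(15, 85), image_width=100)
```

The point of the sketch is the architecture: the hazard detector and segmentation network supply an explicit mask, so the steering controller can react to hazards the end-to-end DNN alone would mishandle.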
Yes, we GAN: Applying Adversarial Techniques for Autonomous Driving
Generative Adversarial Networks (GANs) have gained great popularity since
their introduction in 2014. Research on GANs is growing rapidly, and there are
many variants of the original GAN focusing on various aspects of deep
learning. GANs are perceived as the most impactful direction of machine
learning in the last decade. This paper focuses on the application of GANs in
autonomous driving, including topics such as advanced data augmentation, loss function
learning, semi-supervised learning, etc. We formalize and review key
applications of adversarial techniques and discuss challenges and open problems
to be addressed.
Comment: Accepted for publication in Electronic Imaging, Autonomous Vehicles
and Machines 2019. arXiv admin note: text overlap with arXiv:1606.05908 by
other authors
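For reference, the adversarial techniques surveyed all build on the original GAN minimax objective (Goodfellow et al., 2014), in which a generator G and discriminator D play:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
```

The GAN variants discussed for autonomous driving (data augmentation, loss-function learning, semi-supervised learning) modify either the losses in this objective or the data distributions the two players see.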