SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud
In this paper, we address semantic segmentation of road-objects from 3D LiDAR
point clouds. In particular, we wish to detect and categorize instances of
interest, such as cars, pedestrians and cyclists. We formulate this problem as
a point-wise classification problem, and propose an end-to-end pipeline called
SqueezeSeg based on convolutional neural networks (CNN): the CNN takes a
transformed LiDAR point cloud as input and directly outputs a point-wise label
map, which is then refined by a conditional random field (CRF) implemented as a
recurrent layer. Instance-level labels are then obtained by conventional
clustering algorithms. Our CNN model is trained on LiDAR point clouds from the
KITTI dataset, and our point-wise segmentation labels are derived from 3D
bounding boxes from KITTI. To obtain extra training data, we built a LiDAR
simulator into Grand Theft Auto V (GTA-V), a popular video game, to synthesize
large amounts of realistic training data. Our experiments show that SqueezeSeg
achieves high accuracy with astonishingly fast and stable runtime (8.7 ms per
frame), highly desirable for autonomous driving applications. Furthermore,
additionally training on synthesized data boosts validation accuracy on
real-world data. Our source code and synthesized data will be open-sourced.
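The abstract mentions that the CNN takes a "transformed LiDAR point cloud" as input. A common form of this transform, used by SqueezeSeg, is a spherical projection of the unordered point cloud onto a dense 2D grid that a CNN can consume. The sketch below illustrates the idea; the grid resolution (64x512) and vertical field of view (+3 deg to -25 deg, roughly a Velodyne HDL-64E) are illustrative assumptions, not values taken from the abstract.

```python
import numpy as np

def spherical_project(points, h=64, w=512,
                      fov_up=np.radians(3.0), fov_down=np.radians(-25.0)):
    """Project an (N, 3) point cloud onto an (h, w, 4) grid of
    per-pixel (x, y, z, range) features via spherical coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)           # range of each point
    yaw = np.arctan2(y, x)                       # horizontal angle
    pitch = np.arcsin(z / r)                     # vertical angle
    # map yaw in [-pi, pi) to a column index, pitch to a row index
    u = ((yaw / np.pi + 1.0) / 2.0 * w).astype(int) % w
    v = np.clip((fov_up - pitch) / (fov_up - fov_down) * h, 0, h - 1).astype(int)
    grid = np.zeros((h, w, 4), dtype=np.float32)
    grid[v, u, :3] = points                      # channels 0-2: x, y, z
    grid[v, u, 3] = r                            # channel 3: range
    return grid
```

The resulting grid can be fed to an ordinary 2D CNN, which is what makes the point-wise label map formulation tractable in real time.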
SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications
One major factor impeding more widespread adoption of deep neural networks
(DNNs) is their lack of robustness, which is essential for safety-critical
applications such as autonomous driving. This has motivated much recent work on
adversarial attacks for DNNs, which mostly focus on pixel-level perturbations
void of semantic meaning. In contrast, we present a general framework for
adversarial attacks on trained agents, which covers semantic perturbations to
the environment of the agent performing the task as well as pixel-level
attacks. To do this, we re-frame the adversarial attack problem as learning a
distribution of parameters that always fools the agent. In the semantic case,
our proposed adversary (denoted as BBGAN) is trained to sample parameters that
describe the environment with which the black-box agent interacts, such that
the agent performs its dedicated task poorly in this environment. We apply
BBGAN on three different tasks, primarily targeting aspects of autonomous
navigation: object detection, self-driving, and autonomous UAV racing. On these
tasks, BBGAN can generate failure cases that consistently fool a trained agent.
Comment: Accepted at AAAI'2
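BBGAN itself trains a GAN-style sampler over environment parameters. As a minimal, hypothetical illustration of the underlying idea (searching for parameter settings under which a black-box agent performs poorly), the sketch below ranks uniformly sampled candidates by a stand-in agent score; the `agent_score` function and its "weak spot" are invented for illustration and are not part of the paper's method.

```python
import numpy as np

def agent_score(params):
    # Hypothetical stand-in for evaluating a black-box agent: the score
    # drops (agent fails) as the environment parameters approach an
    # assumed weak spot at (0.8, 0.2).
    weak_spot = np.array([0.8, 0.2])
    return float(np.linalg.norm(params - weak_spot))

def sample_adversarial_params(score_fn, n_samples=500, dim=2, k=10, seed=0):
    # Draw candidate environment parameters uniformly and keep the k
    # candidates that most degrade the agent (lowest score = worst run).
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(0.0, 1.0, size=(n_samples, dim))
    scores = np.array([score_fn(p) for p in candidates])
    return candidates[np.argsort(scores)[:k]]
```

Where this brute-force ranking only finds isolated failure cases, BBGAN's contribution is to learn a generator that samples from the whole distribution of such fooling parameters.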
Multimodal 3D Object Detection from Simulated Pretraining
The need for simulated data in autonomous driving applications has become
increasingly important, both for validation of pretrained models and for
training new models. In order for these models to generalize to real-world
applications, it is critical that the underlying dataset contains a variety of
driving scenarios and that simulated sensor readings closely mimic real-world
sensors. We present the Carla Automated Dataset Extraction Tool (CADET), a
novel tool for generating training data from the CARLA simulator to be used in
autonomous driving research. The tool is able to export high-quality,
synchronized LIDAR and camera data with object annotations, and offers
configuration to accurately reflect a real-life sensor array. Furthermore, we
use this tool to generate a dataset of 10,000 samples, which we use to train
the 3D object detection network AVOD-FPN, finetuning on the KITTI dataset to
evaluate the potential for effective pretraining. We also present two novel
LIDAR feature map
configurations in Bird's Eye View for use with AVOD-FPN that can be easily
modified. These configurations are tested on the KITTI and CADET datasets in
order to evaluate their performance as well as the usability of the simulated
dataset for pretraining. Although insufficient to fully replace the use of real
world data, and generally not able to exceed the performance of systems fully
trained on real data, our results indicate that simulated data can considerably
reduce the amount of training on real data required to achieve satisfactory
levels of accuracy.
Comment: 12 pages, part of proceedings for the NAIS 2019 symposium
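The abstract does not spell out its two Bird's Eye View feature map configurations, but AVOD-style BEV inputs are typically built by discretizing the point cloud into a top-down grid of height and density channels. The sketch below is a generic example of such an encoding; the ranges (0-70 m forward, +/-35 m sideways), 0.5 m resolution, and log-16 density normalization are illustrative assumptions, not the paper's configurations.

```python
import numpy as np

def bev_feature_map(points, x_range=(0.0, 70.0), y_range=(-35.0, 35.0),
                    res=0.5):
    """Rasterize an (N, 3) point cloud into a top-down grid with two
    channels: max height and log-normalized point density."""
    nx = int((x_range[1] - x_range[0]) / res)   # cells along forward axis
    ny = int((y_range[1] - y_range[0]) / res)   # cells along lateral axis
    fm = np.zeros((nx, ny, 2), dtype=np.float32)
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]
    ix = ((pts[:, 0] - x_range[0]) / res).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / res).astype(int)
    for xi, yi, z in zip(ix, iy, pts[:, 2]):
        fm[xi, yi, 0] = max(fm[xi, yi, 0], z)   # channel 0: max height
        fm[xi, yi, 1] += 1.0                    # channel 1: raw point count
    # normalize density to [0, 1], capping at 16 points per cell
    fm[:, :, 1] = np.minimum(1.0, np.log1p(fm[:, :, 1]) / np.log(16.0))
    return fm
```

Because the channel layout is just a stack of 2D maps, swapping in alternative configurations (more height slices, intensity channels, etc.) only changes the last dimension, which is presumably what makes the paper's configurations "easily modified".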