Dynamic Occupancy Grid Prediction for Urban Autonomous Driving: A Deep Learning Approach with Fully Automatic Labeling
Long-term situation prediction plays a crucial role in the development of
intelligent vehicles. A major challenge still to overcome is the prediction of
complex downtown scenarios with multiple road users, e.g., pedestrians, bikes,
and motor vehicles, interacting with each other. This contribution tackles this
challenge by combining a Bayesian filtering technique for environment
representation, and machine learning as long-term predictor. More specifically,
a dynamic occupancy grid map is utilized as input to a deep convolutional
neural network. This yields the advantage of using spatially distributed
velocity estimates from a single time step for prediction, rather than a raw
data sequence, which alleviates common problems in handling input time series from
multiple sensors. Furthermore, convolutional neural networks have the inherent
characteristic of using context information, enabling the implicit modeling of
road user interaction. Pixel-wise balancing is applied in the loss function
counteracting the extreme imbalance between static and dynamic cells. One of
the major advantages is the unsupervised learning character due to fully
automatic label generation. The presented algorithm is trained and evaluated on
multiple hours of recorded sensor data and compared to Monte Carlo simulation.
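The pixel-wise balancing mentioned above can be sketched as a weighted cross-entropy over grid cells, where the rare dynamic cells are up-weighted relative to static ones. The function name, weight value, and exact weighting scheme below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def balanced_cell_loss(pred, target, dynamic_weight=50.0):
    """Pixel-wise balanced binary cross-entropy over an occupancy grid.

    pred, target: 2D arrays in [0, 1]; target marks dynamic cells with 1.
    dynamic_weight: up-weights the rare dynamic cells (illustrative value).
    """
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    # Per-cell weights: dynamic cells count more than static ones.
    weights = np.where(target > 0.5, dynamic_weight, 1.0)
    ce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    # Normalize by the total weight so the loss scale stays comparable.
    return float(np.sum(weights * ce) / np.sum(weights))
```

Without such weighting, a network can minimize the loss by predicting every cell as static, since dynamic cells are a tiny fraction of the grid.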
A Joint 3D-2D based Method for Free Space Detection on Roads
In this paper, we address the problem of road segmentation and free space
detection in the context of autonomous driving. Traditional methods either use
3-dimensional (3D) cues such as point clouds obtained from LIDAR, RADAR or
stereo cameras or 2-dimensional (2D) cues such as lane markings, road
boundaries, and object detection. Typical 3D point clouds do not have enough
resolution to detect fine height differences, such as between road and
pavement. Image-based 2D cues fail on uneven road textures caused by shadows,
potholes, lane markings, or road restoration. We propose a
novel free road space detection technique combining both 2D and 3D cues. In
particular, we use CNN based road segmentation from 2D images and plane/box
fitting on sparse depth data obtained from SLAM as priors to formulate an
energy minimization using a conditional random field (CRF) for road-pixel
classification. While the CNN learns the road texture and is unaffected by
depth boundaries, the 3D information helps in overcoming texture based
classification failures. Finally, we use the obtained road segmentation with
the 3D depth data from monocular SLAM to detect free space for navigation.
Our experiments on the KITTI odometry dataset, the CamVid dataset,
as well as videos captured by us, validate the superiority of the proposed
approach over the state of the art.
Comment: Accepted for publication at IEEE WACV 201
Distributed environmental monitoring
With the increasingly ubiquitous use of web-based technologies in society today, autonomous sensor networks represent the future of large-scale information acquisition for applications ranging from environmental monitoring to in vivo sensing. This chapter presents a range of ongoing projects with an emphasis on environmental sensing; relevant literature pertaining to sensor networks is reviewed, validated sensing applications are described, and the contribution of high-resolution temporal data to better decision-making is discussed.
Hallucinating dense optical flow from sparse lidar for autonomous vehicles
In this paper we propose a novel approach to estimate dense optical flow from sparse lidar data acquired on an autonomous vehicle. This is intended as a drop-in replacement for any image-based optical flow system when images are not reliable, e.g., in adverse weather conditions or at night. In order to infer high-resolution 2D flows from discrete range data, we devise a three-block architecture of multiscale filters that combines multiple intermediate objectives, in both the lidar and image domains. To train this network we introduce a dataset of approximately 20K lidar samples from the KITTI dataset, which we have augmented with a pseudo ground-truth image-based optical flow computed using FlowNet2. We demonstrate the effectiveness of our approach on KITTI, and show that despite using the low-resolution and sparse measurements of the lidar, we can regress dense optical flow maps that are on par with those estimated by image-based methods.
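The sparse input such a network ingests is typically obtained by projecting lidar returns into the camera frame, leaving most pixels empty. The sketch below shows this standard projection step under assumed pinhole intrinsics; it is not the paper's exact preprocessing pipeline:

```python
import numpy as np

def lidar_to_sparse_image(points, K, h, w):
    """Project 3D lidar points (camera coordinates) into a sparse 2D depth
    image using pinhole intrinsics K. Pixels with no return stay at 0.
    """
    depth = np.zeros((h, w), dtype=np.float32)
    for x, y, z in points:
        if z <= 0:  # behind the camera
            continue
        u = int(round(K[0, 0] * x / z + K[0, 2]))
        v = int(round(K[1, 1] * y / z + K[1, 2]))
        if 0 <= u < w and 0 <= v < h:
            # Keep the nearest return if several points hit the same pixel.
            if depth[v, u] == 0 or z < depth[v, u]:
                depth[v, u] = z
    return depth
```

The resulting image is mostly zeros, which is exactly why inferring a dense flow field from it requires learned multiscale filtering rather than per-pixel matching.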
The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems
Scenario-based testing for the safety validation of highly automated vehicles
is a promising approach that is being examined in research and industry. This
approach heavily relies on data from real-world scenarios to derive the
necessary scenario information for testing. Measurement data should be
collected at a reasonable effort, contain naturalistic behavior of road users
and include all data relevant for a description of the identified scenarios in
sufficient quality. However, current measurement methods fail to meet at
least one of these requirements. Thus, we propose a novel method to measure data
from an aerial perspective for scenario-based validation fulfilling the
mentioned requirements. Furthermore, we provide a large-scale naturalistic
vehicle trajectory dataset from German highways called highD. We evaluate the
data in terms of quantity, variety and contained scenarios. Our dataset
consists of 16.5 hours of measurements from six locations with 110 000
vehicles, a total driven distance of 45 000 km and 5600 recorded complete lane
changes. The highD dataset is available online at: http://www.highD-dataset.com
Comment: IEEE International Conference on Intelligent Transportation Systems
(ITSC) 201
A Survey on Datasets for Decision-making of Autonomous Vehicle
Autonomous vehicles (AV) are expected to reshape future transportation
systems, and decision-making is one of the critical modules toward high-level
automated driving. To handle complicated scenarios that rule-based methods
cannot cope with well, data-driven decision-making approaches have attracted
increasing attention. The datasets used to develop data-driven methods
dramatically influence decision-making performance, so a comprehensive
insight into the existing datasets is necessary. From the
aspects of collection sources, driving data can be divided into vehicle,
environment, and driver related data. This study compares the state-of-the-art
datasets of these three categories and summarizes their features including
sensors used, annotation, and driving scenarios. Based on the characteristics
of the datasets, this survey also summarizes the potential applications of
the datasets to various aspects of AV decision-making, assisting researchers
in finding appropriate ones to support their own research. Future trends of
AV dataset development are also summarized.
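The survey's three-way split by collection source (vehicle, environment, and driver related data) can be captured as a small record type for comparing datasets along the listed axes. The field names and example values below are illustrative assumptions, not the survey's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class DataCategory(Enum):
    """The survey's three collection-source categories."""
    VEHICLE = "vehicle"          # ego state: speed, steering, CAN bus
    ENVIRONMENT = "environment"  # surroundings: camera, lidar, maps
    DRIVER = "driver"            # human behavior: gaze, physiological signals

@dataclass
class DatasetRecord:
    """Minimal record for comparing datasets along the survey's axes."""
    name: str
    category: DataCategory
    sensors: list = field(default_factory=list)
    annotated: bool = False
    scenarios: list = field(default_factory=list)

# Example entry (values paraphrased from the highD abstract above).
highd = DatasetRecord("highD", DataCategory.ENVIRONMENT,
                      sensors=["drone camera"], annotated=True,
                      scenarios=["highway", "lane change"])
```

Filtering a list of such records by `category` or `scenarios` is one way a researcher could shortlist datasets matching a particular decision-making task.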