Object Detection and Classification in Occupancy Grid Maps using Deep Convolutional Networks
Detailed environment perception is a crucial component of automated
vehicles. However, to cope with the amount of perceived information, we also
require segmentation strategies. Based on a grid map environment
representation, well-suited for sensor fusion, free-space estimation and
machine learning, we detect and classify objects using deep convolutional
neural networks. As input for our networks we use a multi-layer grid map
efficiently encoding 3D range sensor information. The inference output consists
of a list of rotated bounding boxes with associated semantic classes. We
conduct extensive ablation studies, highlight important design considerations
when using grid maps and evaluate our models on the KITTI Bird's Eye View
benchmark. Qualitative and quantitative benchmark results show that we achieve
robust detection and state-of-the-art accuracy solely using top-view grid maps
from range sensor data.
Comment: 6 pages, 4 tables, 4 figures
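The multi-layer grid map input described above can be sketched as follows. This is a minimal illustration, not the paper's exact encoding: the layer set (occupancy, maximum height, maximum intensity), the 50 m x 50 m extent, and the 0.5 m cell size are assumptions for the example.

```python
import numpy as np

def encode_grid_map(points, intensities, grid_size=100, cell_m=0.5):
    """Encode a lidar point cloud as a multi-layer top-view grid map.

    points: (N, 3) array of x, y, z coordinates in the ego frame (meters).
    intensities: (N,) array of reflectance values.
    Returns a (3, grid_size, grid_size) array with occupancy,
    max-height, and max-intensity layers.
    """
    grid = np.zeros((3, grid_size, grid_size), dtype=np.float32)
    half = grid_size * cell_m / 2.0
    # Map metric coordinates to cell indices, origin at the grid center.
    ix = ((points[:, 0] + half) / cell_m).astype(int)
    iy = ((points[:, 1] + half) / cell_m).astype(int)
    inside = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
    for x, y, z, r in zip(ix[inside], iy[inside],
                          points[inside, 2], intensities[inside]):
        grid[0, y, x] = 1.0                    # occupancy layer
        grid[1, y, x] = max(grid[1, y, x], z)  # max-height layer
        grid[2, y, x] = max(grid[2, y, x], r)  # max-intensity layer
    return grid
```

The resulting tensor can be fed to a convolutional detector just like a multi-channel image, which is what makes this representation convenient for standard CNN architectures.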
Motion Estimation in Occupancy Grid Maps in Stationary Settings Using Recurrent Neural Networks
In this work, we tackle the problem of modeling the vehicle environment as a
dynamic occupancy grid map in complex urban scenarios using recurrent neural
networks. Dynamic occupancy grid maps represent the scene in a bird's eye view,
where each grid cell contains the occupancy probability and the two dimensional
velocity. As input data, our approach relies on measurement grid maps, which
contain occupancy probabilities, generated with lidar measurements. Given this
configuration, we propose a recurrent neural network architecture to predict a
dynamic occupancy grid map, i.e. filtered occupancy and velocity of each cell,
by using a sequence of measurement grid maps. Our network architecture contains
convolutional long short-term memory (ConvLSTM) layers to sequentially process
the input, make use of spatial context, and capture motion. In the evaluation, we
quantify improvements in estimating the velocity of braking and turning
vehicles compared to the state-of-the-art. Additionally, we demonstrate that
our approach provides more consistent velocity estimates for dynamic objects,
as well as fewer erroneous velocity estimates in static areas.
Comment: Accepted for presentation at the 2020 International Conference on
Robotics and Automation (ICRA), May 31 - June 4, 2020, Paris, France
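A single ConvLSTM update of the kind this abstract relies on can be sketched as follows. The 3x3 kernel, state width, and gate layout here are generic assumptions for illustration, not the authors' exact architecture.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def conv2d_same(x, w):
    """Naive 3x3 'same' convolution. x: (Cin, H, W), w: (Cout, Cin, 3, 3)."""
    cout, cin, _, _ = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((cout, h, wd))
    for o in range(cout):
        for c in range(cin):
            for i in range(3):
                for j in range(3):
                    out[o] += w[o, c, i, j] * xp[c, i:i + h, j:j + wd]
    return out

def convlstm_step(x, h, c, w):
    """One ConvLSTM update over a grid map.

    x: (Cx, H, W) measurement grid, h/c: (Ch, H, W) hidden/cell states,
    w: (4*Ch, Cx+Ch, 3, 3) stacked gate kernels (input, forget, output, cand.).
    """
    ch = h.shape[0]
    # All four gates are computed by convolving the concatenated input
    # and hidden state, so each cell sees its spatial neighborhood.
    z = conv2d_same(np.concatenate([x, h], axis=0), w)
    i = sigmoid(z[:ch])          # input gate
    f = sigmoid(z[ch:2 * ch])    # forget gate
    o = sigmoid(z[2 * ch:3 * ch])  # output gate
    g = np.tanh(z[3 * ch:])      # candidate state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Iterating this step over a sequence of measurement grid maps lets the hidden state accumulate per-cell motion evidence, which is the mechanism both abstracts exploit for velocity estimation.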
Dynamic Occupancy Grid Mapping with Recurrent Neural Networks
Modeling and understanding the environment is an essential task for
autonomous driving. In addition to the detection of objects, in complex traffic
scenarios the motion of other road participants is of special interest.
Therefore, we propose to use a recurrent neural network to predict a dynamic
occupancy grid map, which divides the vehicle surrounding in cells, each
containing the occupancy probability and a velocity estimate. During training,
our network is fed with sequences of measurement grid maps, which encode the
lidar measurements of a single time step. Due to the combination of
convolutional and recurrent layers, our approach is capable of using spatial
and temporal information for the robust detection of the static and dynamic
environment. In order to apply our approach to measurements from a moving
ego-vehicle, we propose a method for ego-motion compensation that is applicable
in neural network architectures with recurrent layers working on different
resolutions. In our evaluations, we compare our approach with a
state-of-the-art particle-based algorithm on a large publicly available dataset
to demonstrate the improved accuracy of velocity estimates and the more robust
separation of the environment into static and dynamic areas. Additionally, we show
that our proposed method for ego-motion compensation leads to comparable
results in scenarios with a stationary and with a moving ego-vehicle.
Comment: Accepted for presentation at the 2021 International Conference on
Robotics and Automation (ICRA), May 30 - June 5, 2021, Xi'an, China
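The ego-motion compensation step can be illustrated by warping a grid from the previous ego frame into the current one. This is a hypothetical single-resolution sketch with nearest-neighbour resampling and unknown cells set to 0.5, not the paper's method for recurrent layers at multiple resolutions.

```python
import numpy as np

def compensate_ego_motion(grid, dx, dy, dyaw, cell_m=0.5):
    """Warp a (H, W) grid from the previous ego frame into the current one.

    dx, dy: ego translation in meters, dyaw: ego rotation in radians.
    Uses inverse nearest-neighbour mapping: for every cell of the current
    grid, look up the matching cell in the previous grid.
    """
    h, w = grid.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Metric coordinates of current cells, origin at the grid center.
    cx = (xs - w / 2.0 + 0.5) * cell_m
    cy = (ys - h / 2.0 + 0.5) * cell_m
    # Express current-frame coordinates in the previous ego frame.
    cos, sin = np.cos(dyaw), np.sin(dyaw)
    px = cos * cx - sin * cy + dx
    py = sin * cx + cos * cy + dy
    sx = np.round(px / cell_m + w / 2.0 - 0.5).astype(int)
    sy = np.round(py / cell_m + h / 2.0 - 0.5).astype(int)
    out = np.full_like(grid, 0.5)  # unknown occupancy outside the old map
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ok] = grid[sy[ok], sx[ok]]
    return out
```

Applying such a warp to the recurrent hidden state before each update keeps static structure aligned across time steps, so the network does not misread ego motion as object motion.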