Radar-based Dynamic Occupancy Grid Mapping and Object Detection
Environment modeling utilizing sensor data fusion and object tracking is
crucial for safe automated driving. In recent years, the classical occupancy
grid map approach, which assumes a static environment, has been extended to
dynamic occupancy grid maps, which maintain the possibility of a low-level data
fusion while also estimating the position and velocity distribution of the
dynamic local environment. This paper presents the further development of a
previous approach. To the best of the author's knowledge, there is no
publication about dynamic occupancy grid mapping with subsequent analysis based
only on radar data. Therefore, in this work, the data of multiple radar sensors
are fused, and a grid-based object tracking and mapping method is applied.
Subsequently, the clustering of dynamic areas provides high-level object
information. For comparison, a lidar-based method is also developed. The
approach is evaluated qualitatively and quantitatively with real-world data
from a moving vehicle in urban environments. The evaluation illustrates the
advantages of the radar-based dynamic occupancy grid map, considering different
comparison metrics.
Comment: Accepted to be published as part of the 23rd IEEE International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, September 20-23, 2020.
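To make the clustering step described above more concrete, the following Python sketch groups dynamic grid cells into object hypotheses. It is not the implementation from the paper: it assumes per-cell occupancy and velocity estimates are already available from the grid-based tracker, and the thresholds, function name, and use of connected-component labelling are illustrative assumptions.

# Illustrative sketch: extracting object-level clusters from a dynamic occupancy grid.
# Cell velocities are assumed to come from a grid-based tracker (not shown here).
import numpy as np
from scipy import ndimage

def extract_dynamic_objects(occupancy, vel_x, vel_y,
                            occ_thresh=0.7, speed_thresh=1.0):
    """Cluster occupied cells with significant velocity into object hypotheses.

    occupancy, vel_x, vel_y: 2-D arrays over the grid (same shape).
    Returns a list of dicts with centroid, mean velocity, and cell count.
    """
    speed = np.hypot(vel_x, vel_y)
    dynamic_mask = (occupancy > occ_thresh) & (speed > speed_thresh)

    # Connected-component labelling groups adjacent dynamic cells into clusters.
    labels, n_clusters = ndimage.label(dynamic_mask)

    objects = []
    for k in range(1, n_clusters + 1):
        cells = np.argwhere(labels == k)
        objects.append({
            "centroid": cells.mean(axis=0),            # grid coordinates
            "velocity": (vel_x[labels == k].mean(),    # averaged over the cluster
                         vel_y[labels == k].mean()),
            "n_cells": len(cells),
        })
    return objects

The returned centroids and mean cluster velocities are the kind of high-level object information the abstract refers to; the dynamic occupancy grid itself would supply the occupancy and velocity arrays.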
Fully Convolutional Neural Networks for Dynamic Object Detection in Grid Maps
Grid maps are widely used in robotics to represent obstacles in the
environment and differentiating dynamic objects from static infrastructure is
essential for many practical applications. In this work, we present a method
that uses a deep convolutional neural network (CNN) to infer whether grid cells
cover a moving object or not. Compared to tracking approaches that use,
e.g. a particle filter to estimate grid cell velocities and then make a
decision for individual grid cells based on this estimate, our approach uses
the entire grid map as the input image for a CNN that inspects a larger area around
each cell and thus takes the structural appearance in the grid map into account
to make a decision. Compared to our reference method, our concept yields a
performance increase from 83.9% to 97.2%. A runtime optimized version of our
approach yields similar improvements with an execution time of just 10
milliseconds.
Comment: This is a shorter version of the master's thesis of Florian Piewak and it was accepted at IV 2017.
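As an illustration of per-cell classification with a fully convolutional network, a minimal PyTorch sketch follows. It is not the (runtime-optimized) architecture from the paper; the input channel layout, layer sizes, and framework choice are assumptions.

# Illustrative sketch (not the paper's network): a small fully convolutional
# classifier mapping a multi-channel grid map to a per-cell probability that
# the cell covers a moving object. Channel count and layer sizes are assumed.
import torch
import torch.nn as nn

class DynamicCellFCN(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # Padding keeps the spatial resolution so every cell gets a label.
            nn.Conv2d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),    # 1x1 conv -> per-cell logit
        )

    def forward(self, grid):                    # grid: (B, C, H, W)
        return torch.sigmoid(self.net(grid))    # (B, 1, H, W) moving-object prob.

# Usage: a batch containing one 3-channel 600x600 grid map.
model = DynamicCellFCN(in_channels=3)
probs = model(torch.rand(1, 3, 600, 600))

Because every layer is convolutional and padding preserves resolution, the network can be applied to grid maps of arbitrary size and classifies each cell while still seeing the structural appearance of its surroundings.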
End-to-End Tracking and Semantic Segmentation Using Recurrent Neural Networks
In this work we present a novel end-to-end framework for tracking and
classifying a robot's surroundings in complex, dynamic and only partially
observable real-world environments. The approach deploys a recurrent neural
network to filter an input stream of raw laser measurements in order to
directly infer object locations, along with their identity in both visible and
occluded areas. To achieve this, we first train the network using unsupervised
Deep Tracking, a recently proposed theoretical framework for end-to-end space
occupancy prediction. We show that by learning to track on a large amount of
unsupervised data, the network creates a rich internal representation of its
environment which we in turn exploit through the principle of inductive
transfer of knowledge to perform the task of its semantic classification. As a
result, we show that only a small amount of labelled data suffices to steer the
network towards mastering this additional task. Furthermore, we propose a novel
recurrent neural network architecture specifically tailored to tracking and
semantic classification in real-world robotics applications. We demonstrate the
tracking and classification performance of the method on real-world data
collected at a busy road junction. Our evaluation shows that the proposed
end-to-end framework compares favourably to a state-of-the-art, model-free
tracking solution and that it outperforms a conventional one-shot training
scheme for semantic classification.
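The two-head idea, a recurrent filter pretrained on unlabelled data to predict occupancy in the spirit of Deep Tracking plus a semantic head trained afterwards on a small labelled set, can be sketched as follows. The convolutional GRU-style update, layer sizes, and class count are assumptions, not the architecture proposed in the paper.

# Illustrative sketch of the two-head idea: a recurrent network filters a stream
# of occupancy observations; one head outputs an occupancy estimate (the target
# of Deep-Tracking-style unsupervised pretraining), a second head outputs
# per-cell semantic logits from the same hidden state. All sizes are assumed.
import torch
import torch.nn as nn

class RecurrentTracker(nn.Module):
    def __init__(self, hidden=16, n_classes=4):
        super().__init__()
        # Simple convolutional GRU-style update of a spatial hidden state.
        self.gate = nn.Conv2d(1 + hidden, 2 * hidden, 3, padding=1)
        self.cand = nn.Conv2d(1 + hidden, hidden, 3, padding=1)
        self.occ_head = nn.Conv2d(hidden, 1, 1)          # occupancy estimate
        self.sem_head = nn.Conv2d(hidden, n_classes, 1)  # semantic logits
        self.hidden = hidden

    def step(self, obs, h):
        zr = torch.sigmoid(self.gate(torch.cat([obs, h], dim=1)))
        z, r = zr.chunk(2, dim=1)
        h_new = torch.tanh(self.cand(torch.cat([obs, r * h], dim=1)))
        return (1 - z) * h + z * h_new

    def forward(self, seq):                    # seq: (T, B, 1, H, W)
        T, B, _, H, W = seq.shape
        h = seq.new_zeros(B, self.hidden, H, W)
        for t in range(T):                     # filter the observation stream
            h = self.step(seq[t], h)
        return torch.sigmoid(self.occ_head(h)), self.sem_head(h)

# Usage: a sequence of 5 observations of a 64x64 grid.
occ, sem = RecurrentTracker()(torch.rand(5, 1, 1, 64, 64))

In a setup along the lines of the abstract, the occupancy output would first be trained to predict (possibly occluded) space occupancy from unlabelled sequences, and the semantic head would then be trained on a small amount of labelled data while reusing the learned hidden representation.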