911 research outputs found
Motion Estimation in Occupancy Grid Maps in Stationary Settings Using Recurrent Neural Networks
In this work, we tackle the problem of modeling the vehicle environment as a
dynamic occupancy grid map in complex urban scenarios using recurrent neural
networks. Dynamic occupancy grid maps represent the scene in a bird's eye view,
where each grid cell contains the occupancy probability and the two dimensional
velocity. As input data, our approach relies on measurement grid maps, which
contain occupancy probabilities, generated with lidar measurements. Given this
configuration, we propose a recurrent neural network architecture to predict a
dynamic occupancy grid map, i.e. filtered occupancy and velocity of each cell,
by using a sequence of measurement grid maps. Our network architecture contains
convolutional long short-term memory layers in order to sequentially process
the input, make use of spatial context, and capture motion. In the evaluation, we
quantify improvements in estimating the velocity of braking and turning
vehicles compared to the state-of-the-art. Additionally, we demonstrate that
our approach provides more consistent velocity estimates for dynamic objects,
as well as less erroneous velocity estimates in static areas.

Comment: Accepted for presentation at the 2020 International Conference on
Robotics and Automation (ICRA), May 31 - June 4, 2020, Paris, France
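The measurement grid maps used as network input can be sketched roughly as follows: a minimal toy example of rasterizing 2D lidar endpoints into a bird's-eye-view occupancy grid. The cell size, grid extent, probability values, and function name are illustrative assumptions, not details from the paper, and the inverse sensor model is deliberately simplistic (hits only, no free-space ray casting).

```python
import numpy as np

def lidar_to_measurement_grid(points_xy, grid_size=100, cell_size=0.5):
    """Rasterize 2D lidar endpoints into a bird's-eye-view occupancy
    measurement grid centered on the ego-vehicle.

    points_xy : (N, 2) array of lidar endpoints in ego coordinates [m].
    Returns a (grid_size, grid_size) array of occupancy probabilities:
    0.5 = unknown prior, 0.9 where a lidar endpoint falls (toy values).
    """
    grid = np.full((grid_size, grid_size), 0.5)  # unknown prior everywhere
    half = grid_size * cell_size / 2.0
    # map metric ego coordinates to integer cell indices
    idx = np.floor((points_xy + half) / cell_size).astype(int)
    inside = np.all((idx >= 0) & (idx < grid_size), axis=1)
    rows, cols = idx[inside, 1], idx[inside, 0]
    grid[rows, cols] = 0.9  # simple inverse sensor model: hit -> likely occupied
    return grid

scan = np.array([[2.0, 0.0], [0.0, 3.0], [-4.0, -4.0]])
grid = lidar_to_measurement_grid(scan)
```

A sequence of such single-time-step grids, one per lidar scan, is what the recurrent network consumes.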
Dynamic Occupancy Grid Mapping with Recurrent Neural Networks
Modeling and understanding the environment is an essential task for
autonomous driving. In addition to the detection of objects, the motion of
other road participants is of special interest in complex traffic scenarios.
Therefore, we propose to use a recurrent neural network to predict a dynamic
occupancy grid map, which divides the vehicle's surroundings into cells, each
containing the occupancy probability and a velocity estimate. During training,
our network is fed with sequences of measurement grid maps, which encode the
lidar measurements of a single time step. Due to the combination of
convolutional and recurrent layers, our approach is capable of using spatial
and temporal information for the robust detection of the static and dynamic
environment. In order to apply our approach with measurements from a moving
ego-vehicle, we propose a method for ego-motion compensation that is applicable
in neural network architectures with recurrent layers working on different
resolutions. In our evaluations, we compare our approach with a
state-of-the-art particle-based algorithm on a large publicly available dataset
to demonstrate the improved accuracy of velocity estimates and the more robust
separation of the environment into static and dynamic areas. Additionally, we
show that our proposed method for ego-motion compensation leads to comparable
results in scenarios with a stationary and a moving ego-vehicle.

Comment: Accepted for presentation at the 2021 International Conference on
Robotics and Automation (ICRA), May 30 - June 5, 2021, Xi'an, China
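The core idea behind ego-motion compensation for grid-aligned recurrent states, shifting the state grid opposite to the ego displacement so each cell keeps referring to the same world location, can be sketched as follows. This is a generic pure-translation, integer-cell-shift sketch, not the paper's method; the function and parameter names are assumptions.

```python
import numpy as np

def compensate_ego_motion(state, dx_m, dy_m, cell_size=0.5, fill=0.0):
    """Shift a grid-aligned recurrent state opposite to the ego-motion
    so each cell keeps representing the same world location.

    state : (H, W) hidden-state grid; row axis = forward, col axis = left.
    dx_m, dy_m : ego translation since the last time step, in meters.
    Cells that newly enter the field of view are reset to `fill`.
    """
    shift_r = -int(round(dx_m / cell_size))
    shift_c = -int(round(dy_m / cell_size))
    shifted = np.roll(state, (shift_r, shift_c), axis=(0, 1))
    # np.roll wraps around; overwrite the wrapped border with the fill value
    if shift_r > 0:
        shifted[:shift_r, :] = fill
    elif shift_r < 0:
        shifted[shift_r:, :] = fill
    if shift_c > 0:
        shifted[:, :shift_c] = fill
    elif shift_c < 0:
        shifted[:, shift_c:] = fill
    return shifted

state = np.arange(16, dtype=float).reshape(4, 4)
out = compensate_ego_motion(state, dx_m=0.5, dy_m=0.0)  # ego moved one cell forward
```

For recurrent layers operating at different grid resolutions, the same shift would be applied with each layer's own cell size; rotation requires resampling rather than a pure roll.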
End-to-End Tracking and Semantic Segmentation Using Recurrent Neural Networks
In this work we present a novel end-to-end framework for tracking and
classifying a robot's surroundings in complex, dynamic and only partially
observable real-world environments. The approach deploys a recurrent neural
network to filter an input stream of raw laser measurements in order to
directly infer object locations, along with their identity in both visible and
occluded areas. To achieve this we first train the network using unsupervised
Deep Tracking, a recently proposed theoretical framework for end-to-end space
occupancy prediction. We show that by learning to track on a large amount of
unsupervised data, the network creates a rich internal representation of its
environment which we in turn exploit through the principle of inductive
transfer of knowledge to perform the task of its semantic classification. As a
result, we show that only a small amount of labelled data suffices to steer the
network towards mastering this additional task. Furthermore, we propose a novel
recurrent neural network architecture specifically tailored to tracking and
semantic classification in real-world robotics applications. We demonstrate the
tracking and classification performance of the method on real-world data
collected at a busy road junction. Our evaluation shows that the proposed
end-to-end framework compares favourably to a state-of-the-art, model-free
tracking solution and that it outperforms a conventional one-shot training
scheme for semantic classification.
Stochastic Occupancy Grid Map Prediction in Dynamic Scenes
This paper presents two variations of a novel stochastic prediction algorithm
that enables mobile robots to accurately and robustly predict the future state
of complex dynamic scenes. The proposed algorithm uses a variational
autoencoder to predict a range of possible future states of the environment.
The algorithm takes full advantage of the motion of the robot itself, the
motion of dynamic objects, and the geometry of static objects in the scene to
improve prediction accuracy. Three simulated and real-world datasets collected
by different robot models are used to demonstrate that the proposed algorithm
is able to achieve more accurate and robust prediction performance than other
prediction algorithms. Furthermore, a predictive uncertainty-aware planner is
proposed to demonstrate the effectiveness of the proposed predictor in
simulation and real-world navigation experiments. Implementations are open
source at https://github.com/TempleRAIL/SOGMP.

Comment: Accepted by the 7th Annual Conference on Robot Learning (CoRL), 2023
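The variational-autoencoder mechanism behind predicting "a range of possible future states" comes down to sampling several latent codes and decoding each into a candidate future grid. A toy sketch of just the sampling step is below; the posterior parameters and the function are illustrative stand-ins, not the SOGMP networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latents(mu, log_var, n_samples):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).

    mu, log_var : (D,) mean and log-variance a VAE encoder might output
    for one scene. Each sampled z, once passed through a decoder, would
    yield one plausible future occupancy grid.
    """
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal((n_samples, mu.shape[0]))
    return mu + sigma * eps

# stand-in posterior parameters for a single encoded scene
mu = np.zeros(8)
log_var = np.full(8, -1.0)
zs = sample_latents(mu, log_var, n_samples=5)  # 5 candidate futures
```

Drawing multiple samples is what gives a downstream uncertainty-aware planner a spread of futures to plan against, rather than a single point estimate.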
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.

Comment: Submitted to the International Journal of Robotics Research (IJRR),
37 pages
Radar-based Dynamic Occupancy Grid Mapping and Object Detection
Environment modeling utilizing sensor data fusion and object tracking is
crucial for safe automated driving. In recent years, the classical occupancy
grid map approach, which assumes a static environment, has been extended to
dynamic occupancy grid maps, which maintain the possibility of a low-level data
fusion while also estimating the position and velocity distribution of the
dynamic local environment. This paper presents the further development of a
previous approach. To the best of the author's knowledge, there is no
publication about dynamic occupancy grid mapping with subsequent analysis based
only on radar data. Therefore in this work, the data of multiple radar sensors
are fused, and a grid-based object tracking and mapping method is applied.
Subsequently, the clustering of dynamic areas provides high-level object
information. For comparison, a lidar-based method is also developed. The
approach is evaluated qualitatively and quantitatively with real-world data
from a moving vehicle in urban environments. The evaluation illustrates the
advantages of the radar-based dynamic occupancy grid map, considering different
comparison metrics.

Comment: Accepted to be published as part of the 23rd IEEE International
Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece,
September 20-23, 2020
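The step of clustering dynamic grid cells into high-level object hypotheses can be sketched as a 4-connected component search over a binary "dynamic" mask. This is a generic flood-fill illustration, not the paper's specific clustering method.

```python
import numpy as np
from collections import deque

def cluster_dynamic_cells(dynamic_mask):
    """Group 4-connected dynamic cells into clusters (object candidates).

    dynamic_mask : (H, W) boolean array, True where a cell is classified
    as dynamic. Returns a list of clusters, each a list of (row, col).
    """
    h, w = dynamic_mask.shape
    seen = np.zeros_like(dynamic_mask, dtype=bool)
    clusters = []
    for r in range(h):
        for c in range(w):
            if dynamic_mask[r, c] and not seen[r, c]:
                # breadth-first flood fill from an unvisited dynamic cell
                queue, cluster = deque([(r, c)]), []
                seen[r, c] = True
                while queue:
                    cr, cc = queue.popleft()
                    cluster.append((cr, cc))
                    for nr, nc in ((cr - 1, cc), (cr + 1, cc),
                                   (cr, cc - 1), (cr, cc + 1)):
                        if (0 <= nr < h and 0 <= nc < w
                                and dynamic_mask[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            queue.append((nr, nc))
                clusters.append(cluster)
    return clusters

mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
clusters = cluster_dynamic_cells(mask)  # two separate dynamic regions
```

Per-cluster statistics such as centroid and mean cell velocity would then provide the object-level position and velocity information the abstract refers to.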