Stochastic Occupancy Grid Map Prediction in Dynamic Scenes
This paper presents two variations of a novel stochastic prediction algorithm
that enables mobile robots to accurately and robustly predict the future state
of complex dynamic scenes. The proposed algorithm uses a variational
autoencoder to predict a range of possible future states of the environment.
The algorithm takes full advantage of the motion of the robot itself, the
motion of dynamic objects, and the geometry of static objects in the scene to
improve prediction accuracy. Three simulated and real-world datasets collected
by different robot models are used to demonstrate that the proposed algorithm
is able to achieve more accurate and robust prediction performance than other
prediction algorithms. Furthermore, a predictive uncertainty-aware planner is
proposed to demonstrate the effectiveness of the proposed predictor in
simulation and real-world navigation experiments. Implementations are open
source at https://github.com/TempleRAIL/SOGMP.
Comment: Accepted by the 7th Annual Conference on Robot Learning (CoRL), 202
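The idea of predicting a range of possible future environment states can be illustrated with a toy Monte Carlo sketch: sample several noisy forward motions of a tracked obstacle and average the samples into a per-cell occupancy probability, whose spread reflects prediction uncertainty. This is a hedged illustration only; the grid size, noise level, constant-velocity motion model, and all names are assumptions, not the paper's VAE-based SOGMP implementation.

```python
import random

GRID = 5            # cells per side (assumed)
random.seed(0)      # deterministic sketch

def sample_future(cell, vel, sigma=0.5):
    """One sampled future cell: constant velocity plus Gaussian noise."""
    i = round(cell[0] + vel[0] + random.gauss(0, sigma))
    j = round(cell[1] + vel[1] + random.gauss(0, sigma))
    # Clamp the sample to the grid boundaries.
    return max(0, min(GRID - 1, i)), max(0, min(GRID - 1, j))

def predict(cell, vel, n_samples=200):
    """Per-cell occupancy probability from Monte Carlo samples."""
    prob = [[0.0] * GRID for _ in range(GRID)]
    for _ in range(n_samples):
        i, j = sample_future(cell, vel)
        prob[i][j] += 1.0 / n_samples
    return prob

# Obstacle at cell (1, 1) moving one cell per step in +i direction.
prob = predict(cell=(1, 1), vel=(1, 0))
```

Averaging many sampled futures is the same intuition behind decoding multiple latent samples from a variational autoencoder: each sample is one plausible future grid, and their aggregate expresses uncertainty.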
Stereo vision-based obstacle avoidance module on 3D point cloud data
This paper deals with building a 3D vision-based obstacle avoidance and navigation system. For an autonomous system to work in real-life conditions, it must be able to gather data about its surrounding environment, interpret those data, and take appropriate action. In particular, it must be able to navigate cluttered, unorganized environments, avoid collision with any present obstacle (defined here as any data with a vertical orientation), and make decisions when the environment is updated. This work proposes a two-step strategy. First, obstacle position and orientation are extracted from point cloud data using plane-based segmentation, and the resulting segments are mapped into an occupancy grid map based on each obstacle point's position relative to the camera; this yields obstacle cluster positions, and the occupancy grid map is recorded for future use and global navigation. Second, the obstacle positions in the grid map are used to plan a navigation path toward the target goal that avoids the obstacle positions, and the path is modified, based on the timed elastic band method, to avoid collision when the environment is updated or the platform's movement is not aligned with the navigation path.
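The occupancy-grid step described above can be sketched minimally: obstacle points are binned into a coarse grid, and the cells along a candidate path are queried for collisions. The cell resolution, grid size, and all names here are assumptions for illustration, not the paper's implementation.

```python
GRID_SIZE = 10      # cells per side (assumed)
CELL_RES = 0.5      # metres per cell (assumed)

def to_cell(x, y):
    """Convert metric coordinates to integer grid indices."""
    return int(x // CELL_RES), int(y // CELL_RES)

def build_grid(points):
    """Mark every cell containing an obstacle point as occupied."""
    grid = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
    for x, y in points:
        i, j = to_cell(x, y)
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            grid[i][j] = 1
    return grid

def path_blocked(grid, cells):
    """Return True if any cell on the candidate path is occupied."""
    return any(grid[i][j] for i, j in cells)

grid = build_grid([(1.2, 1.3), (2.6, 0.4)])
print(path_blocked(grid, [(0, 0), (2, 2)]))  # True: (1.2, 1.3) falls in cell (2, 2)
```

A planner would reject the blocked path and search for cell sequences where `path_blocked` is false; replanning on environment updates then amounts to rebuilding the grid and re-checking the current path.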
Radar-based Dynamic Occupancy Grid Mapping and Object Detection
Environment modeling utilizing sensor data fusion and object tracking is
crucial for safe automated driving. In recent years, the classical occupancy
grid map approach, which assumes a static environment, has been extended to
dynamic occupancy grid maps, which maintain the possibility of a low-level data
fusion while also estimating the position and velocity distribution of the
dynamic local environment. This paper presents the further development of a
previous approach. To the best of the authors' knowledge, there is no
publication on dynamic occupancy grid mapping with subsequent analysis based
only on radar data. Therefore, in this work, the data of multiple radar sensors
are fused, and a grid-based object tracking and mapping method is applied.
Subsequently, clustering of the dynamic areas provides high-level object
information. For comparison, a lidar-based method is also developed. The
approach is evaluated qualitatively and quantitatively with real-world data
from a moving vehicle in urban environments. The evaluation illustrates the
advantages of the radar-based dynamic occupancy grid map, considering different
comparison metrics.
Comment: Accepted to be published as part of the 23rd IEEE International
Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece,
September 20-23, 202
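The "clustering of dynamic areas" step described above can be sketched as connected-component grouping over grid cells whose estimated speed exceeds a threshold: each connected group of dynamic cells becomes one object hypothesis. The threshold, 4-connectivity, and all names are assumptions for illustration, not the paper's method.

```python
from collections import deque

SPEED_THRESH = 0.5  # m/s; cells faster than this count as dynamic (assumed)

def cluster_dynamic_cells(speed_grid):
    """4-connected clustering of cells whose speed exceeds SPEED_THRESH."""
    rows, cols = len(speed_grid), len(speed_grid[0])
    seen = set()
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if speed_grid[r][c] > SPEED_THRESH and (r, c) not in seen:
                # Breadth-first search over adjacent dynamic cells.
                queue, comp = deque([(r, c)]), []
                seen.add((r, c))
                while queue:
                    i, j = queue.popleft()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and speed_grid[ni][nj] > SPEED_THRESH
                                and (ni, nj) not in seen):
                            seen.add((ni, nj))
                            queue.append((ni, nj))
                clusters.append(comp)
    return clusters

speeds = [[0.0, 0.9, 0.8],
          [0.0, 0.0, 0.7],
          [1.2, 0.0, 0.0]]
print(len(cluster_dynamic_cells(speeds)))  # 2: one 3-cell group, one isolated cell
```

Each cluster could then be summarized (centroid, bounding box, mean velocity) to yield the high-level object information the abstract mentions.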
Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping
This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially drivable ground.
Fully Convolutional Neural Networks for Dynamic Object Detection in Grid Maps
Grid maps are widely used in robotics to represent obstacles in the
environment and differentiating dynamic objects from static infrastructure is
essential for many practical applications. In this work, we present a method
that uses a deep convolutional neural network (CNN) to infer whether grid cells
are covering a moving object or not. Compared to tracking approaches that use,
e.g., a particle filter to estimate grid cell velocities and then make a
decision for individual grid cells based on this estimate, our approach uses
the entire grid map as input image for a CNN that inspects a larger area around
each cell and thus takes the structural appearance in the grid map into account
to make a decision. Compared to our reference method, our concept yields a
performance increase from 83.9% to 97.2%. A runtime optimized version of our
approach yields similar improvements with an execution time of just 10
milliseconds.
Comment: This is a shorter version of the master's thesis of Florian Piewak and
it was accepted at IV 201
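The key contrast drawn above, deciding per cell from a larger surrounding area rather than from the cell alone, can be illustrated without a neural network: here a cell is flagged dynamic when enough of its 3x3 neighbourhood changed between two consecutive occupancy grids. The change threshold and all names are assumptions for illustration; the paper itself uses a CNN over the whole grid map.

```python
def dynamic_mask(prev, curr, min_changed=2):
    """Flag occupied cells whose 3x3 neighbourhood shows enough change."""
    rows, cols = len(curr), len(curr[0])
    mask = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count occupancy changes in the (clipped) 3x3 window.
            changed = sum(
                prev[i][j] != curr[i][j]
                for i in range(max(0, r - 1), min(rows, r + 2))
                for j in range(max(0, c - 1), min(cols, c + 2))
            )
            mask[r][c] = int(curr[r][c] == 1 and changed >= min_changed)
    return mask

prev = [[0, 1, 0],
        [0, 0, 0],
        [0, 0, 0]]
curr = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(dynamic_mask(prev, curr)[1][1])  # 1: the object moved into this cell
```

A CNN generalizes this hand-written window rule: its learned filters also aggregate structural context around each cell, which is what lets it outperform purely per-cell velocity estimates.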