12 research outputs found

    Moving Object Detection in the Environment of Mobile Robot

    Get PDF
    This work addresses the detection of moving objects in the environment of a robot that may itself be moving. The environment is represented as a 2D occupancy grid containing only the currently visible surroundings, with no temporal filtering. Motion detection is based on a grid-based particle filter introduced by Tanzmeister et al. in Grid-based Mapping and Tracking in Dynamic Environments using a Uniform Evidential Environment Representation. The system was implemented in the Robot Operating System, which allows the modules the solution is composed of to be reused. The KITTI Visual Odometry dataset, which also provides vehicle poses, was chosen as the source of LiDAR data for the experiments. The point clouds were preprocessed by removing ground points using Loopy Belief Propagation. The implemented detector is able to distinguish moving vehicles in the dataset's sequences. Tests in a simulated environment revealed shortcomings in the detection of large contiguous moving objects.
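    As an illustration of the representation described above, the sketch below rasterizes one ground-filtered LiDAR scan into a 2D occupancy grid that holds only the currently visible scene, with no accumulation over time. This is a hypothetical minimal example, not the thesis's ROS implementation; the grid size, resolution, and input format are assumptions.

```python
# Minimal sketch (not the thesis code): rasterize one ground-filtered LiDAR
# scan into a 2D occupancy grid holding only the currently visible scene.
import numpy as np

def scan_to_occupancy_grid(points_xyz, resolution=0.2, size_m=80.0):
    """Mark cells hit by (non-ground) LiDAR returns around the robot."""
    half = size_m / 2.0
    cells = int(size_m / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint8)  # 0 = free/unknown, 1 = occupied
    xy = points_xyz[:, :2]
    # Keep only points inside the grid window centred on the robot.
    mask = np.all(np.abs(xy) < half, axis=1)
    idx = ((xy[mask] + half) / resolution).astype(int)
    grid[idx[:, 1], idx[:, 0]] = 1
    return grid

# Usage: grid = scan_to_occupancy_grid(ground_filtered_points)  # per scan, no history
```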

    Online learning occupancy grid maps for mobile robots

    Get PDF
    Robot mapping is fundamental to robot navigation and path planning, and a static map is also important for dealing with dynamic environments. Occupancy grid maps are used to represent the environment. This paper focuses on the dependence between grid cells: we assume that if one point of the map is free, then its neighbors are likely to be free as well. This knowledge is encoded in a Markov random field (MRF) that serves as our prior belief about the world; data from range sensors then update this knowledge. By maximizing the posterior distribution of the MRF model, a linear filter is derived. It can be used to filter the noise in observations or static maps, can be implemented online, and is additive if the sensor model is in the log-odds form.
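    A hedged sketch of the two ingredients the abstract describes: the additive log-odds sensor update, and a linear neighborhood filter standing in for the MRF prior that free cells tend to have free neighbors. The kernel weights and inverse sensor model probabilities are illustrative assumptions, not the paper's values.

```python
# Sketch: additive log-odds update plus a linear smoothing filter over the grid.
import numpy as np
from scipy.ndimage import convolve

def log_odds(p):
    return np.log(p / (1.0 - p))

def update(grid_logodds, hit_mask, p_hit=0.7, p_miss=0.4):
    # Additive update: in log-odds form, sensor evidence simply adds per cell.
    grid_logodds += np.where(hit_mask, log_odds(p_hit), log_odds(p_miss))
    return grid_logodds

def mrf_smooth(grid_logodds):
    # Linear filter: each cell is pulled toward its 4-neighborhood mean,
    # encoding the prior that neighboring cells tend to share their state.
    kernel = np.array([[0.00, 0.15, 0.00],
                       [0.15, 0.40, 0.15],
                       [0.00, 0.15, 0.00]])
    return convolve(grid_logodds, kernel, mode="nearest")
```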

    CNN for Very Fast Ground Segmentation in Velodyne LiDAR Data

    Full text link
    This paper presents a novel method for ground segmentation in Velodyne point clouds. We propose an encoding of sparse 3D data from the Velodyne sensor suitable for training a convolutional neural network (CNN). This general-purpose approach is used for segmentation of the sparse point cloud into ground and non-ground points. The LiDAR data are represented as a multi-channel 2D signal where the horizontal axis corresponds to the rotation angle and the vertical axis indexes the channels (i.e., laser beams). Multiple topologies of relatively shallow CNNs (i.e., 3-5 convolutional layers) are trained and evaluated using a manually annotated dataset we prepared. The results show a significant improvement in performance over the state-of-the-art method by Zhang et al. in terms of speed, and minor improvements in terms of accuracy. Comment: ICRA 2018 submission.
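    The encoding the abstract describes can be sketched as follows: the sparse Velodyne cloud is turned into a dense 2D image whose columns are azimuth bins and whose rows are laser rings, with per-cell channels such as range and height. The channel choice and bin counts here are assumptions for illustration, not taken from the paper.

```python
# Sketch: encode a Velodyne scan as a multi-channel 2D signal for a shallow CNN.
import numpy as np

def encode_velodyne(points, rings=64, azimuth_bins=1800):
    """points: (N, 4) array of x, y, z, ring_index."""
    x, y, z, ring = points.T
    azimuth = np.arctan2(y, x)                       # [-pi, pi]
    col = ((azimuth + np.pi) / (2 * np.pi) * azimuth_bins).astype(int) % azimuth_bins
    row = ring.astype(int)
    # Two assumed channels per cell: range and height.
    image = np.zeros((2, rings, azimuth_bins), dtype=np.float32)
    image[0, row, col] = np.sqrt(x**2 + y**2 + z**2)
    image[1, row, col] = z
    return image  # ready to feed a shallow CNN (3-5 conv layers)
```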

    Radar-based Dynamic Occupancy Grid Mapping and Object Detection

    Full text link
    Environment modeling utilizing sensor data fusion and object tracking is crucial for safe automated driving. In recent years, the classical occupancy grid map approach, which assumes a static environment, has been extended to dynamic occupancy grid maps, which maintain the possibility of low-level data fusion while also estimating the position and velocity distribution of the dynamic local environment. This paper presents the further development of a previous approach. To the best of the authors' knowledge, there is no publication about dynamic occupancy grid mapping with subsequent analysis based only on radar data. Therefore, in this work, the data of multiple radar sensors are fused, and a grid-based object tracking and mapping method is applied. Subsequently, the clustering of dynamic areas provides high-level object information. For comparison, a lidar-based method is also developed. The approach is evaluated qualitatively and quantitatively with real-world data from a moving vehicle in urban environments. The evaluation illustrates the advantages of the radar-based dynamic occupancy grid map under different comparison metrics. Comment: Accepted to be published as part of the 23rd IEEE International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, September 20-23, 2020.
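    A possible shape for the high-level step the abstract mentions, clustering dynamic grid cells into object hypotheses, is sketched below. DBSCAN and its parameters are an illustrative choice; the paper's actual clustering method and thresholds may differ.

```python
# Sketch: cluster dynamic cells of a grid into object-level hypotheses.
import numpy as np
from sklearn.cluster import DBSCAN

def extract_objects(velocity_grid, speed_thresh=1.0, cell_size=0.2):
    """velocity_grid: (H, W, 2) per-cell velocity estimates in m/s."""
    speed = np.linalg.norm(velocity_grid, axis=-1)
    dyn_cells = np.argwhere(speed > speed_thresh)       # (M, 2) row/col indices
    if len(dyn_cells) == 0:
        return []
    labels = DBSCAN(eps=3, min_samples=5).fit_predict(dyn_cells)
    objects = []
    for k in set(labels) - {-1}:                        # -1 marks noise cells
        cells = dyn_cells[labels == k]
        centroid = cells.mean(axis=0) * cell_size       # metres, grid frame
        mean_vel = velocity_grid[cells[:, 0], cells[:, 1]].mean(axis=0)
        objects.append({"centroid": centroid, "velocity": mean_vel})
    return objects
```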

    Motion Estimation in Occupancy Grid Maps in Stationary Settings Using Recurrent Neural Networks

    Full text link
    In this work, we tackle the problem of modeling the vehicle environment as a dynamic occupancy grid map in complex urban scenarios using recurrent neural networks. Dynamic occupancy grid maps represent the scene in a bird's-eye view, where each grid cell contains the occupancy probability and the two-dimensional velocity. As input data, our approach relies on measurement grid maps, which contain occupancy probabilities generated from lidar measurements. Given this configuration, we propose a recurrent neural network architecture to predict a dynamic occupancy grid map, i.e. the filtered occupancy and velocity of each cell, from a sequence of measurement grid maps. Our network architecture contains convolutional long short-term memories in order to sequentially process the input, make use of spatial context, and capture motion. In the evaluation, we quantify improvements in estimating the velocity of braking and turning vehicles compared to the state-of-the-art. Additionally, we demonstrate that our approach provides more consistent velocity estimates for dynamic objects, as well as less erroneous velocity estimates in static areas. Comment: Accepted for presentation at the 2020 International Conference on Robotics and Automation (ICRA), May 31 - June 4, 2020, Paris, France.
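    A minimal ConvLSTM cell, assuming PyTorch, illustrates the building block the abstract names (convolutional long short-term memories over grid-map sequences). The gate layout is the standard formulation; the layer sizes and the surrounding architecture are not taken from the paper.

```python
# Sketch: a standard ConvLSTM cell for sequentially processing grid maps.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        # One convolution produces all four gates at once.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Usage: feed a sequence of measurement grid maps (B, 1, H, W) step by step,
# then decode the final hidden state into occupancy and 2D velocity per cell.
```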

    Dynamic Occupancy Grid Mapping with Recurrent Neural Networks

    Full text link
    Modeling and understanding the environment is an essential task for autonomous driving. In addition to the detection of objects, in complex traffic scenarios the motion of other road participants is of special interest. Therefore, we propose to use a recurrent neural network to predict a dynamic occupancy grid map, which divides the vehicle's surroundings into cells, each containing the occupancy probability and a velocity estimate. During training, our network is fed with sequences of measurement grid maps, which encode the lidar measurements of a single time step. Due to the combination of convolutional and recurrent layers, our approach is capable of using spatial and temporal information for the robust detection of the static and dynamic environment. In order to apply our approach with measurements from a moving ego-vehicle, we propose a method for ego-motion compensation that is applicable in neural network architectures with recurrent layers working on different resolutions. In our evaluations, we compare our approach with a state-of-the-art particle-based algorithm on a large publicly available dataset to demonstrate the improved accuracy of velocity estimates and the more robust separation of the environment into static and dynamic areas. Additionally, we show that our proposed method for ego-motion compensation leads to comparable results in scenarios with a stationary and with a moving ego-vehicle. Comment: Accepted for presentation at the 2021 International Conference on Robotics and Automation (ICRA), May 30 - June 5, 2021, Xi'an, China.
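    The ego-motion compensation idea can be sketched as resampling a grid-shaped hidden state under the vehicle's motion, so that cells keep referring to the same world locations between steps. The rigid 2D transform below is an assumption for illustration; the paper applies its own scheme across recurrent layers at different resolutions.

```python
# Sketch: shift/rotate a grid-shaped recurrent state to undo ego-motion.
import numpy as np
from scipy.ndimage import affine_transform

def compensate_ego_motion(state, dx_cells, dy_cells, dyaw_rad):
    """state: (C, H, W) hidden state; shift and rotation given in grid units."""
    c, s = np.cos(dyaw_rad), np.sin(dyaw_rad)
    rot = np.array([[c, -s], [s, c]])
    offset = np.array([dy_cells, dx_cells])
    out = np.empty_like(state)
    for ch in range(state.shape[0]):   # same rigid transform for every channel
        out[ch] = affine_transform(state[ch], rot, offset=offset, order=1)
    return out
```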