16 research outputs found

    Hybrid Sampling Bayesian Occupancy Filter

    No full text
    Modeling and monitoring dynamic environments is a complex task but is crucial in the field of intelligent vehicles. A traditional way of addressing these issues is the modeling of moving objects, through Detection And Tracking of Moving Objects (DATMO) methods. An alternative to a classic object model framework is occupancy grid filtering. Instead of segmenting the scene into objects and tracking them, the environment is represented as a regular occupancy grid, in which each cell is tracked at a sub-object level. The Bayesian Occupancy Filter (BOF) is a generic occupancy grid framework which predicts the spread of spatial occupancy by estimating cell velocity distributions. However, its velocity model, corresponding to a transition histogram per cell, leads to huge data management which in practice makes it hardly compatible with severe computational and hardware constraints, as in many embedded systems. In this paper, we present a new representation for the BOF, describing the environment through a mix of static and dynamic occupancy. This differentiation enables the use of a model adapted to each nature: static occupancy is described in a classic occupancy grid, while dynamic occupancy is modeled by a set of moving particles. Both static and dynamic parts are jointly generated and evaluated, with their distribution over the cells being adjusted. This approach leads to a more compact model and drastically improves the accuracy of the results, in particular in terms of velocities. Experimental results show that the number of values required to model the velocities has been reduced from a typical 900 per cell (for a 30x30 neighborhood) to fewer than 2 per cell on average. This massive data compression makes it possible to plan dedicated embedded devices.
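
    As a rough illustration of the representation this abstract describes (not the paper's implementation), the sketch below keeps static occupancy in a grid and dynamic occupancy in a set of weighted, moving particles; all names, the mixing rule, and the cell size are assumptions.

        # Hypothetical sketch of a hybrid static/dynamic occupancy representation.
        # Not the paper's code; names and the occupancy mixing rule are assumed.
        from dataclasses import dataclass, field
        import numpy as np

        @dataclass
        class Particle:
            x: float   # position (m)
            y: float
            vx: float  # velocity (m/s)
            vy: float
            w: float   # occupancy weight carried by this particle

        @dataclass
        class HybridGrid:
            static_occ: np.ndarray                 # H x W static occupancy probabilities
            particles: list = field(default_factory=list)
            cell_size: float = 0.2                 # metres per cell (assumed)

            def predict(self, dt: float) -> None:
                # Static part stays put; dynamic part moves under a constant-velocity model.
                for p in self.particles:
                    p.x += p.vx * dt
                    p.y += p.vy * dt

            def occupancy(self, i: int, j: int) -> float:
                # Combine static probability with the weight of particles in cell (i, j).
                dyn = sum(p.w for p in self.particles
                          if int(p.y / self.cell_size) == i and int(p.x / self.cell_size) == j)
                return min(1.0, self.static_occ[i, j] + dyn)

    In this kind of layout, per-cell velocity information lives only in the particles rather than in a full transition histogram, which is how the abstract's reduction to fewer than 2 values per cell on average becomes plausible.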

    Radar-based Dynamic Occupancy Grid Mapping and Object Detection

    Full text link
    Environment modeling utilizing sensor data fusion and object tracking is crucial for safe automated driving. In recent years, the classical occupancy grid map approach, which assumes a static environment, has been extended to dynamic occupancy grid maps, which maintain the possibility of low-level data fusion while also estimating the position and velocity distribution of the dynamic local environment. This paper presents the further development of a previous approach. To the best of the author's knowledge, there is no publication about dynamic occupancy grid mapping with subsequent analysis based only on radar data. Therefore, in this work, the data of multiple radar sensors are fused, and a grid-based object tracking and mapping method is applied. Subsequently, the clustering of dynamic areas provides high-level object information. For comparison, a lidar-based method is also developed. The approach is evaluated qualitatively and quantitatively with real-world data from a moving vehicle in urban environments. The evaluation illustrates the advantages of the radar-based dynamic occupancy grid map, considering different comparison metrics. Comment: Accepted to be published as part of the 23rd IEEE International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, September 20-23, 2020
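
    The clustering step this abstract mentions could look roughly like the following sketch, which groups dynamic cells of a grid map into object-level hypotheses with DBSCAN; the feature construction and every threshold are assumptions, not the paper's values.

        # Hypothetical post-processing of a dynamic occupancy grid map into objects.
        # Thresholds, the position+velocity feature scaling, and DBSCAN itself are assumed.
        import numpy as np
        from sklearn.cluster import DBSCAN

        def extract_objects(occ, vx, vy, cell_size=0.2, speed_thresh=0.5, eps=0.6, min_samples=5):
            """occ, vx, vy: H x W arrays of occupancy probability and per-cell velocity (m/s)."""
            ii, jj = np.nonzero((occ > 0.5) & (np.hypot(vx, vy) > speed_thresh))
            if ii.size == 0:
                return []
            feats = np.column_stack([jj * cell_size, ii * cell_size,       # cell centre position
                                     0.5 * vx[ii, jj], 0.5 * vy[ii, jj]])  # velocity, down-weighted
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
            objects = []
            for k in set(labels) - {-1}:                                   # -1 marks DBSCAN noise
                sel = labels == k
                objects.append({"centroid": feats[sel, :2].mean(axis=0),
                                "velocity": (vx[ii[sel], jj[sel]].mean(),
                                             vy[ii[sel], jj[sel]].mean())})
            return objects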

    Hybrid sampling Bayesian Occupancy Filter

    Full text link

    Motion Estimation in Occupancy Grid Maps in Stationary Settings Using Recurrent Neural Networks

    Full text link
    In this work, we tackle the problem of modeling the vehicle environment as a dynamic occupancy grid map in complex urban scenarios using recurrent neural networks. Dynamic occupancy grid maps represent the scene in a bird's eye view, where each grid cell contains the occupancy probability and the two-dimensional velocity. As input data, our approach relies on measurement grid maps, which contain occupancy probabilities generated from lidar measurements. Given this configuration, we propose a recurrent neural network architecture to predict a dynamic occupancy grid map, i.e. filtered occupancy and velocity of each cell, from a sequence of measurement grid maps. Our network architecture contains convolutional long short-term memories in order to sequentially process the input, make use of spatial context, and capture motion. In the evaluation, we quantify improvements in estimating the velocity of braking and turning vehicles compared to the state of the art. Additionally, we demonstrate that our approach provides more consistent velocity estimates for dynamic objects, as well as less erroneous velocity estimates in static areas. Comment: Accepted for presentation at the 2020 International Conference on Robotics and Automation (ICRA), May 31 - June 4, 2020, Paris, France
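
    A rough, hypothetical sketch of a convolutional-LSTM network of the kind this abstract describes, mapping a sequence of measurement grid maps to per-cell occupancy and 2-D velocity; the layer sizes, losses, and grid dimensions are assumptions, not the authors' architecture.

        # Sketch of a ConvLSTM-based dynamic occupancy grid predictor (assumed configuration).
        import tensorflow as tf

        SEQ_LEN, H, W = 10, 128, 128                                   # assumed sequence length and grid size

        inputs = tf.keras.Input(shape=(SEQ_LEN, H, W, 1))              # occupancy probability per cell
        x = tf.keras.layers.ConvLSTM2D(32, 3, padding="same",
                                       return_sequences=True)(inputs)
        x = tf.keras.layers.ConvLSTM2D(32, 3, padding="same",
                                       return_sequences=False)(x)      # keep only the latest state
        occ = tf.keras.layers.Conv2D(1, 1, activation="sigmoid", name="occupancy")(x)
        vel = tf.keras.layers.Conv2D(2, 1, name="velocity")(x)         # (vx, vy) per cell
        model = tf.keras.Model(inputs, [occ, vel])
        model.compile(optimizer="adam",
                      loss={"occupancy": "binary_crossentropy", "velocity": "mse"})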

    Dynamic Occupancy Grid Mapping with Recurrent Neural Networks

    Full text link
    Modeling and understanding the environment is an essential task for autonomous driving. In addition to the detection of objects, the motion of other road participants is of special interest in complex traffic scenarios. Therefore, we propose to use a recurrent neural network to predict a dynamic occupancy grid map, which divides the vehicle's surroundings into cells, each containing the occupancy probability and a velocity estimate. During training, our network is fed with sequences of measurement grid maps, which encode the lidar measurements of a single time step. Due to the combination of convolutional and recurrent layers, our approach is capable of using spatial and temporal information for the robust detection of the static and dynamic environment. In order to apply our approach with measurements from a moving ego-vehicle, we propose a method for ego-motion compensation that is applicable in neural network architectures with recurrent layers working on different resolutions. In our evaluations, we compare our approach with a state-of-the-art particle-based algorithm on a large publicly available dataset to demonstrate the improved accuracy of velocity estimates and the more robust separation of the environment into static and dynamic areas. Additionally, we show that our proposed method for ego-motion compensation leads to comparable results in scenarios with a stationary and with a moving ego-vehicle. Comment: Accepted for presentation at the 2021 International Conference on Robotics and Automation (ICRA), May 30 - June 5, 2021, Xi'an, China
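
    The ego-motion compensation idea can be sketched as shifting the recurrent state so that each cell keeps referring to the same world location; the version below ignores rotation, handles a single resolution per call, and uses invented names, so it is only an illustration of the concept, not the paper's method.

        # Simplified, hypothetical ego-motion compensation for a recurrent grid state.
        import numpy as np

        def compensate_ego_motion(hidden, dx_m, dy_m, cell_size_m):
            """hidden: (H, W, C) recurrent state; dx_m, dy_m: ego translation in metres."""
            dj = int(round(dx_m / cell_size_m))   # columns to shift (x axis)
            di = int(round(dy_m / cell_size_m))   # rows to shift (y axis)
            shifted = np.roll(hidden, shift=(-di, -dj), axis=(0, 1))
            # Cells that wrapped around from the opposite border carry no valid information; clear them.
            if di > 0:   shifted[-di:, :, :] = 0
            elif di < 0: shifted[:-di, :, :] = 0
            if dj > 0:   shifted[:, -dj:, :] = 0
            elif dj < 0: shifted[:, :-dj, :] = 0
            return shifted

        # Coarser recurrent layers would use the same displacement at their own resolution,
        # e.g. compensate_ego_motion(state_coarse, dx_m, dy_m, cell_size_m * 4) for a 4x-downsampled map.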

    Hybrid Sampling Bayesian Occupancy Filter

    Get PDF